id | title | text | formulas | url
---|---|---|---|---|
10296 | Einstein–Podolsky–Rosen paradox | Historical critique of quantum mechanics
The Einstein–Podolsky–Rosen (EPR) paradox is a thought experiment proposed by physicists Albert Einstein, Boris Podolsky and Nathan Rosen which argues that the description of physical reality provided by quantum mechanics is incomplete. In a 1935 paper titled "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?", they argued for the existence of "elements of reality" that were not part of quantum theory, and speculated that it should be possible to construct a theory containing these hidden variables. Resolutions of the paradox have important implications for the interpretation of quantum mechanics.
The thought experiment involves a pair of particles prepared in what would later become known as an entangled state. Einstein, Podolsky, and Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is impossible according to the theory of relativity. They invoked a principle, later known as the "EPR criterion of reality", positing that: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." From this, they inferred that the second particle must have a definite value of both position and of momentum prior to either quantity being measured. But quantum mechanics considers these two observables incompatible and thus does not associate simultaneous values for both to any system. Einstein, Podolsky, and Rosen therefore concluded that quantum theory does not provide a complete description of reality.
The "Paradox" paper.
The term "Einstein–Podolsky–Rosen paradox" or "EPR" arose from a paper written in 1934 after Einstein joined the Institute for Advanced Study, having .
The original paper purports to describe what must happen to "two systems I and II, which we permit to interact", and after some time "we suppose that there is no longer any interaction between the two parts." The EPR description involves "two particles, A and B, [which] interact briefly and then move off in opposite directions." According to Heisenberg's uncertainty principle, it is impossible to measure both the momentum and the position of particle B exactly; however, it is possible to measure the exact position of particle A. By calculation, therefore, with the exact position of particle A known, the exact position of particle B can be known. Alternatively, the exact momentum of particle A can be measured, so the exact momentum of particle B can be worked out. As Manjit Kumar writes, "EPR argued that they had proved that ... [particle] B can have simultaneously exact values of position and momentum. ... Particle B has a position that is real and a momentum that is real. EPR appeared to have contrived a means to establish the exact values of "either" the momentum "or" the position of B due to measurements made on particle A, without the slightest possibility of particle B being physically disturbed."
EPR tried to set up a paradox to question the range of true application of quantum mechanics: Quantum theory predicts that both values cannot be known for a particle, and yet the EPR thought experiment purports to show that they must all have determinate values. The EPR paper says: "We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete." The EPR paper ends by saying: "While we have thus shown that the wave function does not provide a complete description of the physical reality, we left open the question of whether or not such a description exists. We believe, however, that such a theory is possible." The 1935 EPR paper condensed the philosophical discussion into a physical argument. The authors claim that given a specific experiment, in which the outcome of a measurement is known before the measurement takes place, there must exist something in the real world, an "element of reality", that determines the measurement outcome. They postulate that these elements of reality are, in modern terminology, local, in the sense that each belongs to a certain point in spacetime. Each element may, again in modern terminology, only be influenced by events which are located in the backward light cone of its point in spacetime (i.e. in the past). These claims are founded on assumptions about nature that constitute what is now known as local realism.
Though the EPR paper has often been taken as an exact expression of Einstein's views, it was primarily authored by Podolsky, based on discussions at the Institute for Advanced Study with Einstein and Rosen. Einstein later expressed to Erwin Schrödinger that, "it did not come out as well as I had originally wanted; rather, the essential thing was, so to speak, smothered by the formalism." Einstein would later go on to present an individual account of his local realist ideas. Shortly before the EPR paper appeared in the "Physical Review," "The New York Times" ran a news story about it, under the headline "Einstein Attacks Quantum Theory". The story, which quoted Podolsky, irritated Einstein, who wrote to the "Times," "Any information upon which the article 'Einstein Attacks Quantum Theory' in your issue of May 4 is based was given to you without authority. It is my invariable practice to discuss scientific matters only in the appropriate forum and I deprecate advance publication of any announcement in regard to such matters in the secular press."
The "Times" story also sought out comment from physicist Edward Condon, who said, "Of course, a great deal of the argument hinges on just what meaning is to be attached to the word 'reality' in physics." The physicist and historian Max Jammer later noted, "[I]t remains a historical fact that the earliest criticism of the EPR paper — moreover, a criticism which correctly saw in Einstein's conception of physical reality the key problem of the whole issue — appeared in a daily newspaper prior to the publication of the criticized paper itself."
Bohr's reply.
The publication of the paper prompted a response by Niels Bohr, which he published in the same journal ("Physical Review"), in the same year, using the same title. (This exchange was only one chapter in a prolonged debate between Bohr and Einstein about the nature of quantum reality.)
He argued that EPR had reasoned fallaciously. Bohr said measurements of position and of momentum are complementary, meaning the choice to measure one excludes the possibility of measuring the other. Consequently, a fact deduced regarding one arrangement of laboratory apparatus could not be combined with a fact deduced by means of the other, and so, the inference of predetermined position and momentum values for the second particle was not valid. Bohr concluded that EPR's "arguments do not justify their conclusion that the quantum description turns out to be essentially incomplete."
Einstein's own argument.
In his own publications and correspondence, Einstein indicated that he was not satisfied with the EPR paper and that Podolsky had authored most of it. He later used a different argument to insist that quantum mechanics is an incomplete theory. He explicitly de-emphasized EPR's attribution of "elements of reality" to the position and momentum of particle B, saying that "I couldn't care less" whether the resulting states of particle B allowed one to predict the position and momentum with certainty.
For Einstein, the crucial part of the argument was the demonstration of nonlocality, that the choice of measurement done in particle A, either position or momentum, would lead to "two different" quantum states of particle B. He argued that, because of locality, the real state of particle B could not depend on which kind of measurement was done in A and that the quantum states therefore cannot be in one-to-one correspondence with the real states. Einstein struggled unsuccessfully for the rest of his life to find a theory that could better comply with his idea of locality.
Later developments.
Bohm's variant.
In 1951, David Bohm proposed a variant of the EPR thought experiment in which the measurements have discrete ranges of possible outcomes, unlike the position and momentum measurements considered by EPR. The EPR–Bohm thought experiment can be explained using electron–positron pairs. Suppose we have a source that emits electron–positron pairs, with the electron sent to destination "A", where there is an observer named Alice, and the positron sent to destination "B", where there is an observer named Bob. According to quantum mechanics, we can arrange our source so that each emitted pair occupies a quantum state called a spin singlet. The particles are thus said to be entangled. This can be viewed as a quantum superposition of two states, which we call state I and state II. In state I, the electron has spin pointing upward along the "z"-axis ("+z") and the positron has spin pointing downward along the "z"-axis (−"z"). In state II, the electron has spin −"z" and the positron has spin +"z". Because it is in a superposition of states, it is impossible without measuring to know the definite state of spin of either particle in the spin singlet.
Alice now measures the spin along the "z"-axis. She can obtain one of two possible outcomes: +"z" or −"z". Suppose she gets +"z". Informally speaking, the quantum state of the system collapses into state I. The quantum state determines the probable outcomes of any measurement performed on the system. In this case, if Bob subsequently measures spin along the "z"-axis, there is 100% probability that he will obtain −"z". Similarly, if Alice gets −"z", Bob will get +"z". There is nothing special about choosing the "z"-axis: according to quantum mechanics the spin singlet state may equally well be expressed as a superposition of spin states pointing in the "x" direction.
Whatever axis their spins are measured along, they are always found to be opposite. In quantum mechanics, the "x"-spin and "z"-spin are "incompatible observables", meaning the Heisenberg uncertainty principle applies to alternating measurements of them: a quantum state cannot possess a definite value for both of these variables. Suppose Alice measures the "z"-spin and obtains "+z", so that the quantum state collapses into state I. Now, instead of measuring the "z"-spin as well, Bob measures the "x"-spin. According to quantum mechanics, when the system is in state I, Bob's "x"-spin measurement will have a 50% probability of producing +"x" and a 50% probability of -"x". It is impossible to predict which outcome will appear until Bob actually "performs" the measurement. Therefore, Bob's positron will have a definite spin when measured along the same axis as Alice's electron, but when measured in the perpendicular axis its spin will be uniformly random. It seems as if information has propagated (faster than light) from Alice's apparatus to make Bob's positron assume a definite spin in the appropriate axis.
Bell's theorem.
In 1964, John Stewart Bell published a paper investigating the puzzling situation at that time: on one hand, the EPR paradox purportedly showed that quantum mechanics was nonlocal, and suggested that a hidden-variable theory could heal this nonlocality. On the other hand, David Bohm had recently developed the first successful hidden-variable theory, but it had a grossly nonlocal character. Bell set out to investigate whether it was indeed possible to solve the nonlocality problem with hidden variables, and found out that first, the correlations shown in both EPR's and Bohm's versions of the paradox could indeed be explained in a local way with hidden variables, and second, that the correlations shown in his own variant of the paradox couldn't be explained by "any" local hidden-variable theory. This second result became known as the Bell theorem.
To understand the first result, consider the following toy hidden-variable theory introduced later by J.J. Sakurai: in it, quantum spin-singlet states emitted by the source are actually approximate descriptions for "true" physical states possessing definite values for the "z"-spin and "x"-spin. In these "true" states, the positron going to Bob always has spin values opposite to the electron going to Alice, but the values are otherwise completely random. For example, the first pair emitted by the source might be "(+"z", −"x") to Alice and (−"z", +"x") to Bob", the next pair "(−"z", −"x") to Alice and (+"z", +"x") to Bob", and so forth. Therefore, if Bob's measurement axis is aligned with Alice's, he will necessarily get the opposite of whatever Alice gets; otherwise, he will get "+" and "−" with equal probability.
Bell showed, however, that such models can only reproduce the singlet correlations when Alice and Bob make measurements on the same axis or on perpendicular axes. As soon as other angles between their axes are allowed, local hidden-variable theories become unable to reproduce the quantum mechanical correlations. This difference, expressed using inequalities known as "Bell's inequalities", is in principle experimentally testable. After the publication of Bell's paper, a variety of experiments to test Bell's inequalities were carried out, notably by the group of Alain Aspect in the 1980s; all experiments conducted to date have found behavior in line with the predictions of quantum mechanics. The present view of the situation is that quantum mechanics flatly contradicts Einstein's philosophical postulate that any acceptable physical theory must fulfill "local realism". The fact that quantum mechanics violates Bell inequalities indicates that any hidden-variable theory underlying quantum mechanics must be non-local; whether this should be taken to imply that quantum mechanics "itself" is non-local is a matter of continuing debate.
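To make the contrast concrete, the following is a minimal simulation sketch (an illustration added here, not part of the article) comparing the singlet correlation predicted by quantum mechanics, −cos θ, with the correlation produced by one simple local hidden-variable model, in the spirit of (though not identical to) the toy model described above: each pair carries a random hidden direction, Alice outputs the sign of its projection on her axis, and Bob outputs the opposite sign for his axis.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantum_corr(theta):
    """Singlet correlation E(a, b) predicted by quantum mechanics for angle theta between axes."""
    return -np.cos(theta)

def lhv_corr(theta, n_pairs=200_000):
    """Correlation from a simple local hidden-variable model: each pair shares a random
    hidden angle lam; Alice outputs sign(cos(lam - a)), Bob outputs -sign(cos(lam - b))."""
    lam = rng.uniform(0.0, 2.0 * np.pi, n_pairs)   # hidden variable, one per pair
    a, b = 0.0, theta                              # measurement axes as angles in a plane
    return np.mean(np.sign(np.cos(lam - a)) * -np.sign(np.cos(lam - b)))

for deg in (0, 22.5, 45, 67.5, 90):
    t = np.radians(deg)
    print(f"{deg:5.1f} deg   QM: {quantum_corr(t):+.3f}   LHV: {lhv_corr(t):+.3f}")
```

The two columns agree at 0° and 90° but differ at intermediate angles (for example, about −0.71 versus −0.50 at 45°), which is exactly the gap that Bell's inequalities quantify.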
Steering.
Inspired by Schrödinger's treatment of the EPR paradox back in 1935, Howard M. Wiseman et al. formalised it in 2007 as the phenomenon of quantum steering. They defined steering as the situation where Alice's measurements on a part of an entangled state "steer" Bob's part of the state. That is, Bob's observations cannot be explained by a "local hidden state" model, where Bob would have a fixed quantum state on his side that is classically correlated with, but otherwise independent of, Alice's.
Locality.
"Locality" has several different meanings in physics. EPR describe the principle of locality as asserting that physical processes occurring at one place should have no immediate effect on the elements of reality at another location. At first sight, this appears to be a reasonable assumption to make, as it seems to be a consequence of special relativity, which states that energy can never be transmitted faster than the speed of light without violating causality; however, it turns out that the usual rules for combining quantum mechanical and classical descriptions violate EPR's principle of locality without violating special relativity or causality. Causality is preserved because there is no way for Alice to transmit messages (i.e., information) to Bob by manipulating her measurement axis. Whichever axis she uses, she has a 50% probability of obtaining "+" and 50% probability of obtaining "−", completely at random; according to quantum mechanics, it is fundamentally impossible for her to influence what result she gets. Furthermore, Bob is able to perform his measurement only "once": there is a fundamental property of quantum mechanics, the no-cloning theorem, which makes it impossible for him to make an arbitrary number of copies of the electron he receives, perform a spin measurement on each, and look at the statistical distribution of the results. Therefore, in the one measurement he is allowed to make, there is a 50% probability of getting "+" and 50% of getting "−", regardless of whether or not his axis is aligned with Alice's.
As a summary, the results of the EPR thought experiment do not contradict the predictions of special relativity. Neither the EPR paradox nor any quantum experiment demonstrates that superluminal signaling is possible; however, the principle of locality appeals powerfully to physical intuition, and Einstein, Podolsky and Rosen were unwilling to abandon it. Einstein derided the quantum mechanical predictions as "spooky action at a distance". The conclusion they drew was that quantum mechanics is not a complete theory.
Mathematical formulation.
Bohm's variant of the EPR paradox can be expressed mathematically using the quantum mechanical formulation of spin. The spin degree of freedom for an electron is associated with a two-dimensional complex vector space "V", with each quantum state corresponding to a vector in that space. The operators corresponding to the spin along the "x", "y", and "z" direction, denoted "Sx", "Sy", and "Sz" respectively, can be represented using the Pauli matrices:
formula_0
where formula_1 is the reduced Planck constant (or the Planck constant divided by 2π).
The eigenstates of "Sz" are represented as
formula_2
and the eigenstates of "Sx" are represented as
formula_3
The vector space of the electron-positron pair is formula_4, the tensor product of the electron's and positron's vector spaces. The spin singlet state is
formula_5
where the two terms on the right hand side are what we have referred to as state I and state II above.
From the above equations, it can be shown that the spin singlet can also be written as
formula_6
where the terms on the right hand side are what we have referred to as state Ia and state IIa.
To illustrate the paradox, we need to show that after Alice's measurement of "Sz" (or "Sx"), Bob's value of "Sz" (or "Sx") is uniquely determined and Bob's value of "Sx" (or "Sz") is uniformly random. This follows from the principles of measurement in quantum mechanics. When "S"z is measured, the system state formula_7 collapses into an eigenvector of "S"z. If the measurement result is "+z", this means that immediately after measurement the system state collapses to
formula_8
Similarly, if Alice's measurement result is −"z", the state collapses to
formula_9
The left hand side of both equations show that the measurement of "S"z on Bob's positron is now determined, it will be −"z" in the first case or +"z" in the second case. The right hand side of the equations show that the measurement of "S"x on Bob's positron will return, in both cases, +"x" or -"x" with probability 1/2 each.
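These statements can be checked numerically. The sketch below (an illustration added here, not part of the article) builds the singlet state from the eigenvectors given above, with ħ set to 1, projects onto the outcome in which Alice obtains +"z", and evaluates Bob's outcome probabilities.

```python
import numpy as np

# S_z = (hbar/2) * sigma_z, in units where hbar = 1
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# Eigenstates of S_z and S_x for a single spin-1/2 particle (the column vectors above)
plus_z, minus_z = np.array([1, 0], complex), np.array([0, 1], complex)
plus_x = np.array([1, 1], complex) / np.sqrt(2)
minus_x = np.array([1, -1], complex) / np.sqrt(2)

# Spin singlet |psi> = (|+z>|-z> - |-z>|+z>) / sqrt(2), electron (x) positron
psi = (np.kron(plus_z, minus_z) - np.kron(minus_z, plus_z)) / np.sqrt(2)

# Alice measures S_z on the electron and obtains +z: project and renormalize
P_alice = np.kron(np.outer(plus_z, plus_z.conj()), np.eye(2))
state = P_alice @ psi
state /= np.linalg.norm(state)

def bob_prob(bob_state):
    """Probability that Bob's positron is found in bob_state after Alice's +z result."""
    proj = np.kron(np.eye(2), np.outer(bob_state, bob_state.conj()))
    return float(np.real(state.conj() @ proj @ state))

print(bob_prob(minus_z))                    # 1.0 -- Bob's S_z value is now determined
print(bob_prob(plus_x), bob_prob(minus_x))  # 0.5 0.5 -- Bob's S_x value is uniformly random
print(np.real(state.conj() @ np.kron(np.eye(2), Sz) @ state))  # -0.5, i.e. <S_z> = -hbar/2 for Bob
```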
See also.
Notes.
References.
| [
{
"math_id": 0,
"text": "\n S_x = \\frac{\\hbar}{2} \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix}, \\quad\n S_y = \\frac{\\hbar}{2} \\begin{bmatrix} 0 & -i \\\\ i & 0 \\end{bmatrix}, \\quad\n S_z = \\frac{\\hbar}{2} \\begin{bmatrix} 1 & 0 \\\\ 0 & -1 \\end{bmatrix},\n"
},
{
"math_id": 1,
"text": "\\hbar"
},
{
"math_id": 2,
"text": "\n \\left|+z\\right\\rangle \\leftrightarrow \\begin{bmatrix}1\\\\0\\end{bmatrix}, \\quad\n \\left|-z\\right\\rangle \\leftrightarrow \\begin{bmatrix}0\\\\1\\end{bmatrix}\n"
},
{
"math_id": 3,
"text": "\n \\left|+x\\right\\rangle \\leftrightarrow \\frac{1}{\\sqrt{2}} \\begin{bmatrix}1\\\\1\\end{bmatrix}, \\quad\n \\left|-x\\right\\rangle \\leftrightarrow \\frac{1}{\\sqrt{2}} \\begin{bmatrix}1\\\\-1\\end{bmatrix}.\n"
},
{
"math_id": 4,
"text": " V \\otimes V "
},
{
"math_id": 5,
"text": "\n \\left|\\psi\\right\\rangle = \\frac{1}{\\sqrt{2}} \\biggl(\n \\left|+z\\right\\rangle \\otimes \\left|-z\\right\\rangle -\n \\left|-z\\right\\rangle \\otimes \\left|+z\\right\\rang\n \\biggr),\n"
},
{
"math_id": 6,
"text": "\n \\left|\\psi\\right\\rangle = -\\frac{1}{\\sqrt{2}} \\biggl(\n \\left|+x\\right\\rangle \\otimes \\left|-x\\right\\rangle -\n \\left|-x\\right\\rangle \\otimes \\left|+x\\right\\rangle\n \\biggr),\n"
},
{
"math_id": 7,
"text": "|\\psi\\rangle"
},
{
"math_id": 8,
"text": " \\left| +z \\right\\rangle \\otimes \\left| -z \\right\\rangle = \\left| +z \\right\\rangle \\otimes \\frac{\\left| +x \\right\\rangle - \\left| -x \\right\\rangle}{\\sqrt2}."
},
{
"math_id": 9,
"text": " \\left|-z\\right\\rangle \\otimes \\left|+z\\right\\rangle = \\left| -z \\right\\rangle \\otimes \\frac{\\left| +x \\right\\rangle + \\left| -x \\right\\rangle}{\\sqrt2}."
}
] | https://en.wikipedia.org/wiki?curid=10296 |
10296766 | Gravity gradiometry | Measurement of variations in Earth's gravitational field
Gravity gradiometry is the study of variations ("anomalies") in the Earth's gravity field via measurements of the spatial gradient of gravitational acceleration. The gravity gradient tensor is a 3x3 tensor representing the partial derivatives, along each coordinate axis, of each of the three components of the acceleration vector (formula_0), totaling 9 scalar quantities:
formula_1
It has dimensions of reciprocal time squared, in units of s−2 (or m ⋅ m−1 ⋅ s−2, i.e. an acceleration per unit of distance).
Gravity gradiometry is used by oil and mineral prospectors to measure the density of the subsurface, effectively by measuring the rate of change of gravitational acceleration due to underlying rock properties. From this information it is possible to build a picture of subsurface anomalies which can then be used to more accurately target oil, gas and mineral deposits. It is also used to image water column density, when locating submerged objects, or determining water depth (bathymetry). Physical scientists use gravimeters to determine the exact size and shape of the earth and they contribute to the gravity compensations applied to inertial navigation systems.
Gravity gradient.
Gravity measurements are a reflection of the earth's gravitational attraction, its centripetal force, tidal accelerations due to the sun, moon, and planets, and other applied forces. Gravity gradiometers measure the spatial derivatives of the gravity vector. The most frequently used and intuitive component is the vertical gravity gradient, "Gzz", which represents the rate of change of vertical gravity ("gz") with height ("z"). It can be deduced by differencing the value of gravity at two points separated by a small vertical distance, l, and dividing by this distance.
formula_2
The two gravity measurements are provided by accelerometers which are matched and aligned to a high level of accuracy.
Units.
The unit of gravity gradient is the eotvos (abbreviated as E), which is equivalent to 10−9 s−2 (or 10−4 mGal/m). A person walking past at a distance of 2 metres would produce a gravity gradient signal of approximately one E. Mountains can give signals of several hundred eotvos.
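These magnitudes are easy to reproduce with a point-mass model: the vertical gradient of a point mass at distance r is 2GM/r3, and differencing gz over a small vertical baseline, as described above, gives essentially the same number. The sketch below is an illustration using standard constants, not taken from the article; it treats both the Earth and a nearby person as point masses.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
EOTVOS = 1e-9      # 1 E = 1e-9 s^-2

def g_z(mass, r):
    """Gravitational acceleration of a point mass at distance r, in m/s^2."""
    return G * mass / r**2

def gzz_by_differencing(mass, r, l=1.0):
    """Vertical gradient estimated as in the text: difference g_z over a small baseline l."""
    return (g_z(mass, r + l / 2) - g_z(mass, r - l / 2)) / l

M_EARTH, R_EARTH = 5.972e24, 6.371e6   # kg, m
print(abs(gzz_by_differencing(M_EARTH, R_EARTH)) / EOTVOS)  # ~3.1e3 E: Earth's free-air gradient
print(2 * G * 70 / 2.0**3 / EOTVOS)                         # ~1.2 E: a ~70 kg person at 2 m
```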
Gravity gradient tensor.
Full tensor gradiometers measure the rate of change of the gravity vector in all three perpendicular directions giving rise to a gravity gradient tensor (Fig 1).
Comparison to gravity.
Being the derivatives of gravity, the spectral power of gravity gradient signals is pushed to higher frequencies. This generally makes the gravity gradient anomaly more localised to the source than the gravity anomaly. The table (below) and graph (Fig 2) compare the "gz" and "Gzz" responses from a point source.
Conversely, gravity measurements have more signal power at low frequency therefore making them more sensitive to regional signals and deeper sources.
Dynamic survey environments (airborne and marine).
The derivative measurement sacrifices the overall energy in the signal, but significantly reduces the noise due to motional disturbance. On a moving platform, the acceleration disturbance measured by the two accelerometers is the same so that when forming the difference, it cancels in the gravity gradient measurement. This is the principal reason for deploying gradiometers in airborne and marine surveys where the acceleration levels are orders of magnitude greater than the signals of interest. The signal to noise ratio benefits most at high frequency (above 0.01 Hz), where the airborne acceleration noise is largest.
Applications.
Gravity gradiometry has predominantly been used to image subsurface geology to aid hydrocarbon and mineral exploration. Over 2.5 million line km have now been surveyed using the technique. The surveys highlight gravity anomalies that can be related to geological features such as salt diapirs, fault systems, reef structures, kimberlite pipes, etc. Other applications include tunnel and bunker detection
and the recent GOCE mission that aims to improve the knowledge of ocean circulation.
Gravity gradiometers.
Lockheed Martin gravity gradiometers.
During the 1970s, as an executive in the US Dept. of Defense, John Brett initiated the development of the gravity gradiometer to support the Trident 2 system. A committee was commissioned to seek commercial applications for the Full Tensor Gradient (FTG) system that was developed by Bell Aerospace (later acquired by Lockheed Martin) and was being deployed on US Navy Trident submarines designed to aid covert navigation. As the Cold War came to a close, the US Navy released the classified technology and opened the door for full commercialization of the technology. The existence of the gravity gradiometer was famously exposed in the film "The Hunt for Red October" released in 1990.
There are two types of Lockheed Martin gravity gradiometers currently in operation: the 3D Full Tensor Gravity Gradiometer (FTG; deployed in either a fixed wing aircraft or a ship) and the FALCON gradiometer (a partial tensor system with 8 accelerometers and deployed in a fixed wing aircraft or a helicopter). The 3D FTG system contains three gravity gradiometry instruments (GGIs), each consisting of two opposing pairs of accelerometers arranged on a spinning disc with measurement direction in the spin direction.
References.
| [
{
"math_id": 0,
"text": "g=[g_x g_y g_z]^T"
},
{
"math_id": 1,
"text": "\nG=\\nabla g = \\begin{bmatrix}\n\\partial{g_x}/\\partial{x} & \\partial{g_x}/\\partial{y} & \\partial{g_x}/\\partial{z}\\\\\n\\partial{g_y}/\\partial{x} & \\partial{g_y}/\\partial{y} & \\partial{g_y}/\\partial{z}\\\\\n\\partial{g_z}/\\partial{x} & \\partial{g_z}/\\partial{y} & \\partial{g_z}/\\partial{z}\n\\end{bmatrix}\n"
},
{
"math_id": 2,
"text": "G_{zz} = {\\partial g_z\\over \\partial z} \\approx {g_z \\bigl(z + \\tfrac \\ell 2 \\bigr) - g_z \\bigl(z - \\tfrac \\ell 2 \\bigr )\\over \\ell}"
}
] | https://en.wikipedia.org/wiki?curid=10296766 |
1029711 | Solar zenith angle | Angle between the zenith and the centre of the Sun's disc
The solar zenith angle is the zenith angle of the sun, i.e., the angle between the sun’s rays and the vertical direction. It is the complement to the solar altitude or solar elevation, which is the altitude angle or elevation angle between the sun’s rays and a horizontal plane. At solar noon, the zenith angle is at a minimum and is equal to latitude minus solar declination angle. This is the basis by which ancient mariners navigated the oceans. Solar zenith angle is normally used in combination with the solar azimuth angle to determine the position of the Sun as observed from a given location on the surface of the Earth.
Formula.
formula_0
where
formula_1 is the solar zenith angle,
formula_2 is the solar altitude angle (solar elevation angle), formula_3,
formula_4 is the hour angle, in the local solar time,
formula_5 is the current declination of the Sun, and
formula_6 is the local latitude.
Derivation of the formula using the subsolar point and vector analysis.
While the formula can be derived by applying the cosine law to the zenith-pole-Sun spherical triangle, spherical trigonometry is a relatively esoteric subject.
By introducing the coordinates of the subsolar point and using vector analysis, the formula can be obtained straightforwardly without the use of spherical trigonometry.
In the Earth-Centered Earth-Fixed (ECEF) geocentric Cartesian coordinate system, let formula_7 and formula_8 be the latitudes and longitudes, or coordinates, of the subsolar point and the observer's point, then the upward-pointing unit vectors at the two points, formula_9 and formula_10, are
formula_11
formula_12
where formula_13, formula_14 and formula_15 are the basis vectors in the ECEF coordinate system.
Now the cosine of the solar zenith angle, formula_16, is simply the dot product of the above two vectors
formula_17
Note that formula_18 is the same as formula_5, the declination of the Sun, and formula_19 is equivalent to formula_20, where formula_4 is the hour angle defined earlier. So the above format is mathematically identical to the one given earlier.
Additionally, the same reference also derives the formula for the solar azimuth angle in a similar fashion without using spherical trigonometry.
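As a concrete illustration of the dot-product form above (a sketch added here, not part of the cited work), the subsolar-point coordinates — the declination and the longitude at which the Sun is overhead — are taken as given inputs:

```python
from math import radians, degrees, sin, cos, acos

def solar_zenith_deg(lat_obs, lon_obs, lat_sub, lon_sub):
    """Solar zenith angle in degrees from the observer's and the subsolar point's
    latitude/longitude, using the dot-product formula derived above."""
    phi_o, phi_s = radians(lat_obs), radians(lat_sub)
    dlon = radians(lon_sub - lon_obs)
    cos_theta = sin(phi_o) * sin(phi_s) + cos(phi_o) * cos(phi_s) * cos(dlon)
    return degrees(acos(max(-1.0, min(1.0, cos_theta))))

# Observer at 40 N, 105 W with the Sun overhead at 23.44 N on the same meridian
# (local solar noon near the June solstice): the zenith angle reduces to the
# latitude difference, about 16.56 degrees.
print(solar_zenith_deg(40.0, -105.0, 23.44, -105.0))
```

With the observer and the subsolar point on the same meridian, the result equals the latitude difference, matching the noon minimum discussed in the next section.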
Minimum and Maximum.
At any given location on any given day, the solar zenith angle, formula_16, reaches its minimum, formula_21, at local solar noon when the hour angle formula_22, or formula_23, namely, formula_24, or formula_25. If formula_26, it is polar night.
And at any given location on any given day, the solar zenith angle, formula_16, reaches its maximum, formula_27, at local midnight when the hour angle formula_28, or formula_29, namely, formula_30, or formula_31. If formula_32, it is polar day.
Caveats.
The calculated values are approximations due to the distinction between common/geodetic latitude and geocentric latitude. However, the two values differ by less than 12 minutes of arc, which is less than the apparent angular radius of the sun.
The formula also neglects the effect of atmospheric refraction.
Applications.
Sunrise/Sunset.
Sunset and sunrise occur (approximately) when the zenith angle is 90°, where the hour angle "h"0 satisfies
formula_33
Precise times of sunset and sunrise occur when the upper limb of the Sun appears, as refracted by the atmosphere, to be on the horizon.
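Ignoring refraction and the Sun's finite disc, i.e. using the 90° condition above, the half-day length follows directly from "h"0; the sketch below (an illustration, not from the article) converts it to hours of daylight.

```python
from math import radians, degrees, tan, acos

def daylight_hours(lat_deg, decl_deg):
    """Approximate day length from cos(h0) = -tan(latitude) * tan(declination).
    Refraction and the Sun's finite disc are ignored; polar cases return 0 or 24."""
    x = -tan(radians(lat_deg)) * tan(radians(decl_deg))
    if x >= 1.0:
        return 0.0            # polar night: the Sun never rises
    if x <= -1.0:
        return 24.0           # polar day: the Sun never sets
    h0 = degrees(acos(x))     # sunrise/sunset hour angle, in degrees
    return 2.0 * h0 / 15.0    # 15 degrees of hour angle per hour

print(daylight_hours(40.0, 23.44))  # ~14.8 h at 40 N near the June solstice
print(daylight_hours(40.0, 0.0))    # 12.0 h at the equinoxes
```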
Albedo.
A weighted daily average zenith angle, used in computing the local albedo of the Earth, is given by
formula_34
where "Q" is the instantaneous irradiance.
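As a rough illustration of how this average can be evaluated (a sketch under an explicitly assumed model, not from the article), one common simplification is to take "Q" proportional to cos "θ"s itself, a top-of-atmosphere approximation, so the weighted average becomes a ratio of two integrals over the daylight hours:

```python
import numpy as np
from math import radians, tan, acos, pi

def weighted_mean_cos_zenith(lat_deg, decl_deg, n=10_000):
    """Insolation-weighted daily mean of cos(theta_s), under the assumption that Q is
    proportional to cos(theta_s) (top-of-atmosphere approximation)."""
    phi, delta = radians(lat_deg), radians(decl_deg)
    x = -tan(phi) * tan(delta)
    if x >= 1.0:
        return float("nan")               # polar night: no daylight to average over
    h0 = pi if x <= -1.0 else acos(x)     # sunrise/sunset hour angle, radians
    h = np.linspace(-h0, h0, n)
    cos_theta = np.sin(phi) * np.sin(delta) + np.cos(phi) * np.cos(delta) * np.cos(h)
    cos_theta = np.clip(cos_theta, 0.0, None)               # keep only above-horizon values
    return float(np.sum(cos_theta**2) / np.sum(cos_theta))  # ratio of the two integrals

print(weighted_mean_cos_zenith(0.0, 0.0))   # 0.785... = pi/4 at the equator at an equinox
```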
Summary of special angles.
For example, the solar elevation angle is 90° at the subsolar point, 0° when the Sun is on the horizon (at sunrise or sunset, neglecting refraction), and negative at night.
An exact calculation is given in position of the Sun. Other approximations exist elsewhere.
References.
| [
{
"math_id": 0,
"text": " \\cos \\theta_s = \\sin \\alpha_s = \\sin \\Phi \\sin \\delta + \\cos \\Phi \\cos \\delta \\cos h"
},
{
"math_id": 1,
"text": "\\theta_s"
},
{
"math_id": 2,
"text": "\\alpha_s"
},
{
"math_id": 3,
"text": "\\alpha_s = 90^\\circ - \\theta_s"
},
{
"math_id": 4,
"text": "h"
},
{
"math_id": 5,
"text": "\\delta"
},
{
"math_id": 6,
"text": "\\Phi"
},
{
"math_id": 7,
"text": "(\\phi_{s}, \\lambda_{s})"
},
{
"math_id": 8,
"text": "(\\phi_{o}, \\lambda_{o})"
},
{
"math_id": 9,
"text": "\\mathbf{S}"
},
{
"math_id": 10,
"text": "\\mathbf{V}_{oz}"
},
{
"math_id": 11,
"text": "\\mathbf{S}=\\cos\\phi_{s}\\cos\\lambda_{s}{\\mathbf i}+\\cos\\phi_{s}\\sin\\lambda_{s}{\\mathbf j}+\\sin\\phi_{s}{\\mathbf k},"
},
{
"math_id": 12,
"text": "\\mathbf{V}_{oz}=\\cos\\phi_{o}\\cos\\lambda_{o}{\\mathbf i}+\\cos\\phi_{o}\\sin\\lambda_{o}{\\mathbf j}+\\sin\\phi_{o}{\\mathbf k}."
},
{
"math_id": 13,
"text": "{\\mathbf i}"
},
{
"math_id": 14,
"text": "{\\mathbf j}"
},
{
"math_id": 15,
"text": "{\\mathbf k}"
},
{
"math_id": 16,
"text": "\\theta_{s}"
},
{
"math_id": 17,
"text": "\\cos\\theta_{s} = \\mathbf{S}\\cdot\\mathbf{V}_{oz} = \\sin\\phi_{o}\\sin\\phi_{s} + \\cos\\phi_{o}\\cos\\phi_{s}\\cos(\\lambda_{s}-\\lambda_{o})."
},
{
"math_id": 18,
"text": "\\phi_{s}"
},
{
"math_id": 19,
"text": "\\lambda_{s}-\\lambda_{o}"
},
{
"math_id": 20,
"text": "-h"
},
{
"math_id": 21,
"text": "\\theta_\\text{min}"
},
{
"math_id": 22,
"text": "h = 0"
},
{
"math_id": 23,
"text": "\\lambda_{s}-\\lambda_{o}=0"
},
{
"math_id": 24,
"text": "\\cos\\theta_\\text{min} = \\cos(|\\phi_{o}-\\phi_{s}|)"
},
{
"math_id": 25,
"text": "\\theta_\\text{min} = |\\phi_{o}-\\phi_{s}|"
},
{
"math_id": 26,
"text": "\\theta_\\text{min} > 90^{\\circ}"
},
{
"math_id": 27,
"text": "\\theta_\\text{max}"
},
{
"math_id": 28,
"text": "h = -180^{\\circ}"
},
{
"math_id": 29,
"text": "\\lambda_{s}-\\lambda_{o}=-180^{\\circ}"
},
{
"math_id": 30,
"text": "\\cos\\theta_\\text{max} = \\cos(180^{\\circ}-|\\phi_{o}+\\phi_{s}|)"
},
{
"math_id": 31,
"text": "\\theta_\\text{max} = 180^{\\circ}-|\\phi_{o}+\\phi_{s}|"
},
{
"math_id": 32,
"text": "\\theta_\\text{max} < 90^{\\circ}"
},
{
"math_id": 33,
"text": "\\cos h_0 = -\\tan \\Phi \\tan \\delta."
},
{
"math_id": 34,
"text": "\\overline{\\cos \\theta_s} = \\frac{\\displaystyle \\int_{-h_0}^{h_0} Q \\cos \\theta_s \\, \\text{d}h}{\\displaystyle \\int_{-h_0}^{h_0} Q \\, \\text{d}h}"
}
] | https://en.wikipedia.org/wiki?curid=1029711 |
10297305 | Hodge structure | Algebraic structure
In mathematics, a Hodge structure, named after W. V. D. Hodge, is an algebraic structure at the level of linear algebra, similar to the one that Hodge theory gives to the cohomology groups of a smooth and compact Kähler manifold. Hodge structures have been generalized for all complex varieties (even if they are singular and non-complete) in the form of mixed Hodge structures, defined by Pierre Deligne (1970). A variation of Hodge structure is a family of Hodge structures parameterized by a manifold, first studied by Phillip Griffiths (1968). All these concepts were further generalized to mixed Hodge modules over complex varieties by Morihiko Saito (1989).
Hodge structures.
Definition of Hodge structures.
A pure Hodge structure of integer weight "n" consists of an abelian group formula_0 and a decomposition of its complexification formula_1 into a direct sum of complex subspaces formula_2, where formula_3, with the property that the complex conjugate of formula_2 is formula_4:
formula_5
formula_6
An equivalent definition is obtained by replacing the direct sum decomposition of formula_1 by the Hodge filtration, a finite decreasing filtration of formula_1 by complex subspaces formula_7 subject to the condition
formula_8
The relation between these two descriptions is given as follows:
formula_9
formula_10
For example, if formula_11 is a compact Kähler manifold, formula_12 is the formula_13-th cohomology group of "X" with integer coefficients, then formula_14 is its formula_13-th cohomology group with complex coefficients and Hodge theory provides the decomposition of formula_1 into a direct sum as above, so that these data define a pure Hodge structure of weight formula_13. On the other hand, the Hodge–de Rham spectral sequence supplies formula_15 with the decreasing filtration by formula_16 as in the second definition.
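As a concrete special case of the Kähler example above (a standard instance, written out here for illustration), let formula_11 be a compact Riemann surface — a smooth projective curve — of genus "g". Its first cohomology carries a pure Hodge structure of weight 1:

```latex
% Weight-1 Hodge structure on the first cohomology of a genus-g compact Riemann surface X
\[
  H_{\mathbb{Z}} = H^1(X,\mathbb{Z}) \cong \mathbb{Z}^{2g},
  \qquad
  H_{\mathbb{Z}} \otimes_{\mathbb{Z}} \mathbb{C} = H^{1,0} \oplus H^{0,1},
\]
\[
  H^{1,0} = H^0(X,\Omega^1_X) \cong \mathbb{C}^{g},
  \qquad
  H^{0,1} = \overline{H^{1,0}} \cong \mathbb{C}^{g},
  \qquad
  F^1 = H^{1,0} \subset F^0 = H.
\]
```

For "g" = 1 (an elliptic curve) this is the classical splitting of the first cohomology into the line spanned by the holomorphic form d"z" and its conjugate, with the Hodge filtration consisting of the single nontrivial step "F"1 = "H"1,0.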
For applications in algebraic geometry, namely, classification of complex projective varieties by their periods, the set of all Hodge structures of weight formula_13 on formula_0 is too big. Using the Riemann bilinear relations, in this case called "Hodge Riemann bilinear relations", it can be substantially simplified. A polarized Hodge structure of weight "n" consists of a Hodge structure formula_17 and a non-degenerate integer bilinear form formula_18 on formula_0 (polarization), which is extended to formula_1 by linearity, and satisfying the conditions:
formula_19
In terms of the Hodge filtration, these conditions imply that
formula_20
where formula_21 is the "Weil operator" on formula_1, given by formula_22 on formula_2.
Yet another definition of a Hodge structure is based on the equivalence between the formula_23-grading on a complex vector space and the action of the circle group U(1). In this definition, an action of the multiplicative group of complex numbers formula_24, viewed as a two-dimensional real algebraic torus, is given on formula_1. This action must have the property that a real number "a" acts by multiplication by the "n"-th power of "a". The subspace formula_2 is the subspace on which formula_26 acts as multiplication by formula_27
"A"-Hodge structure.
In the theory of motives, it becomes important to allow more general coefficients for the cohomology. The definition of a Hodge structure is modified by fixing a Noetherian subring A of the field formula_28 of real numbers, for which formula_29 is a field. Then a pure Hodge A-structure of weight "n" is defined as before, replacing formula_23 with A. There are natural functors of base change and restriction relating Hodge A-structures and B-structures for A a subring of B.
Mixed Hodge structures.
It was noticed by Jean-Pierre Serre in the 1960s based on the Weil conjectures that even singular (possibly reducible) and non-complete algebraic varieties should admit 'virtual Betti numbers'. More precisely, one should be able to assign to any algebraic variety "X" a polynomial "P""X"("t"), called its virtual Poincaré polynomial, with the properties
If "X" is nonsingular and complete (projective), it is the usual Poincaré polynomial, formula_30.
If "Y" is a closed algebraic subset of "X" and "U" = "X" \ "Y", then formula_31.
The existence of such polynomials would follow from the existence of an analogue of Hodge structure in the cohomologies of a general (singular and non-complete) algebraic variety. The novel feature is that the "n"th cohomology of a general variety looks as if it contained pieces of different weights. This led Alexander Grothendieck to his conjectural theory of motives and motivated a search for an extension of Hodge theory, which culminated in the work of Pierre Deligne. He introduced the notion of a mixed Hodge structure, developed techniques for working with them, gave their construction (based on Heisuke Hironaka's resolution of singularities) and related them to the weights on l-adic cohomology, proving the last part of the Weil conjectures.
Example of curves.
To motivate the definition, consider the case of a reducible complex algebraic curve "X" consisting of two nonsingular components, formula_32 and formula_33, which transversally intersect at the points formula_34 and formula_35. Further, assume that the components are not compact, but can be compactified by adding the points formula_36. The first cohomology group of the curve "X" (with compact support) is dual to the first homology group, which is easier to visualize. There are three types of one-cycles in this group. First, there are elements formula_37 representing small loops around the punctures formula_38. Then there are elements formula_39 that are coming from the first homology of the compactification of each of the components. The one-cycle in formula_40 (formula_41) corresponding to a cycle in the compactification of this component, is not canonical: these elements are determined modulo the span of formula_42. Finally, modulo the first two types, the group is generated by a combinatorial cycle formula_43 which goes from formula_34 to formula_35 along a path in one component formula_32 and comes back along a path in the other component formula_33. This suggests that formula_44 admits an increasing filtration
formula_45
whose successive quotients "Wn"/"W""n"−1 originate from the cohomology of smooth complete varieties, hence admit (pure) Hodge structures, albeit of different weights. Further examples can be found in "A Naive Guide to Mixed Hodge Theory".
Definition of mixed Hodge structure.
A mixed Hodge structure on an abelian group formula_0 consists of a finite decreasing filtration "Fp" on the complex vector space "H" (the complexification of formula_0), called the Hodge filtration and a finite increasing filtration "Wi" on the rational vector space formula_46 (obtained by extending the scalars to rational numbers), called the weight filtration, subject to the requirement that the "n"-th associated graded quotient of formula_47 with respect to the weight filtration, together with the filtration induced by "F" on its complexification, is a pure Hodge structure of weight "n", for all integer "n". Here the induced filtration on
formula_48
is defined by
formula_49
One can define a notion of a morphism of mixed Hodge structures, which has to be compatible with the filtrations "F" and "W" and prove the following:
Theorem. "Mixed Hodge structures form an abelian category. The kernels and cokernels in this category coincide with the usual kernels and cokernels in the category of vector spaces, with the induced filtrations."
The total cohomology of a compact Kähler manifold has a mixed Hodge structure, where the "n"th space of the weight filtration "Wn" is the direct sum of the cohomology groups (with rational coefficients) of degree less than or equal to "n". Therefore, one can think of classical Hodge theory in the compact, complex case as providing a double grading on the complex cohomology group, which defines an increasing filtration "Fp" and a decreasing filtration "Wn" that are compatible in certain way. In general, the total cohomology space still has these two filtrations, but they no longer come from a direct sum decomposition. In relation with the third definition of the pure Hodge structure, one can say that a mixed Hodge structure cannot be described using the action of the group formula_50 An important insight of Deligne is that in the mixed case there is a more complicated noncommutative proalgebraic group that can be used to the same effect using Tannakian formalism.
Moreover, the category of (mixed) Hodge structures admits a good notion of tensor product, corresponding to the product of varieties, as well as related concepts of "inner Hom" and "dual object", making it into a Tannakian category. By Tannaka–Krein philosophy, this category is equivalent to the category of finite-dimensional representations of a certain group, which Deligne and Milne have explicitly described. The description of this group was later recast in more geometrical terms. The corresponding (much more involved) analysis for rational pure polarizable Hodge structures has also been carried out.
Mixed Hodge structure in cohomology (Deligne's theorem).
Deligne has proved that the "n"th cohomology group of an arbitrary algebraic variety has a canonical mixed Hodge structure. This structure is functorial, and compatible with the products of varieties ("Künneth isomorphism") and the product in cohomology. For a complete nonsingular variety "X" this structure is pure of weight "n", and the Hodge filtration can be defined through the hypercohomology of the truncated de Rham complex.
The proof roughly consists of two parts, taking care of noncompactness and singularities. Both parts use the resolution of singularities (due to Hironaka) in an essential way. In the singular case, varieties are replaced by simplicial schemes, leading to more complicated homological algebra, and a technical notion of a Hodge structure on complexes (as opposed to cohomology) is used.
Using the theory of motives, it is possible to refine the weight filtration on the cohomology with rational coefficients to one with integral coefficients.
Applications.
The machinery based on the notions of Hodge structure and mixed Hodge structure forms a part of still largely conjectural theory of motives envisaged by Alexander Grothendieck. Arithmetic information for nonsingular algebraic variety "X", encoded by eigenvalue of Frobenius elements acting on its l-adic cohomology, has something in common with the Hodge structure arising from "X" considered as a complex algebraic variety. Sergei Gelfand and Yuri Manin remarked around 1988 in their "Methods of homological algebra", that unlike Galois symmetries acting on other cohomology groups, the origin of "Hodge symmetries" is very mysterious, although formally, they are expressed through the action of the fairly uncomplicated group formula_76 on the de Rham cohomology. Since then, the mystery has deepened with the discovery and mathematical formulation of mirror symmetry.
Variation of Hodge structure.
A variation of Hodge structure is a family of Hodge structures
parameterized by a complex manifold "X". More precisely, a variation of Hodge structure of weight "n" on a complex manifold "X" consists of a locally constant sheaf "S" of finitely generated abelian groups on "X", together with a decreasing Hodge filtration "F" on "S" ⊗ "O""X", subject to the following two conditions:
the filtration "F" varies holomorphically with the point of "X", and
(Griffiths transversality) the natural connection ∇ on "S" ⊗ "O""X" maps formula_77 into formula_78.
Here the natural (flat) connection on "S" ⊗ "O""X" is the one induced by the flat connection on "S" and the flat connection "d" on "O""X"; "O""X" is the sheaf of holomorphic functions on "X", and formula_79 is the sheaf of 1-forms on "X". This natural flat connection is a Gauss–Manin connection ∇ and can be described by the Picard–Fuchs equation.
A variation of mixed Hodge structure can be defined in a similar way, by adding a grading or filtration "W" to "S". Typical examples can be found from algebraic morphisms formula_80. For example,
formula_81
has fibers
formula_82
which are smooth plane curves of genus 10 for formula_83 and degenerate to a singular curve at formula_84 Then, the cohomology sheaves
formula_85
give variations of mixed Hodge structures.
Hodge modules.
Hodge modules are a generalization of variation of Hodge structures on a complex manifold. They can be thought of informally as something like sheaves of Hodge structures on a manifold; the precise definition is rather technical and complicated. There are generalizations to mixed Hodge modules, and to manifolds with singularities.
For each smooth complex variety, there is an abelian category of mixed Hodge modules associated with it. These behave formally like the categories of sheaves over the manifolds; for example, morphisms "f" between manifolds induce the functors "f"_∗, "f"^∗, "f"_!, "f"^! between (derived categories of) mixed Hodge modules, similar to the ones for sheaves.
Notes.
| [
{
"math_id": 0,
"text": "H_{\\Z}"
},
{
"math_id": 1,
"text": "H"
},
{
"math_id": 2,
"text": "H^{p,q}"
},
{
"math_id": 3,
"text": "p+q=n"
},
{
"math_id": 4,
"text": "H^{q,p}"
},
{
"math_id": 5,
"text": "H := H_{\\Z}\\otimes_{\\Z} \\Complex = \\bigoplus\\nolimits_{p+q=n}H^{p,q},"
},
{
"math_id": 6,
"text": "\\overline{H^{p,q}}=H^{q,p}."
},
{
"math_id": 7,
"text": "F^pH (p \\in \\Z),"
},
{
"math_id": 8,
"text": "\\forall p, q \\ : \\ p + q = n+1, \\qquad F^p H\\cap\\overline{F^q H}=0 \\quad \\text{and} \\quad F^p H \\oplus \\overline{F^q H}=H."
},
{
"math_id": 9,
"text": " H^{p,q}=F^p H\\cap \\overline{F^q H},"
},
{
"math_id": 10,
"text": "F^p H= \\bigoplus\\nolimits_{i\\geq p} H^{i,n-i}. "
},
{
"math_id": 11,
"text": "X"
},
{
"math_id": 12,
"text": "H_{\\Z} = H^n (X, \\Z)"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "H = H^n (X, \\Complex)"
},
{
"math_id": 15,
"text": "H^n"
},
{
"math_id": 16,
"text": "F^p H"
},
{
"math_id": 17,
"text": "(H_{\\Z}, H^{p,q})"
},
{
"math_id": 18,
"text": "Q"
},
{
"math_id": 19,
"text": "\\begin{align}\nQ(\\varphi,\\psi) &= (-1)^n Q(\\psi, \\varphi) &&\\text{ for }\\varphi\\in H^{p,q}, \\psi\\in H^{p',q'}; \\\\\nQ(\\varphi,\\psi) &=0 && \\text{ for }\\varphi\\in H^{p,q}, \\psi\\in H^{p',q'}, p\\ne q'; \\\\\ni^{p-q}Q \\left(\\varphi,\\bar{\\varphi} \\right) &>0 && \\text{ for }\\varphi\\in H^{p,q},\\ \\varphi\\ne 0.\n\\end{align}"
},
{
"math_id": 20,
"text": "\\begin{align}\nQ \\left (F^p, F^{n-p+1} \\right ) &=0, \\\\\nQ \\left (C\\varphi,\\bar{\\varphi} \\right ) &>0 && \\text{ for }\\varphi\\ne 0,\n\\end{align}"
},
{
"math_id": 21,
"text": "C"
},
{
"math_id": 22,
"text": "C = i^{p-q}"
},
{
"math_id": 23,
"text": "\\Z"
},
{
"math_id": 24,
"text": "\\Complex^*"
},
{
"math_id": 25,
"text": "\\Complex"
},
{
"math_id": 26,
"text": "z \\in \\Complex^*"
},
{
"math_id": 27,
"text": "z^{\\,p}{\\bar{z}}^{\\,q}."
},
{
"math_id": 28,
"text": "\\R"
},
{
"math_id": 29,
"text": "\\mathbf{A} \\otimes_{\\Z} \\R"
},
{
"math_id": 30,
"text": "P_X(t) = \\sum \\operatorname{rank}(H^n(X))t^n"
},
{
"math_id": 31,
"text": "P_X(t)=P_Y(t)+P_U(t)"
},
{
"math_id": 32,
"text": "X_1"
},
{
"math_id": 33,
"text": "X_2"
},
{
"math_id": 34,
"text": "Q_1"
},
{
"math_id": 35,
"text": "Q_2"
},
{
"math_id": 36,
"text": "P_1, \\dots ,P_n"
},
{
"math_id": 37,
"text": "\\alpha_i"
},
{
"math_id": 38,
"text": "P_i"
},
{
"math_id": 39,
"text": "\\beta_j"
},
{
"math_id": 40,
"text": "X_k \\subset X"
},
{
"math_id": 41,
"text": "k=1,2"
},
{
"math_id": 42,
"text": "\\alpha_1, \\dots ,\\alpha_n"
},
{
"math_id": 43,
"text": "\\gamma"
},
{
"math_id": 44,
"text": "H_1(X)"
},
{
"math_id": 45,
"text": " 0\\subset W_0\\subset W_1 \\subset W_2=H^1(X), "
},
{
"math_id": 46,
"text": "H_{\\Q} = H_{\\Z} \\otimes_{\\Z} \\Q"
},
{
"math_id": 47,
"text": "H_{\\Q}"
},
{
"math_id": 48,
"text": "\\operatorname{gr}_n^{W} H = W_n\\otimes\\Complex /W_{n-1}\\otimes\\Complex"
},
{
"math_id": 49,
"text": " F^p \\operatorname{gr}_n^W H = \\left (F^p\\cap W_n\\otimes\\Complex +W_{n-1} \\otimes \\Complex \\right )/W_{n-1}\\otimes\\Complex."
},
{
"math_id": 50,
"text": "\\Complex^*."
},
{
"math_id": 51,
"text": "\\Z(1)"
},
{
"math_id": 52,
"text": "2\\pi i\\Z"
},
{
"math_id": 53,
"text": "\\Z(1) \\otimes \\Complex = H^{-1,-1}."
},
{
"math_id": 54,
"text": "\\Z(n);"
},
{
"math_id": 55,
"text": "X\\subset \\mathbb{P}^{n+1}"
},
{
"math_id": 56,
"text": "d"
},
{
"math_id": 57,
"text": "f\\in \\Complex [x_0,\\ldots,x_{n+1}]"
},
{
"math_id": 58,
"text": "R(f) = \\frac{\\Complex[x_0,\\ldots,x_{n+1}]}{\\left( \\frac{\\partial f}{\\partial x_0}, \\ldots, \\frac{\\partial f}{\\partial x_{n+1}}\\right)}"
},
{
"math_id": 59,
"text": "H^{p,n-p}(X)_\\text{prim} \\cong R(f)_{(n+1-p)d - n -2}"
},
{
"math_id": 60,
"text": "g = x_0^4 + \\cdots + x_3^4"
},
{
"math_id": 61,
"text": "d = 4"
},
{
"math_id": 62,
"text": "n = 2"
},
{
"math_id": 63,
"text": "\\frac{\\Complex [x_0,x_1,x_2,x_3]}{(x_0^3,x_1^3,x_2^3,x_3^3)}"
},
{
"math_id": 64,
"text": "H^{p,n-p}(X)_{prim} \\cong R(g)_{(2+1 - p)4 - 2 - 2} = R(g)_{4(3-p) - 4}"
},
{
"math_id": 65,
"text": "\n\\begin{align}\nH^{0,2}(X)_\\text{prim} &\\cong R(g)_8 = \\Complex \\cdot x_0^2x_1^2x_2^2x_3^2 \\\\\nH^{1,1}(X)_\\text{prim} &\\cong R(g)_4\\\\\nH^{2,0}(X)_\\text{prim} &\\cong R(g)_0 = \\Complex \\cdot 1\n\\end{align}\n"
},
{
"math_id": 66,
"text": "R(g)_4"
},
{
"math_id": 67,
"text": "\\begin{array}{rrrrrrrr}\nx_0^2 x_1^2, & x_0^2 x_1 x_2, & x_0^2x_1x_3, & x_0^2x_2^2, & x_0^2x_2x_3, & x_0^2x_3^2, & x_0x_1^2x_2, & x_0x_1^2x_3, \\\\\nx_0 x_1 x_2^2, & x_0 x_1 x_2 x_3, & x_0x_1x_3^2, & x_0x_2^2x_3, & x_0x_2x_3^2, & x_1^2x_2^2, & x_1^2x_2x_3, & x_1^2x_3^2, \\\\\nx_1 x_2^2 x_3, & x_1 x_2 x_3^2, & x_2^2x_3^2\n\\end{array}"
},
{
"math_id": 68,
"text": "H^{1,1}(X)"
},
{
"math_id": 69,
"text": "[L]"
},
{
"math_id": 70,
"text": "H^{k,k}(X)"
},
{
"math_id": 71,
"text": "1"
},
{
"math_id": 72,
"text": "x^d + y^d + z^d"
},
{
"math_id": 73,
"text": "g"
},
{
"math_id": 74,
"text": "H^{1,0} \\cong R(f)_{d-3} \\cong \\Complex [x,y,z]_{d-3}"
},
{
"math_id": 75,
"text": " {2 + d - 3 \\choose 2} = {d-1 \\choose 2} = \\frac{(d-1)(d-2)}{2}"
},
{
"math_id": 76,
"text": "R_{\\mathbf {C/R}}{\\mathbf C}^*"
},
{
"math_id": 77,
"text": "F^n"
},
{
"math_id": 78,
"text": "F^{n-1} \\otimes \\Omega^1_X."
},
{
"math_id": 79,
"text": "\\Omega^1_X"
},
{
"math_id": 80,
"text": "f:\\Complex ^n \\to \\Complex "
},
{
"math_id": 81,
"text": "\\begin{cases}\nf:\\Complex ^2 \\to \\Complex \\\\\nf(x,y) = y^6 - x^6\n\\end{cases} "
},
{
"math_id": 82,
"text": "X_t = f^{-1}(\\{t\\}) = \\left \\{(x,y)\\in\\Complex ^2: y^6 - x^6 = t \\right \\}"
},
{
"math_id": 83,
"text": "t\\neq 0"
},
{
"math_id": 84,
"text": "t=0."
},
{
"math_id": 85,
"text": "\\R f_*^i \\left( \\underline{\\Q}_{\\Complex ^2} \\right)"
}
] | https://en.wikipedia.org/wiki?curid=10297305 |
1029755 | Solar azimuth angle | Azimuth angle of the Sun's position
The solar azimuth angle is the azimuth (horizontal angle with respect to north) of the Sun's position. This horizontal coordinate defines the Sun's relative direction along the local horizon, whereas the solar zenith angle (or its complementary angle solar elevation) defines the Sun's apparent altitude.
Conventional sign and origin.
There are several conventions for the solar azimuth; however, it is traditionally defined as the angle between a line due south and the shadow cast by a vertical rod on Earth. This convention states the angle is positive if the shadow is east of south and negative if it is west of south. For example, due east would be 90° and due west would be -90°. Another convention is the reverse; it also has the origin at due south, but measures angles clockwise, so that due east is now negative and west now positive.
However, despite tradition, the most commonly accepted convention for analyzing solar irradiation, e.g. for solar energy applications, is clockwise from due north, so east is 90°, south is 180°, and west is 270°. This is the definition used by NREL in their solar position calculators and is also the convention used in the formulas presented here. However, Landsat photos and other USGS products, while also defining azimuthal angles relative to due north, take counterclockwise angles as negative.
Conventional Trigonometric Formulas.
The following formulas assume the north-clockwise convention. The solar azimuth angle can be calculated to a good approximation with the following formula, however angles should be interpreted with care because the inverse sine, i.e. "x" = sin−1 y or "x" = arcsin "y", has multiple solutions, only one of which will be correct.
formula_0
The following formulas can also be used to approximate the solar azimuth angle, but these formulas use cosine, so the azimuth angle as shown by a calculator will always be positive, and should be interpreted as the angle between zero and 180 degrees when the hour angle, h, is negative (morning) and the angle between 180 and 360 degrees when the hour angle, h, is positive (afternoon). (These two formulas are equivalent if one assumes the "solar elevation angle" approximation formula).
formula_1
So practically speaking, the compass azimuth, which is the practical value used everywhere (for example, in aviation as the so-called course) on a compass (where North is 0 degrees, East is 90 degrees, South is 180 degrees and West is 270 degrees), can be calculated as
formula_2
The formulas use the following terminology:
formula_3 is the solar azimuth angle,
formula_4 is the solar zenith angle,
formula_5 is the hour angle, in the local solar time,
formula_6 is the current declination of the Sun, and
formula_7 is the local latitude.
In addition, dividing the above sine formula by the first cosine formula gives one the tangent formula as is used in "The Nautical Almanac".
The formula based on the "subsolar point" and the atan2 function.
A 2021 publication presents a method that uses a solar azimuth formula based on the subsolar point and the atan2 function, as defined in Fortran 90, that gives an unambiguous solution without the need for circumstantial treatment. The subsolar point is the point on the surface of the Earth where the Sun is overhead.
The method first calculates the declination of the Sun and equation of time using equations from The Astronomical Almanac, then it gives the x-, y- and z-components of the unit vector pointing toward the Sun, through vector analysis rather than spherical trigonometry, as follows:
formula_11
where
It can be shown that formula_19. With the above mathematical setup, the solar zenith angle and solar azimuth angle are simply
formula_20,
formula_21. (South-Clockwise Convention)
where
If one prefers North-Clockwise Convention, or East-Counterclockwise Convention, the formulas are
formula_24, (North-Clockwise Convention)
formula_25. (East-Counterclockwise Convention)
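A minimal sketch of this procedure (an illustration, not taken from the publication) is given below; the declination and the equation of time are assumed to be supplied by an external almanac or ephemeris routine, as described above, and the north-clockwise convention is used for the output.

```python
from math import radians, degrees, sin, cos, acos, atan2

def sun_position(lat_obs, lon_obs, t_gmt_hours, decl_deg, eot_min):
    """Solar zenith and azimuth in degrees (north-clockwise convention), computed from
    the subsolar-point formulas above. decl_deg (declination) and eot_min (equation of
    time, in minutes) are assumed to come from an external almanac routine."""
    phi_s = radians(decl_deg)                                       # subsolar latitude
    lam_s = radians(-15.0 * (t_gmt_hours - 12.0 + eot_min / 60.0))  # subsolar longitude
    phi_o, lam_o = radians(lat_obs), radians(lon_obs)

    sx = cos(phi_s) * sin(lam_s - lam_o)
    sy = cos(phi_o) * sin(phi_s) - sin(phi_o) * cos(phi_s) * cos(lam_s - lam_o)
    sz = sin(phi_o) * sin(phi_s) + cos(phi_o) * cos(phi_s) * cos(lam_s - lam_o)

    zenith = degrees(acos(max(-1.0, min(1.0, sz))))
    azimuth = degrees(atan2(sx, sy)) % 360.0    # atan2(S_x, S_y): north-clockwise
    return zenith, azimuth

# Observer at 40 N, 105 W at 19:00 GMT near an equinox (declination ~ 0, EoT ~ -7 min):
# roughly (40, 177), i.e. the Sun about 40 deg from the zenith and almost due south,
# just before local solar noon.
print(sun_position(40.0, -105.0, 19.0, 0.0, -7.4))
```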
Finally, the values of formula_8, formula_9 and formula_10 at 1-hour steps for an entire year can be presented in a 3D plot as a "wreath of analemmas", a graphic depiction of all possible positions of the Sun in terms of solar zenith angle and solar azimuth angle for any given location. Refer to sun path for similar plots for other locations.
References.
| [
{
"math_id": 0,
"text": "\\sin \\phi_\\mathrm{s} = \\frac{-\\sin h \\cos \\delta}{\\sin \\theta_\\mathrm{s}}."
},
{
"math_id": 1,
"text": "\\begin{align}\n \\cos \\phi_\\mathrm{s} &= \\frac{\\sin \\delta \\cos \\Phi - \\cos h \\cos \\delta \\sin \\Phi}{\\sin \\theta_\\mathrm{s}} \\\\[5pt]\n \\cos \\phi_\\mathrm{s} &= \\frac{\\sin \\delta - \\cos \\theta_\\mathrm{s}\\sin \\Phi}{\\sin \\theta_\\mathrm{s}\\cos \\Phi}.\n \\end{align}"
},
{
"math_id": 2,
"text": "\\text{compass } \\phi_\\mathrm{s} = 360 - \\phi_\\mathrm{s}."
},
{
"math_id": 3,
"text": "\\phi_\\mathrm{s}"
},
{
"math_id": 4,
"text": "\\theta_\\mathrm{s}"
},
{
"math_id": 5,
"text": "h"
},
{
"math_id": 6,
"text": "\\delta"
},
{
"math_id": 7,
"text": "\\Phi"
},
{
"math_id": 8,
"text": "S_{x}"
},
{
"math_id": 9,
"text": "S_{y}"
},
{
"math_id": 10,
"text": "S_{z}"
},
{
"math_id": 11,
"text": "\\begin{align}\n \\phi_{s} &= \\delta, \\\\\n \\lambda_{s} &= -15(T_\\mathrm{GMT}-12+E_\\mathrm{min}/60), \\\\\n S_{x} &= \\cos \\phi_{s} \\sin (\\lambda_{s}-\\lambda_{o}), \\\\\n S_{y} &= \\cos \\phi_{o} \\sin \\phi_{s} - \\sin \\phi_{o} \\cos \\phi_{s} \\cos (\\lambda_{s}-\\lambda_{o}), \\\\\n S_{z} &= \\sin \\phi_{o} \\sin \\phi_{s} + \\cos \\phi_{o} \\cos \\phi_{s} \\cos (\\lambda_{s}-\\lambda_{o}).\n \\end{align}"
},
{
"math_id": 12,
"text": "\\phi_{s}"
},
{
"math_id": 13,
"text": "\\lambda_{s}"
},
{
"math_id": 14,
"text": "T_\\mathrm{GMT}"
},
{
"math_id": 15,
"text": "E_\\mathrm{min}"
},
{
"math_id": 16,
"text": "\\phi_{o}"
},
{
"math_id": 17,
"text": "\\lambda_{o}"
},
{
"math_id": 18,
"text": "S_{x}, S_{y}, S_{z}"
},
{
"math_id": 19,
"text": "S_{x}^{2}+S_{y}^{2}+S_{z}^{2}=1"
},
{
"math_id": 20,
"text": "Z = \\mathrm{acos}(S_{z})"
},
{
"math_id": 21,
"text": "\\gamma_{s} = \\mathrm{atan2}(-S_{x}, -S_{y})"
},
{
"math_id": 22,
"text": "Z"
},
{
"math_id": 23,
"text": "\\gamma_{s}"
},
{
"math_id": 24,
"text": "\\gamma_{s} = \\mathrm{atan2}(S_{x}, S_{y})"
},
{
"math_id": 25,
"text": "\\gamma_{s} = \\mathrm{atan2}(S_{y}, S_{x})"
}
] | https://en.wikipedia.org/wiki?curid=1029755 |
10299080 | Reduced residue system | Set of residue classes modulo n, relatively prime to n
In mathematics, a subset "R" of the integers is called a reduced residue system modulo "n" if: every element of "R" is relatively prime to "n"; "R" contains exactly φ("n") elements; and no two elements of "R" are congruent modulo "n".
Here φ denotes Euler's totient function.
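A reduced residue system is straightforward to compute by filtering a complete residue system for coprimality, as the next paragraph describes. A minimal Python sketch (the function name is illustrative):

```python
from math import gcd

def reduced_residue_system(n):
    """Reduced residue system mod n taken from the complete system {0, ..., n-1}."""
    return [r for r in range(n) if gcd(r, n) == 1]

print(reduced_residue_system(12))       # [1, 5, 7, 11]
print(len(reduced_residue_system(12)))  # 4, which equals phi(12)
```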
A reduced residue system modulo "n" can be formed from a complete residue system modulo "n" by removing all integers not relatively prime to "n". For example, a complete residue system modulo 12 is {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}. The so-called totatives 1, 5, 7 and 11 are the only integers in this set which are relatively prime to 12, and so the corresponding reduced residue system modulo 12 is {1, 5, 7, 11}. The cardinality of this set can be calculated with the totient function: φ(12) = 4. Some other reduced residue systems modulo 12 are {13, 17, 19, 23} and {−11, −7, −5, −1}. If {"r"1, "r"2, ..., "r"φ("n")} is a reduced residue system modulo "n" with "n" > 2, then formula_0. | [
{
"math_id": 0,
"text": "\\sum r_i \\equiv 0\\!\\!\\!\\!\\mod n"
}
] | https://en.wikipedia.org/wiki?curid=10299080 |
1029967 | Hypernucleus | Nucleus which contains at least one hyperon
A hypernucleus is similar to a conventional atomic nucleus, but contains at least one hyperon in addition to the normal protons and neutrons. Hyperons are a category of baryon particles that carry non-zero strangeness quantum number, which is conserved by the strong and electromagnetic interactions.
A variety of reactions give access to depositing one or more units of strangeness in a nucleus. Hypernuclei containing the lightest hyperon, the lambda (Λ), tend to be more tightly bound than normal nuclei, though they can decay via the weak force with a mean lifetime of around . Sigma (Σ) hypernuclei have been sought, as have doubly-strange nuclei containing xi baryons (Ξ) or two Λ's.
Nomenclature.
Hypernuclei are named in terms of their atomic number and baryon number, as in normal nuclei, plus the hyperon(s) which are listed in a left subscript of the symbol, with the caveat that atomic number is interpreted as the total charge of the hypernucleus, including charged hyperons such as the xi minus (Ξ−) as well as protons. For example, the hypernucleus O contains 8 protons, 7 neutrons, and one Λ (which carries no charge).
History.
The first hypernucleus was discovered by Marian Danysz and Jerzy Pniewski in 1952 using a nuclear emulsion plate exposed to cosmic rays, identified by its energetic but delayed decay. This event was inferred to be due to a nuclear fragment containing a Λ baryon. Experiments until the 1970s would continue to study hypernuclei produced in emulsions using cosmic rays, and later using pion (π) and kaon (K) beams from particle accelerators.
Since the 1980s, more efficient production methods using pion and kaon beams have allowed further investigation at various accelerator facilities, including CERN, Brookhaven National Laboratory, KEK, DAφNE, and JPARC. In the 2010s, heavy ion experiments such as ALICE and STAR first allowed the production and measurement of light hypernuclei formed through hadronization from quark–gluon plasma.
Properties.
Hypernuclear physics differs from that of normal nuclei because a hyperon is distinguishable from the nucleons, which occupy four distinct spin and isospin states. That is, a single hyperon is not restricted by the Pauli exclusion principle, and can sink to the lowest energy level. As such, hypernuclei are often smaller and more tightly bound than normal nuclei; for example, the lithium hypernucleus Li is 19% smaller than the normal nucleus 6Li. However, the hyperons can decay via the weak force; the mean lifetime of a free Λ is , and that of a Λ hypernucleus is usually slightly shorter.
A generalized mass formula developed for both the non-strange normal nuclei and strange hypernuclei can estimate masses of hypernuclei containing Λ, ΛΛ, Σ, and Ξ hyperon(s). The neutron and proton driplines for hypernuclei are predicted and existence of some exotic hypernuclei beyond the normal neutron and proton driplines are suggested. This generalized mass formula was named the "Samanta formula" by Botvina and Pochodzalla and used to predict relative yields of hypernuclei in heavy-ion collisions.
Types.
Λ hypernuclei.
The simplest, and most well understood, type of hypernucleus includes only the lightest hyperon, the Λ.
While two nucleons can interact through the nuclear force mediated by a virtual pion, the Λ becomes a Σ baryon upon emitting a pion, so the Λ–nucleon interaction is mediated solely by more massive mesons such as the η and ω mesons, or through the simultaneous exchange of two or more mesons. This means that the Λ–nucleon interaction is weaker and has a shorter range than the standard nuclear force, and the potential well of a Λ in the nucleus is shallower than that of a nucleon; in hypernuclei, the depth of the Λ potential is approximately 30 MeV. However, one-pion exchange in the Λ–nucleon interaction does cause quantum-mechanical mixing of the Λ and Σ baryons in hypernuclei (which does not happen in free space), especially in neutron-rich hypernuclei. Additionally, the three-body force between a Λ and two nucleons is expected to be more important than the three-body interaction in nuclei, since the Λ can exchange two pions with a virtual Σ intermediate, while the equivalent process in nucleons requires a relatively heavy delta baryon (Δ) intermediate.
Like all hyperons, Λ hypernuclei can decay through the weak interaction, which changes it to a lighter baryon and emits a meson or a lepton–antilepton pair. In free space, the Λ usually decays via the weak force to a proton and a π– meson, or a neutron and a π0, with a total half-life of . A nucleon in the hypernucleus can cause the Λ to decay via the weak force without emitting a pion; this process becomes dominant in heavy hypernuclei, due to suppression of the pion-emitting decay mode. The half-life of the Λ in a hypernucleus is considerably shorter, plateauing to about near Fe, but some empirical measurements substantially disagree with each other or with theoretical predictions.
Hypertriton.
The simplest hypernucleus is the hypertriton (H), which consists of one proton, one neutron, and one Λ hyperon. The Λ in this system is very loosely bound, having a separation energy of 130 keV and a large radius of 10.6 fm, compared to about for the deuteron.
This loose binding would imply a lifetime similar to a free Λ. However, the measured hypertriton lifetime averaged across all experiments (about ) is substantially shorter than predicted by theory, as the non-mesonic decay mode is expected to be relatively minor; some experimental results are substantially shorter or longer than this average.
Σ hypernuclei.
The existence of hypernuclei containing a Σ baryon is less clear. Several experiments in the early 1980s reported bound hypernuclear states above the Λ separation energy and presumed to contain one of the slightly heavier Σ baryons, but experiments later in the decade ruled out the existence of such states. Results from exotic atoms containing a Σ− bound to a nucleus by the electromagnetic force have found a net repulsive Σ–nucleon interaction in medium-sized and large hypernuclei, which means that no Σ hypernuclei exist in such mass range. However, an experiment in 1998 definitively observed the light Σ hypernucleus He.
ΛΛ and Ξ hypernuclei.
Hypernuclei containing two Λ baryons have been made. However, such hypernuclei are much harder to produce due to containing two strange quarks, and as of 2016, only seven candidate ΛΛ hypernuclei have been observed. Like the Λ–nucleon interaction, empirical and theoretical models predict that the Λ–Λ interaction is mildly attractive.
Hypernuclei containing a Ξ baryon are known. Empirical studies and theoretical models indicate that the Ξ––proton interaction is attractive, but weaker than the Λ–nucleon interaction. Like the Σ– and other negatively charged particles, the Ξ– can also form an exotic atom. When a Ξ– is bound in an exotic atom or a hypernucleus, it quickly decays to a ΛΛ hypernucleus or to two Λ hypernuclei by exchanging a strange quark with a proton, which releases about 29 MeV of energy in free space:
Ξ− + p → Λ + Λ
Ω hypernuclei.
Hypernuclei containing the omega baryon (Ω) were predicted using lattice QCD in 2018; in particular, the proton–Ω and Ω–Ω dibaryons (bound systems containing two baryons) are expected to be stable. As of 2022, no such hypernuclei have been observed under any conditions, but the lightest such species could be produced in heavy-ion collisions, and measurements by the STAR experiment are consistent with the existence of the proton–Ω dibaryon.
Hypernuclei with higher strangeness.
Since the Λ is electrically neutral and its nuclear force interactions are attractive, there are predicted to be arbitrarily large hypernuclei with high strangeness and small net charge, including species with no nucleons. Binding energy per baryon in multi-strange hypernuclei can reach up to 21 MeV/"A" under certain conditions, compared to 8.80 MeV/"A" for the ordinary nucleus 62Ni. Additionally, formation of Ξ baryons should quickly become energetically favorable, unlike when there are no Λ's, because the exchange of strangeness with a nucleon would be impossible due to the Pauli exclusion principle.
Production.
Several modes of production have been devised to make hypernuclei through bombardment of normal nuclei.
Strangeness exchange and production.
In one method of production, an incident K− meson exchanges a strange quark with a nucleon and changes it to a Λ:
p + K− → Λ + π0
n + K− → Λ + π−
The cross section for the formation of a hypernucleus is maximized when the momentum of the kaon beam is approximately 500 MeV/"c". Several variants of this setup exist, including ones where the incident kaons are brought to rest before colliding with a nucleus.
In rare cases, the incoming K− can instead produce a Ξ hypernucleus via the reaction:
p + K− → Ξ− + K+
In the equivalent strangeness production reaction, a π+ meson reacts with a neutron to change it to a Λ:
n + π+ → Λ + K+
This reaction has a maximum cross section at a beam momentum of 1.05 GeV/"c", and is the most efficient production route for Λ hypernuclei, but requires larger targets than strangeness exchange methods.
Elastic scattering.
Electron scattering off of a proton can change it to a Λ and produce a K+:
p + e− → Λ + e−′ + K+
where the prime symbol denotes a scattered electron. The energy of an electron beam can be more easily tuned than pion or kaon beams, making it easier to measure and calibrate hypernuclear energy levels. Initially theoretically predicted in the 1980s, this method was first used experimentally in the early 2000s.
Hyperon capture.
The capture of a Ξ− baryon by a nucleus can make a Ξ− exotic atom or hypernucleus. Upon capture, it changes to a ΛΛ hypernucleus or two Λ hypernuclei. The disadvantage is that the Ξ− baryon is harder to make into a beam than singly strange hadrons. However, an experiment at J-PARC begun in 2020 will compile data on Ξ and ΛΛ hypernuclei using a similar, non-beam setup where scattered Ξ− baryons rain onto an emulsion target.
Similar species.
Kaonic nuclei.
The K– meson can orbit a nucleus in an exotic atom, such as in kaonic hydrogen. Although the K–-proton strong interaction in kaonic hydrogen is repulsive, the K––nucleus interaction is attractive for larger systems, so this meson can enter a strongly bound state closely related to a hypernucleus; in particular, the K––proton–proton system is experimentally known and more tightly bound than a normal nucleus.
Charmed hypernuclei.
Nuclei containing a charm quark have been predicted theoretically since 1977, and are described as charmed hypernuclei despite the possible absence of strange quarks. In particular, the lightest charmed baryons, the Λc and Σc baryons, are predicted to exist in bound states in charmed hypernuclei, and could be created in processes analogous to those used to make hypernuclei. The depth of the Λc potential in nuclear matter is predicted to be 58 MeV, but unlike Λ hypernuclei, larger hypernuclei containing the positively charged Λc would be less stable than the corresponding Λ hypernuclei due to Coulomb repulsion. The mass difference between the Λc and the Σc is too large for appreciable mixing of these baryons to occur in hypernuclei. Weak decays of charmed hypernuclei have strong relativistic corrections compared to those in ordinary hypernuclei, as the energy released in the decay process is comparable to the mass of the Λ baryon.
Antihypernuclei.
In August 2024 the STAR Collaboration reported the observation of the heaviest antimatter nucleus known, antihyperhydrogen-4 formula_0 consisting of one antiproton, two antineutrons and an antihyperon.
The anti-lambda hyperon formula_1 and the antihypertriton formula_2 have also been previously observed.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{}_{\\bar{{\\boldsymbol{\\Lambda }}}}{}^{{\\bf{4}}}\\bar{{\\bf{H}}}"
},
{
"math_id": 1,
"text": "\\bar{\\Lambda }"
},
{
"math_id": 2,
"text": "{}_{\\bar{\\Lambda }}{}^{3}\\bar{{\\rm{H}}}"
}
] | https://en.wikipedia.org/wiki?curid=1029967 |
10303 | Evaporation | Vaporization of a liquid from its surface
Evaporation is a type of vaporization that occurs on the surface of a liquid as it changes into the gas phase. A high concentration of the evaporating substance in the surrounding gas significantly slows down evaporation, such as when humidity affects the rate of evaporation of water. When the molecules of the liquid collide, they transfer energy to each other based on how they collide. When a molecule near the surface absorbs enough energy to overcome the intermolecular forces binding it to the liquid, it will escape and enter the surrounding air as a gas. When evaporation occurs, the energy removed from the vaporized liquid will reduce the temperature of the liquid, resulting in evaporative cooling.
On average, only a fraction of the molecules in a liquid have enough heat energy to escape from the liquid. The evaporation will continue until an equilibrium is reached when the evaporation of the liquid is equal to its condensation. In an enclosed environment, a liquid will evaporate until the surrounding air is saturated.
Evaporation is an essential part of the water cycle. The sun (solar energy) drives evaporation of water from oceans, lakes, moisture in the soil, and other sources of water. In hydrology, evaporation and transpiration (which involves evaporation within plant stomata) are collectively termed evapotranspiration. Evaporation of water occurs when the surface of the liquid is exposed, allowing molecules to escape and form water vapor; this vapor can then rise up and form clouds. With sufficient energy, the liquid will turn into vapor.
Theory.
For molecules of a liquid to evaporate, they must be located near the surface, they have to be moving in the proper direction, and have sufficient kinetic energy to overcome liquid-phase intermolecular forces. When only a small proportion of the molecules meet these criteria, the rate of evaporation is low. Since the kinetic energy of a molecule is proportional to its temperature, evaporation proceeds more quickly at higher temperatures. As the faster-moving molecules escape, the remaining molecules have lower average kinetic energy, and the temperature of the liquid decreases. This phenomenon is also called evaporative cooling. This is why evaporating sweat cools the human body.
Evaporation also tends to proceed more quickly with higher flow rates between the gaseous and liquid phase and in liquids with higher vapor pressure. For example, laundry on a clothes line will dry (by evaporation) more rapidly on a windy day than on a still day. Three key parts to evaporation are heat, atmospheric pressure (determines the percent humidity), and air movement.
On a molecular level, there is no strict boundary between the liquid state and the vapor state. Instead, there is a Knudsen layer, where the phase is undetermined. Because this layer is only a few molecules thick, at a macroscopic scale a clear phase transition interface cannot be seen.
Liquids that do not evaporate visibly at a given temperature in a given gas (e.g., cooking oil at room temperature) have molecules that do not tend to transfer energy to each other in a pattern sufficient to frequently give a molecule the heat energy necessary to turn into vapor. However, these liquids "are" evaporating. It is just that the process is much slower and thus significantly less visible.
Evaporative equilibrium.
If evaporation takes place in an enclosed area, the escaping molecules accumulate as a vapor above the liquid. Many of the molecules return to the liquid, with returning molecules becoming more frequent as the density and pressure of the vapor increases. When the process of escape and return reaches an equilibrium, the vapor is said to be "saturated", and no further change in either vapor pressure and density or liquid temperature will occur. For a system consisting of vapor and liquid of a pure substance, this equilibrium state is directly related to the vapor pressure of the substance, as given by the Clausius–Clapeyron relation:
formula_0
where "P"1, "P"2 are the vapor pressures at temperatures "T"1, "T"2 respectively, Δ"H"vap is the enthalpy of vaporization, and "R" is the universal gas constant. The rate of evaporation in an open system is related to the vapor pressure found in a closed system. If a liquid is heated, when the vapor pressure reaches the ambient pressure the liquid will boil.
The ability for a molecule of a liquid to evaporate is based largely on the amount of kinetic energy an individual particle may possess. Even at lower temperatures, individual molecules of a liquid can evaporate if they have more than the minimum amount of kinetic energy required for vaporization.
Factors influencing the rate of evaporation.
Note: Air is used here as a common example of the surrounding gas; however, other gases may hold that role.
In the US, the National Weather Service measures, at various outdoor locations nationwide, the actual rate of evaporation from a standardized "pan" open water surface. Others do likewise around the world. The US data is collected and compiled into an annual evaporation map. The measurements range from under 30 to over per year.
Because it typically takes place in a complex environment, where 'evaporation is an extremely rare event', the mechanism for the evaporation of water is not completely understood. Theoretical calculations require prohibitively long and large computer simulations. 'The rate of evaporation of liquid water is one of the principal uncertainties in modern climate modeling.'
Thermodynamics.
Evaporation is an endothermic process, since heat is absorbed during evaporation.
Applications.
Combustion vaporization.
Fuel droplets vaporize as they receive heat by mixing with the hot gases in the combustion chamber. Heat (energy) can also be received by radiation from any hot refractory wall of the combustion chamber.
Pre-combustion vaporization.
Internal combustion engines rely upon the vaporization of the fuel in the cylinders to form a fuel/air mixture in order to burn well.
The chemically correct air/fuel mixture for total burning of gasoline has been determined to be 15 parts air to one part gasoline or 15/1 by weight. Changing this to a volume ratio yields 8000 parts air to one part gasoline or 8,000/1 by volume.
Film deposition.
Thin films may be deposited by evaporating a substance and condensing it onto a substrate, or by dissolving the substance in a solvent, spreading the resulting solution thinly over a substrate, and evaporating the solvent. The Hertz–Knudsen equation is often used to estimate the rate of evaporation in these instances.
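As an illustration of such an estimate, the sketch below evaluates the Hertz–Knudsen expression for the maximum evaporative flux, α("P"sat − "P") / √(2π"m""k"B"T"); the water numbers are illustrative, and setting α = 1 with zero ambient vapor pressure gives an idealized upper bound rather than a realistic deposition rate:

```python
import math

K_B = 1.380649e-23  # J/K, Boltzmann constant

def hertz_knudsen_flux(p_sat, p_ambient, molecule_mass_kg, temperature_k, alpha=1.0):
    """Evaporation flux in molecules per m^2 per second (Hertz-Knudsen estimate).

    p_sat, p_ambient -- saturation and ambient partial pressures in Pa
    molecule_mass_kg -- mass of one molecule in kg
    alpha            -- evaporation (sticking) coefficient, 0 < alpha <= 1
    """
    return alpha * (p_sat - p_ambient) / math.sqrt(
        2.0 * math.pi * molecule_mass_kg * K_B * temperature_k)

# Water at 25 C evaporating into vapor-free surroundings (idealized upper bound)
print(f"{hertz_knudsen_flux(3170.0, 0.0, 2.99e-26, 298.15):.2e}")  # ~1e26 molecules/m^2/s
```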
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
External links.
Media related to Evaporation at Wikimedia Commons | [
{
"math_id": 0,
"text": "\\ln \\left( \\frac{ P_2 }{ P_1 } \\right) = - \\frac{ \\Delta H_{\\rm vap } }{ R } \\left( \\frac{ 1 }{ T_2 } - \\frac{ 1 }{ T_1 } \\right)"
}
] | https://en.wikipedia.org/wiki?curid=10303 |
1030420 | Equidistributed sequence | In mathematics, a sequence ("s"1, "s"2, "s"3, ...) of real numbers is said to be equidistributed, or uniformly distributed, if the proportion of terms falling in a subinterval is proportional to the length of that subinterval. Such sequences are studied in Diophantine approximation theory and have applications to Monte Carlo integration.
Definition.
A sequence ("s"1, "s"2, "s"3, ...) of real numbers is said to be "equidistributed" on a non-degenerate interval ["a", "b"] if for every subinterval ["c", "d"
] of ["a", "b"] we have
formula_0
(Here, the notation |{"s"1...,"s""n"} ∩ ["c", "d"]| denotes the number of elements, out of the first "n" elements of the sequence, that are between "c" and "d".)
For example, if a sequence is equidistributed in [0, 2], since the interval [0.5, 0.9] occupies 1/5 of the length of the interval [0, 2], as "n" becomes large, the proportion of the first "n" members of the sequence which fall between 0.5 and 0.9 must approach 1/5. Loosely speaking, one could say that each member of the sequence is equally likely to fall anywhere in its range. However, this is not to say that ("s""n") is a sequence of random variables; rather, it is a determinate sequence of real numbers.
Discrepancy.
We define the discrepancy "D""N" for a sequence ("s"1, "s"2, "s"3, ...) with respect to the interval ["a", "b"] as
formula_1
A sequence is thus equidistributed if the discrepancy "D""N" tends to zero as "N" tends to infinity.
Equidistribution is a rather weak criterion to express the fact that a sequence fills the segment leaving no gaps. For example, the drawings of a random variable uniform over a segment will be equidistributed in the segment, but there will be large gaps compared to a sequence which first enumerates multiples of ε in the segment, for some small ε, in an appropriately chosen way, and then continues to do this for smaller and smaller values of ε. For stronger criteria and for constructions of sequences that are more evenly distributed, see low-discrepancy sequence.
Riemann integral criterion for equidistribution.
Recall that if "f" is a function having a Riemann integral in the interval ["a", "b"], then its integral is the limit of Riemann sums taken by sampling the function "f" in a set of points chosen from a fine partition of the interval. Therefore, if some sequence is equidistributed in ["a", "b"], it is expected that this sequence can be used to calculate the integral of a Riemann-integrable function. This leads to the following criterion for an equidistributed sequence:
Suppose ("s"1, "s"2, "s"3, ...) is a sequence contained in the interval ["a", "b"]. Then the following conditions are equivalent:
formula_3
This criterion leads to the idea of Monte-Carlo integration, where integrals are computed by sampling the function over a sequence of random variables equidistributed in the interval.
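A quick numerical sketch of this criterion in Python, using the equidistributed sequence of fractional parts of "k"√2 on [0, 1] (the choices of √2 and of the test function "x"2 are arbitrary illustrations):

```python
import math

def equidistributed_average(f, n_terms, alpha=math.sqrt(2)):
    """Average of f over the first n_terms of the sequence frac(k * alpha)."""
    return sum(f((k * alpha) % 1.0) for k in range(1, n_terms + 1)) / n_terms

# The average tends to the integral of x^2 over [0, 1], which is 1/3.
print(equidistributed_average(lambda x: x * x, 100_000))  # ~0.3333
```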
It is not possible to generalize the integral criterion to a class of functions bigger than just the Riemann-integrable ones. For example, if the Lebesgue integral is considered and "f" is taken to be in "L"1, then this criterion fails. As a counterexample, take "f" to be the indicator function of some equidistributed sequence. Then in the criterion, the left hand side is always 1, whereas the right hand side is zero, because the sequence is countable, so "f" is zero almost everywhere.
In fact, the de Bruijn–Post Theorem states the converse of the above criterion: If "f" is a function such that the criterion above holds for any equidistributed sequence in ["a", "b"], then "f" is Riemann-integrable in ["a", "b"].
Equidistribution modulo 1.
A sequence ("a"1, "a"2, "a"3, ...) of real numbers is said to be equidistributed modulo 1 or uniformly distributed modulo 1 if the sequence of the fractional parts of "a""n", denoted by ("a""n") or by "a""n" − ⌊"a""n"⌋, is equidistributed in the interval [0, 1].
0, "α", 2"α", 3"α", 4"α", ...
is equidistributed modulo 1.
Examples.
If "p" is a polynomial with at least one irrational coefficient (other than the constant term), then the sequence "p"("n") is equidistributed modulo 1. This was proven by Weyl and is an application of van der Corput's difference theorem.
2"α", 3"α", 5"α", 7"α", 11"α", ...
is equidistributed modulo 1. This is a famous theorem of analytic number theory, published by I. M. Vinogradov in 1948.
Weyl's criterion.
Weyl's criterion states that the sequence "a""n" is equidistributed modulo 1 if and only if for all non-zero integers ℓ,
formula_4
The criterion is named after, and was first formulated by, Hermann Weyl. It allows equidistribution questions to be reduced to bounds on exponential sums, a fundamental and general method.
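A direct numerical check of the criterion can be sketched as follows; the multipliers √2 (irrational) and 1/2 (rational, for which the exponential sum with ℓ = 2 does not average out) are arbitrary illustrative choices:

```python
import cmath
import math

def weyl_average(alpha, ell, n):
    """Normalized Weyl sum |(1/n) * sum_{j=1..n} exp(2*pi*i*ell*j*alpha)|."""
    total = sum(cmath.exp(2j * math.pi * ell * j * alpha) for j in range(1, n + 1))
    return abs(total) / n

for n in (100, 10_000, 1_000_000):
    print(n, weyl_average(math.sqrt(2), 1, n), weyl_average(0.5, 2, n))
# The first column of values shrinks toward 0 (equidistribution);
# the second stays at 1, because every term equals exp(2*pi*i*j) = 1.
```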
Generalizations.
The sequence "v""n" of vectors in R"k" is equidistributed modulo 1 if and only if for any non-zero vector ℓ ∈ Z"k",
formula_5
Example of usage.
Weyl's criterion can be used to easily prove the equidistribution theorem, stating that the sequence of multiples 0, "α", 2"α", 3"α", ... of some real number "α" is equidistributed modulo 1 if and only if "α" is irrational.
Suppose "α" is irrational and denote our sequence by "a""j" = "jα" (where "j" starts from 0, to simplify the formula later). Let "ℓ" ≠ 0 be an integer. Since "α" is irrational, "ℓα" can never be an integer, so formula_6 can never be 1. Using the formula for the sum of a finite geometric series,
formula_7
a finite bound that does not depend on "n". Therefore, after dividing by "n" and letting "n" tend to infinity, the left hand side tends to zero, and Weyl's criterion is satisfied.
Conversely, notice that if "α" is rational then this sequence is not equidistributed modulo 1, because there are only a finite number of options for the fractional part of "a""j" = "jα".
Complete uniform distribution.
A sequence formula_8 of real numbers is said to be "k-uniformly distributed mod 1" if not only the sequence of fractional parts formula_9 is uniformly distributed in formula_10 but also the sequence formula_11, where formula_12 is defined as formula_13, is uniformly distributed in formula_14.
A sequence formula_8 of real numbers is said to be "completely uniformly distributed mod 1" it is formula_15-uniformly distributed for each natural number formula_16.
For example, the sequence formula_17 is uniformly distributed mod 1 (or 1-uniformly distributed) for any irrational number formula_18, but is never even 2-uniformly distributed. In contrast, the sequence formula_19 is completely uniformly distributed for almost all formula_20 (i.e., for all formula_18 except for a set of measure 0).
van der Corput's difference theorem.
A theorem of Johannes van der Corput states that if for each "h" the sequence "s""n"+"h" − "s""n" is uniformly distributed modulo 1, then so is "s""n".
A van der Corput set is a set "H" of integers such that if for each "h" in "H" the sequence "s""n"+"h" − "s""n" is uniformly distributed modulo 1, then so is s"n".
Metric theorems.
Metric theorems describe the behaviour of a parametrised sequence for almost all values of some parameter "α": that is, for values of "α" not lying in some exceptional set of Lebesgue measure zero.
"n") is equidistributed mod 1 for almost all values of "α" > 1.
It is not known whether the sequences ("e""n") or (π"n") are equidistributed mod 1. However it is known that the sequence ("α""n") is "not" equidistributed mod 1 if "α" is a PV number.
Well-distributed sequence.
A sequence ("s"1, "s"2, "s"3, ...) of real numbers is said to be well-distributed on ["a", "b"] if for any subinterval ["c", "d"
] of ["a", "b"] we have
formula_21
"uniformly" in "k". Clearly every well-distributed sequence is uniformly distributed, but the converse does not hold. The definition of well-distributed modulo 1 is analogous.
Sequences equidistributed with respect to an arbitrary measure.
For an arbitrary probability measure space formula_22, a sequence of points formula_23 is said to be equidistributed with respect to formula_24 if the mean of point measures converges weakly to formula_24:
formula_25
For any Borel probability measure on a separable, metrizable space, there exists a sequence equidistributed with respect to the measure; indeed, this follows immediately from the fact that such a space is standard.
The general phenomenon of equidistribution comes up a lot for dynamical systems associated with Lie groups, for example in Margulis' solution to the Oppenheim conjecture.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\lim_{n\\to\\infty}{ \\left|\\{\\,s_1,\\dots,s_n \\,\\} \\cap [c,d] \\right| \\over n}={d-c \\over b-a} . "
},
{
"math_id": 1,
"text": "D_N = \\sup_{a \\le c \\le d \\le b} \\left\\vert \\frac{\\left|\\{\\,s_1,\\dots,s_N \\,\\} \\cap [c,d] \\right|}{N} - \\frac{d-c}{b-a} \\right\\vert ."
},
{
"math_id": 2,
"text": "\\mathbb{C}"
},
{
"math_id": 3,
"text": "\\lim_{N \\to \\infty}\\frac{1}{N}\\sum_{n=1}^{N} f\\left(s_n\\right) = \\frac{1}{b-a}\\int_a^b f(x)\\,dx"
},
{
"math_id": 4,
"text": "\\lim_{n\\to\\infty}\\frac{1}{n}\\sum_{j=1}^{n}e^{2\\pi i \\ell a_j}=0."
},
{
"math_id": 5,
"text": "\\lim_{n\\to\\infty}\\frac{1}{n}\\sum_{j=0}^{n-1}e^{2\\pi i \\ell \\cdot v_j}=0."
},
{
"math_id": 6,
"text": "e^{2\\pi i \\ell \\alpha}"
},
{
"math_id": 7,
"text": "\\left|\\sum_{j=0}^{n-1}e^{2\\pi i \\ell j \\alpha}\\right| = \\left|\\sum_{j=0}^{n-1}\\left(e^{2\\pi i \\ell \\alpha}\\right)^j\\right| = \\left| \\frac{1 - e^{2\\pi i \\ell n \\alpha}} {1 - e^{2\\pi i \\ell \\alpha}}\\right| \\le \\frac 2 { \\left|1 - e^{2\\pi i \\ell \\alpha}\\right|},"
},
{
"math_id": 8,
"text": "(a_1,a_2,\\dots) "
},
{
"math_id": 9,
"text": "a_n':=a_n-[a_n]"
},
{
"math_id": 10,
"text": "[0,1]"
},
{
"math_id": 11,
"text": "(b_1,b_2,\\dots)"
},
{
"math_id": 12,
"text": "b_n"
},
{
"math_id": 13,
"text": "b_n:= (a'_{n+1}, \\dots, a'_{n+k})\\in [0,1]^k"
},
{
"math_id": 14,
"text": "[0,1]^k"
},
{
"math_id": 15,
"text": "k"
},
{
"math_id": 16,
"text": "k\\ge 1"
},
{
"math_id": 17,
"text": "(\\alpha, 2\\alpha,\\dots)"
},
{
"math_id": 18,
"text": "\\alpha"
},
{
"math_id": 19,
"text": "(\\alpha, \\alpha^2,\\alpha^3,\\dots)"
},
{
"math_id": 20,
"text": "\\alpha>1"
},
{
"math_id": 21,
"text": "\\lim_{n\\to\\infty}{ \\left|\\{\\,s_{k+1},\\dots,s_{k+n} \\,\\} \\cap [c,d] \\right| \\over n}={d-c \\over b-a} "
},
{
"math_id": 22,
"text": "(X,\\mu)"
},
{
"math_id": 23,
"text": "(x_n)"
},
{
"math_id": 24,
"text": "\\mu"
},
{
"math_id": 25,
"text": "\\frac{\\sum_{k=1}^n \\delta_{x_k}}{n}\\Rightarrow \\mu \\ . "
}
] | https://en.wikipedia.org/wiki?curid=1030420 |
10307 | Equal temperament | Musical tuning system with constant ratios between notes
An equal temperament is a musical temperament or tuning system that approximates just intervals by dividing an octave (or other interval) into steps such that the ratio of the frequencies of any adjacent pair of notes is the same. This system yields pitch steps perceived as equal in size, due to the logarithmic changes in pitch frequency.
In classical music and Western music in general, the most common tuning system since the 18th century has been 12 equal temperament (also known as "12 tone equal temperament", "12 TET" or "12 ET", informally abbreviated as "12 equal"), which divides the octave into 12 parts, all of which are equal on a logarithmic scale, with a ratio equal to the 12th root of 2 ( 12√2 ≈ 1.05946 ). That resulting smallest interval, 1/12 the width of an octave, is called a semitone or half step. In Western countries the term "equal temperament", without qualification, generally means "12 TET".
In modern times, 12 TET is usually tuned relative to a standard pitch of 440 Hz, called A 440, meaning one note, A, is tuned to 440 hertz and all other notes are defined as some multiple of semitones away from it, either higher or lower in frequency. The standard pitch has not always been 440 Hz; it has varied considerably and generally risen over the past few hundred years.
Other equal temperaments divide the octave differently. For example, some music has been written in 19 TET and 31 TET, while the Arab tone system uses 24 TET.
Instead of dividing an octave, an equal temperament can also divide a different interval, like the equal-tempered version of the Bohlen–Pierce scale, which divides the just interval of an octave and a fifth (ratio 3:1), called a "tritave" or a "pseudo-octave" in that system, into 13 equal parts.
For tuning systems that divide the octave equally, but are not approximations of just intervals, the term equal division of the octave, or "EDO" can be used.
Unfretted string ensembles, which can adjust the tuning of all notes except for open strings, and vocal groups, who have no mechanical tuning limitations, sometimes use a tuning much closer to just intonation for acoustic reasons. Other instruments, such as some wind, keyboard, and fretted instruments, often only approximate equal temperament, where technical limitations prevent exact tunings.
Some wind instruments that can easily and spontaneously bend their tone, most notably trombones, use tuning similar to string ensembles and vocal groups.
General properties.
In an equal temperament, the distance between two adjacent steps of the scale is the same interval. Because the perceived identity of an interval depends on its ratio, this scale in even steps is a geometric sequence of multiplications. (An arithmetic sequence of intervals would not sound evenly spaced and would not permit transposition to different keys.) Specifically, the smallest interval in an equal-tempered scale is the ratio:
formula_0
formula_1
where the ratio r divides the ratio p (typically the octave, which is 2:1) into n equal parts. ("See Twelve-tone equal temperament below.")
Scales are often measured in cents, which divide the octave into 1200 equal intervals (each called a cent). This logarithmic scale makes comparison of different tuning systems easier than comparing ratios, and has considerable use in ethnomusicology. The basic step in cents for any equal temperament can be found by taking the width of p above in cents (usually the octave, which is 1200 cents wide), called below w, and dividing it into n parts:
formula_2
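As a small illustration of these two formulas (the function name is arbitrary), the sketch below returns the step ratio "r" = "p"1/"n" and its width in cents for an "n"-fold equal division of an interval "p":

```python
import math

def edo_step(n, p=2.0):
    """Ratio and width in cents of one step when the interval p is divided into n equal parts."""
    ratio = p ** (1.0 / n)               # r = p^(1/n)
    cents = 1200.0 * math.log2(ratio)    # w/n, with w the width of p in cents
    return ratio, cents

print(edo_step(12))        # ratio ~1.059463, 100 cents: the 12-tone semitone
print(edo_step(19))        # ~63.2 cents per step
print(edo_step(13, 3.0))   # ~146.3 cents: one step of the 13-part division of the 3:1 tritave
```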
In musical analysis, material belonging to an equal temperament is often given an integer notation, meaning a single integer is used to represent each pitch. This simplifies and generalizes discussion of pitch material within the temperament in the same way that taking the logarithm of a multiplication reduces it to addition. Furthermore, by applying the modular arithmetic where the modulus is the number of divisions of the octave (usually 12), these integers can be reduced to pitch classes, which removes the distinction (or acknowledges the similarity) between pitches of the same name, e.g., c is 0 regardless of octave register. The MIDI encoding standard uses integer note designations.
Twelve-tone equal temperament.
12 tone equal temperament, which divides the octave into 12 intervals of equal size, is the musical system most widely used today, especially in Western music.
History.
The two figures frequently credited with the achievement of exact calculation of equal temperament are Zhu Zaiyu (also romanized as Chu-Tsaiyu) in 1584 and Simon Stevin in 1585. According to F.A. Kuttner, a critic of giving credit to Zhu, it is known that Zhu "presented a highly precise, simple and ingenious method for arithmetic calculation of equal temperament mono-chords in 1584" and that Stevin "offered a mathematical definition of equal temperament plus a somewhat less precise computation of the corresponding numerical values in 1585 or later."
The developments occurred independently.(p200)
Kenneth Robinson credits the invention of equal temperament to Zhu and provides textual quotations as evidence. In 1584 Zhu wrote:
I have founded a new system. I establish one foot as the number from which the others are to be extracted, and using proportions I extract them. Altogether one has to find the exact figures for the pitch-pipers in twelve operations.
Kuttner disagrees and remarks that his claim "cannot be considered correct without major qualifications". Kuttner proposes that neither Zhu nor Stevin achieved equal temperament and that neither should be considered its inventor.
China.
Chinese theorists had previously come up with approximations for 12 TET, but Zhu was the first person to mathematically solve 12 tone equal temperament, which he described in two books, published in 1580 and 1584. Needham also gives an extended account.
Zhu obtained his result by dividing the length of string and pipe successively by 12√2 ≈ 1.059463 (and for pipe length additionally by 24√2), such that after 12 divisions (an octave), the length of the string was halved.
Zhu created several instruments tuned to his system, including bamboo pipes.
Europe.
Some of the first Europeans to advocate equal temperament were lutenists Vincenzo Galilei, Giacomo Gorzanis, and Francesco Spinacino, all of whom wrote music in it.
Simon Stevin was the first to develop 12 TET based on the twelfth root of two, which he described in "van de Spiegheling der singconst" (c. 1605), published posthumously in 1884.
Plucked instrument players (lutenists and guitarists) generally favored equal temperament, while others were more divided. In the end, 12-tone equal temperament won out. This allowed enharmonic modulation, new styles of symmetrical tonality and polytonality, atonal music such as that written with the 12-tone technique or serialism, and jazz (at least its piano component) to develop and flourish.
Mathematics.
In 12 tone equal temperament, which divides the octave into 12 equal parts, the width of a semitone, i.e. the frequency ratio of the interval between two adjacent notes, is the twelfth root of two:
formula_3
This interval is divided into 100 cents.
Calculating absolute frequencies.
To find the frequency, "Pn", of a note in 12 TET, the following formula may be used:
formula_4
In this formula "Pn" represents the pitch, or frequency (usually in hertz), you are trying to find. "Pa" is the frequency of a reference pitch. The indes numbers n and a are the labels assigned to the desired pitch (n) and the reference pitch (a). These two numbers are from a list of consecutive integers assigned to consecutive semitones. For example, A4 (the reference pitch) is the 49th key from the left end of a piano (tuned to 440 Hz), and C4 (middle C), and F♯4 are the 40th and 46th keys, respectively. These numbers can be used to find the frequency of C4 and F♯4:
formula_5
formula_6
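A minimal Python version of this formula, with the same piano key numbering as the example above (A4 = key 49 = 440 Hz):

```python
def pitch_frequency(key_number, ref_key=49, ref_freq=440.0):
    """Frequency in Hz of a piano key in 12 TET, relative to A4 = key 49 = 440 Hz."""
    return ref_freq * 2.0 ** ((key_number - ref_key) / 12.0)

print(round(pitch_frequency(40), 3))  # C4 (middle C): 261.626 Hz
print(round(pitch_frequency(46), 3))  # F#4: 369.994 Hz
```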
Converting frequencies to their equal temperament counterparts.
To convert a frequency (in Hz) to its equal 12 TET counterpart, the following formula can be used:
formula_7 where in general formula_8
"En" is the frequency of a pitch in equal temperament, and "Ea" is the frequency of a reference pitch. For example, if we let the reference pitch equal 440 Hz, we can see that E5 and C♯5 have the following frequencies, respectively:
formula_9 where in this case formula_10
formula_11 where in this case formula_12
Comparison with just intonation.
The intervals of 12 TET closely approximate some intervals in just intonation.
The fifths and fourths are almost indistinguishably close to just intervals, while thirds and sixths are further away.
In the following table, the sizes of various just intervals are compared to their equal-tempered counterparts, given as a ratio as well as cents.
Seven-tone equal division of the fifth.
Violins, violas, and cellos are tuned in perfect fifths (G D A E for violins and C G D A for violas and cellos), which suggests that their semitone ratio is slightly higher than in conventional 12 tone equal temperament. Because a perfect fifth is in 3:2 relation with its base tone, and this interval comprises seven steps, each tone is in the ratio of the seventh root of 3:2 (≈ 1.0597, or 100.28 cents) to the next, which provides for a perfect fifth with ratio of 3:2, but a slightly widened octave with a ratio of ≈ 517:258 or ≈ 2.00388:1 rather than the usual 2:1, because 12 perfect fifths do not equal seven octaves. During actual play, however, violinists choose pitches by ear, and only the four unstopped pitches of the strings are guaranteed to exhibit this 3:2 ratio.
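A short numerical check of these figures (illustrative Python, not a tuning prescription):

```python
import math

step = (3.0 / 2.0) ** (1.0 / 7.0)     # one step: the seventh root of the 3:2 fifth
print(1200 * math.log2(step))         # ~100.28 cents per step
print(step ** 12)                     # ~2.00388, the slightly widened "octave" of twelve steps
print(abs(step ** 7 - 1.5) < 1e-12)   # True: seven steps give an exact 3:2 fifth
```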
Other equal temperaments.
Five-, seven-, and nine-tone temperaments in ethnomusicology.
Five- and seven-tone equal temperament (5 TET and 7 TET), with 240 cent and 171 cent steps, respectively, are fairly common.
5 TET and 7 TET mark the endpoints of the syntonic temperament's valid tuning range.
5 tone and 9 tone equal temperament.
According to Kunst (1949), Indonesian gamelans are tuned to 5 TET, but according to Hood (1966) and McPhee (1966) their tuning varies widely, and according to Tenzer (2000) they contain stretched octaves. It is now accepted that of the two primary tuning systems in gamelan music, slendro and pelog, only slendro somewhat resembles five-tone equal temperament, while pelog is highly unequal; however, in 1972 Surjodiningrat, Sudarjana and Susanto analyzed pelog as equivalent to 9-TET (133-cent steps).
7-tone equal temperament.
A Thai xylophone measured by Morton in 1974 "varied only plus or minus 5 cents" from 7 TET. According to Morton,
"Thai instruments of fixed pitch are tuned to an equidistant system of seven pitches per octave ... As in Western traditional music, however, all pitches of the tuning system are not used in one mode (often referred to as 'scale'); in the Thai system five of the seven are used in principal pitches in any mode, thus establishing a pattern of nonequidistant intervals for the mode."
A South American Indian scale from a pre-instrumental culture measured by Boiles in 1969 featured 175 cent seven-tone equal temperament, which stretches the octave slightly, as with instrumental gamelan music.
Chinese music has traditionally used 7 TET.
Various equal temperaments.
Other equal divisions of the octave that have found occasional use include 13 EDO, 15 EDO and 17 EDO.
2, 5, 12, 41, 53, 306, 665 and 15601 are denominators of the first convergents of log2(3), so in the corresponding equal temperaments that many twelfths (and fifths) add up to an integer number of octaves, and those temperaments approximate the just twelfth and fifth more closely than any equal temperament with fewer tones.
1, 2, 3, 5, 7, 12, 29, 41, 53, 200, ... (sequence in the OEIS) is the sequence of divisions of octave that provides better and better approximations of the perfect fifth. Related sequences containing divisions approximating other just intervals are listed in a footnote.
Equal temperaments of non-octave intervals.
The equal-tempered version of the Bohlen–Pierce scale consists of the ratio 3:1 (1902 cents), conventionally a perfect fifth plus an octave (that is, a perfect twelfth), called in this theory a tritave, and split into 13 equal parts. This provides a very close match to justly tuned ratios consisting only of odd numbers. Each step is 146.3 cents, or 13√3.
Wendy Carlos created three unusual equal temperaments after a thorough study of the properties of possible temperaments with step size between 30 and 120 cents. These were called "alpha", "beta", and "gamma". They can be considered equal divisions of the perfect fifth. Each of them provides a very good approximation of several just intervals. Their step sizes: alpha divides the 3:2 fifth into 9 equal steps of about 78.0 cents, beta divides it into 11 steps of about 63.8 cents, and gamma divides it into 20 steps of about 35.1 cents.
Alpha and beta may be heard on the title track of Carlos's 1986 album "Beauty in the Beast".
Proportions between semitone and whole tone.
In this section, "semitone" and "whole tone" may not have their usual 12 EDO meanings, as it discusses how they may be tempered in different ways from their just versions to produce desired relationships. Let the number of steps in a semitone be s, and the number of steps in a tone be t.
There is exactly one family of equal temperaments that fixes the semitone to any proper fraction of a whole tone, while keeping the notes in the right order (meaning that, for example, C, D, E, F, and F♯ are in ascending order if they preserve their usual relationships to C). That is, fixing "q" = "s"/"t" to a proper fraction also defines a unique family of one equal temperament and its multiples that fulfil this relationship.
For example, where "k" is an integer, 12"k" EDO sets "q" = 1/2, 19"k" EDO sets "q" = 1/3, and 31"k" EDO sets "q" = 2/5. The smallest multiples in these families (e.g. 12, 19 and 31 above) have the additional property of having no notes outside the circle of fifths. (This is not true in general; in 24 EDO, the half-sharps and half-flats are not in the circle of fifths generated starting from C.) The extreme cases are 5"k" EDO, where "q" = 0 and the semitone becomes a unison, and 7"k" EDO, where "q" = 1 and the semitone and tone are the same interval.
Once one knows how many steps a semitone and a tone are in this equal temperament, one can find the number of steps it has in the octave. An equal temperament with the above properties (including having no notes outside the circle of fifths) divides the octave into 7"t" − 2"s" steps and the perfect fifth into 4"t" − "s" steps. If there are notes outside the circle of fifths, one must then multiply these results by "n", the number of nonoverlapping circles of fifths required to generate all the notes (e.g., two in 24 EDO, six in 72 EDO). (One must take the small semitone for this purpose: 19 EDO has two semitones, one being 1/3 tone and the other being 2/3 tone. Similarly, 31 EDO has two semitones, one being 2/5 tone and the other being 3/5 tone.)
The smallest of these families is 12"k" EDO, and in particular, 12 EDO is the smallest equal temperament with the above properties. Additionally, it makes the semitone exactly half a whole tone, the simplest possible relationship. These are some of the reasons 12 EDO has become the most commonly used equal temperament. (Another reason is that 12 EDO is the smallest equal temperament to closely approximate 5 limit harmony, the next-smallest being 19 EDO.)
Each choice of fraction "q" for the relationship results in exactly one equal temperament family, but the converse is not true: 47 EDO has two different semitones, where one is 1/7 tone and the other is 8/9 tone, which are not complements of each other like in 19 EDO (1/3 and 2/3). Taking each semitone results in a different choice of perfect fifth.
Related tuning systems.
Regular diatonic tunings.
The diatonic tuning in "12 tone equal temperament" can be generalized to any regular diatonic tuning dividing the octave as a sequence of steps T t s T t T s (or some circular shift or "rotation" of it). To be called a "regular" diatonic tuning, each of the two semitones (s) must be smaller than either of the tones (greater tone, T, and lesser tone, t). The comma κ is implicit as the size ratio between the greater and lesser tones: expressed as frequencies κ = T/t, or as cents κ = T − t.
The notes in a regular diatonic tuning are connected in a "spiral of fifths" that does "not" close (unlike the circle of fifths in 12 TET). Starting on the subdominant F (in the key of C) there are three perfect fifths in a row—F–C, C–G, and G–D—each a composite of some permutation of the smaller intervals T, T, t, and s. The three in-tune fifths are interrupted by the grave fifth D–A
("grave" means "flat by a comma"), followed by another perfect fifth, E–B, and another grave fifth, B–F♯, and then restarting in the sharps with F♯–C♯; the same pattern repeats through the sharp notes, then the double-sharps, and so on, indefinitely. But each octave of all-natural or all-sharp or all-double-sharp notes flattens by two commas with every transition from naturals to sharps, or single sharps to double sharps, etc. The pattern is also reverse-symmetric in the flats: Descending by fourths the pattern reciprocally sharpens notes by two commas with every transition from natural notes to flattened notes, or flats to double flats, etc. If left unmodified, the two grave fifths in each block of all-natural notes, or all-sharps, or all-flat notes, are "wolf" intervals: Each of the grave fifths out of tune by a diatonic comma.
Since the comma, κ, expands the lesser tone into the greater tone, a just octave can be broken up into a sequence (or a circular shift of it) of 7 diatonic semitones s, 5 chromatic semitones c, and 3 commas κ. Various equal temperaments alter the interval sizes, usually breaking apart the three commas and then redistributing their parts into the seven diatonic semitones s, or into the five chromatic semitones c, or into both s and c, with some fixed proportion for each type of semitone.
The sequence of intervals s, c, and κ can be repeatedly appended to itself into a greater spiral of 12 fifths, and made to connect at its far ends by slight adjustments to the size of one or several of the intervals, or left unmodified with occasional less-than-perfect fifths, flat by a comma.
Morphing diatonic tunings into EDO.
An equal temperament can be created if the sizes of the major and minor tones (T, t) are altered to be the same (say, by setting κ = 0, with the other intervals expanded to still fill out the octave), and both semitones (s and c) the same size, then twelve equal semitones, two per tone, result. In 12 TET, the semitone, s, is exactly half the size of the same-size whole tones T = t.
Some of the intermediate sizes of tones and semitones can also be generated in equal temperament systems, by modifying the sizes of the comma and semitones. One obtains 7 EDO in the limit as the size of c and κ tend to zero, with the octave kept fixed, and 5 EDO in the limit as s and κ tend to zero; 12 TET is, of course, the case s = c and κ = 0.
See also.
<templatestyles src="Div col/styles.css"/>
Footnotes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
External links.
| [
{
"math_id": 0,
"text": "\\ r^n = p\\ "
},
{
"math_id": 1,
"text": "\\ r = \\sqrt[n]{p\\ }\\ "
},
{
"math_id": 2,
"text": "\\ c = \\frac{\\ w\\ }{ n }\\ "
},
{
"math_id": 3,
"text": " \\sqrt[12]{2\\ } = 2^{\\tfrac{1}{12}} \\approx 1.059463 "
},
{
"math_id": 4,
"text": "\\ P_n = P_a\\ \\cdot\\ \\Bigl(\\ \\sqrt[12]{2\\ }\\ \\Bigr)^{ n-a }\\ "
},
{
"math_id": 5,
"text": "P_{40} = 440\\ \\mathsf{Hz}\\ \\cdot\\ \\Bigl( \\sqrt[12]{2}\\ \\Bigr)^{(40-49)} \\approx 261.626\\ \\mathsf{Hz}\\ "
},
{
"math_id": 6,
"text": "P_{46} = 440\\ \\mathsf{Hz}\\ \\cdot\\ \\Bigl( \\sqrt[12]{2}\\ \\Bigr)^{(46-49)} \\approx 369.994\\ \\mathsf{Hz}\\ "
},
{
"math_id": 7,
"text": "\\ E_n = E_a\\ \\cdot\\ 2^{\\ x}\\ \\quad "
},
{
"math_id": 8,
"text": " \\quad\\ x\\ \\equiv\\ \\frac{ 1 }{\\ 12\\ }\\ \\operatorname{round}\\!\\Biggl( 12\\log_{2} \\left(\\frac{\\ n\\ }{ a }\\right) \\Biggr) ~."
},
{
"math_id": 9,
"text": "E_{660} = 440\\ \\mathsf{Hz}\\ \\cdot\\ 2^{\\left(\\frac{ 7 }{\\ 12\\ }\\right)}\\ \\approx\\ 659.255\\ \\mathsf{Hz}\\ \\quad "
},
{
"math_id": 10,
"text": " \\quad x = \\frac{ 1 }{\\ 12\\ }\\ \\operatorname{round}\\!\\Biggl(\\ 12 \\log_{2}\\left(\\frac{\\ 660\\ }{ 440 }\\right)\\ \\Biggr) = \\frac{ 7 }{\\ 12\\ } ~."
},
{
"math_id": 11,
"text": "E_{550} = 440\\ \\mathsf{Hz}\\ \\cdot\\ 2^{\\left(\\frac{ 1 }{\\ 3\\ }\\right)}\\ \\approx\\ 554.365\\ \\mathsf{Hz}\\ \\quad "
},
{
"math_id": 12,
"text": " \\quad x = \\frac{ 1 }{\\ 12\\ }\\ \\operatorname{round}\\!\\Biggl( 12 \\log_{2}\\left(\\frac{\\ 550\\ }{ 440 }\\right)\\Biggr) = \\frac{ 4 }{\\ 12\\ } = \\frac{ 1 }{\\ 3\\ } ~."
}
] | https://en.wikipedia.org/wiki?curid=10307 |
103077 | Turbofan | Airbreathing jet engine designed to provide thrust by driving a fan
A turbofan or fanjet is a type of airbreathing jet engine that is widely used in aircraft propulsion. The word "turbofan" is a combination of references to the preceding generation engine technology of the turbojet and the additional fan stage. It consists of a gas turbine engine which achieves mechanical energy from combustion, and a ducted fan that uses the mechanical energy from the gas turbine to force air rearwards. Thus, whereas all the air taken in by a turbojet passes through the combustion chamber and turbines, in a turbofan some of that air bypasses these components. A turbofan thus can be thought of as a turbojet being used to drive a ducted fan, with both of these contributing to the thrust.
The ratio of the mass-flow of air bypassing the engine core to the mass-flow of air passing through the core is referred to as the bypass ratio. The engine produces thrust through a combination of these two portions working together. Engines that use more jet thrust relative to fan thrust are known as "low-bypass turbofans"; conversely those that have considerably more fan thrust than jet thrust are known as "high-bypass". Most commercial aviation jet engines in use are of the high-bypass type, and most modern fighter engines are low-bypass. Afterburners are used on low-bypass turbofan engines with bypass and core mixing before the afterburner.
Modern turbofans have either a large single-stage fan or a smaller fan with several stages. An early configuration combined a low-pressure turbine and fan in a single rear-mounted unit.
Principles.
The turbofan was invented to improve the fuel consumption of the turbojet. It achieves this by pushing more air, thus increasing the mass and lowering the speed of the propelling jet compared to that of the turbojet. This is done mechanically by adding a ducted fan rather than using viscous forces. A vacuum ejector is used in conjunction with the fan as first envisaged by inventor Frank Whittle.
Whittle envisioned flight speeds of 500 mph in his March 1936 UK patent 471,368 "Improvements relating to the propulsion of aircraft", in which he describes the principles behind the turbofan, although not called as such at that time. While the turbojet uses the gas from its thermodynamic cycle as its propelling jet, for aircraft speeds below 500 mph there are two penalties to this design which are addressed by the turbofan.
Firstly, energy is wasted as the propelling jet is going much faster rearwards than the aircraft is going forwards, leaving a very fast wake. This wake contains kinetic energy that reflects the fuel used to produce it, rather than the fuel used to move the aircraft forwards. A turbofan harvests that wasted velocity and uses it to power a ducted fan that blows air in bypass channels around the rest of the turbine. This reduces the speed of the propelling jet while pushing more air, and thus more mass.
The other penalty is that combustion is less efficient at lower speeds. Any action to reduce the fuel consumption of the engine by increasing its pressure ratio or turbine temperature to achieve better combustion causes a corresponding increase in pressure and temperature in the exhaust duct which in turn cause a higher gas speed from the propelling nozzle (and higher KE and wasted fuel). Although the engine would use less fuel to produce a pound of thrust, more fuel is wasted in the faster propelling jet. In other words, the independence of thermal and propulsive efficiencies, as exists with the piston engine/propeller combination which preceded the turbojet, is lost. In contrast, Roth considers regaining this independence the single most important feature of the turbofan which allows specific thrust to be chosen independently of the gas generator cycle.
The working substance of the thermodynamic cycle is the only mass accelerated to produce thrust in a turbojet which is a serious limitation (high fuel consumption) for aircraft speeds below supersonic. For subsonic flight speeds the speed of the propelling jet has to be reduced because there is a price to be paid in producing the thrust. The energy required to accelerate the gas inside the engine (increase in kinetic energy) is expended in two ways, by producing a change in momentum ( i.e. a force), and a wake which is an unavoidable consequence of producing thrust by an airbreathing engine (or propeller). The wake velocity, and fuel burned to produce it, can be reduced and the required thrust still maintained by increasing the mass accelerated. A turbofan does this by transferring energy available inside the engine, from the gas generator, to a ducted fan which produces a second, additional mass of accelerated air.
The transfer of energy from the core to bypass air results in lower pressure and temperature gas entering the core nozzle (lower exhaust velocity) and fan-produced temperature and pressure entering the fan nozzle. The amount of energy transferred depends on how much pressure rise the fan is designed to produce (fan pressure ratio). The best energy exchange (lowest fuel consumption) between the two flows, and how the jet velocities compare, depends on how efficiently the transfer takes place which depends on the losses in the fan-turbine and fan.
The fan flow has lower exhaust velocity, giving much more thrust per unit energy (lower specific thrust). Both airstreams contribute to the gross thrust of the engine. The additional air for the bypass stream increases the ram drag in the air intake stream-tube, but there is still a significant increase in net thrust. The overall effective exhaust velocity of the two exhaust jets can be made closer to a normal subsonic aircraft's flight speed and gets closer to the ideal Froude efficiency. A turbofan accelerates a larger mass of air more slowly, compared to a turbojet which accelerates a smaller amount more quickly, which is a less efficient way to generate the same thrust (see the efficiency section below).
The ratio of the mass-flow of air bypassing the engine core compared to the mass-flow of air passing through the core is referred to as the bypass ratio. Engines with more jet thrust relative to fan thrust are known as "low-bypass turbofans", those that have considerably more fan thrust than jet thrust are known as "high-bypass". Most commercial aviation jet engines in use are high-bypass, and most modern fighter engines are low-bypass. Afterburners are used on low-bypass turbofans on combat aircraft.
Bypass ratio.
The "bypass ratio (BPR)" of a turbofan engine is the ratio between the mass flow rate of the bypass stream to the mass flow rate entering the core. A bypass ratio of 6, for example, means that 6 times more air passes through the bypass duct than the amount that passes through the combustion chamber.
Turbofan engines are usually described in terms of BPR, which together with overall pressure ratio, turbine inlet temperature and fan pressure ratio are important design parameters. In addition BPR is quoted for turboprop and unducted fan installations because their high propulsive efficiency gives them the overall efficiency characteristics of very high bypass turbofans. This allows them to be shown together with turbofans on plots which show trends of reducing specific fuel consumption (SFC) with increasing BPR. BPR can also be quoted for lift fan installations where the fan airflow is remote from the engine and doesn't flow past the engine core.
Considering a constant core (i.e. fixed pressure ratio and turbine inlet temperature), core and bypass jet velocities equal, and a particular flight condition (i.e. Mach number and altitude), the fuel consumption per lb of thrust (SFC) decreases with increase in BPR. At the same time gross and net thrusts increase, but by different amounts. There is considerable potential for reducing fuel consumption for the same core cycle by increasing BPR. This is achieved because of the reduction in pounds of thrust per lb/sec of airflow (specific thrust) and the resultant reduction in lost kinetic energy in the jets (increase in propulsive efficiency).
If all the gas power from a gas turbine is converted to kinetic energy in a propelling nozzle, the aircraft is best suited to high supersonic speeds. If it is all transferred to a separate big mass of air with low kinetic energy, the aircraft is best suited to zero speed (hovering). For speeds in between, the gas power is shared between a separate airstream and the gas turbine's own nozzle flow in a proportion which gives the aircraft performance required. The trade off between mass flow and velocity is also seen with propellers and helicopter rotors by comparing disc loading and power loading. For example, the same helicopter weight can be supported by a high power engine and small diameter rotor or, for less fuel, a lower power engine and bigger rotor with lower velocity through the rotor.
Bypass usually refers to transferring gas power from a gas turbine to a bypass stream of air to reduce fuel consumption and jet noise. Alternatively, there may be a requirement for an afterburning engine where the sole requirement for bypass is to provide cooling air. This sets the lower limit for BPR and these engines have been called "leaky" or continuous bleed turbojets (General Electric YJ-101 BPR 0.25) and low BPR turbojets (Pratt & Whitney PW1120). Low BPR (0.2) has also been used to provide surge margin as well as afterburner cooling for the Pratt & Whitney J58.
Efficiency.
Propeller engines are most efficient for low speeds, turbojet engines for high speeds, and turbofan engines between the two. Turbofans are the most efficient engines in the range of speeds from about , the speed at which most commercial aircraft operate.
In a turbojet (zero-bypass) engine, the high temperature and high pressure exhaust gas is accelerated when it undergoes expansion through a propelling nozzle and produces all the thrust. The compressor absorbs the mechanical power produced by the turbine. In a bypass design, extra turbines drive a ducted fan that accelerates air rearward from the front of the engine. In a high-bypass design, the ducted fan and nozzle produce most of the thrust. Turbofans are closely related to turboprops in principle because both transfer some of the gas turbine's gas power, using extra machinery, to a bypass stream leaving less for the hot nozzle to convert to kinetic energy. Turbofans represent an intermediate stage between turbojets, which derive all their thrust from exhaust gases, and turbo-props which derive minimal thrust from exhaust gases (typically 10% or less). Extracting shaft power and transferring it to a bypass stream introduces extra losses which are more than made up by the improved propulsive efficiency. The turboprop at its best flight speed gives significant fuel savings over a turbojet even though an extra turbine, a gearbox and a propeller are added to the turbojet's low-loss propelling nozzle. The turbofan has additional losses from its greater number of compressor stages/blades, fan and bypass duct.
Froude, or propulsive, efficiency can be defined as:
formula_0
where "V_j" is the velocity of the propelling jet and "V_a" is the flight speed of the aircraft.
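The formula can be evaluated numerically to show why a lower jet velocity helps at subsonic speeds. The sketch below assumes round illustrative velocities, not measured engine data:

```python
def froude_efficiency(v_jet, v_flight):
    """Propulsive (Froude) efficiency: 2 / (1 + Vj/Va)."""
    return 2.0 / (1.0 + v_jet / v_flight)

v_flight = 250.0  # m/s, an assumed high-subsonic cruise speed
for label, v_jet in [("turbojet-like", 600.0), ("low-bypass", 450.0), ("high-bypass", 320.0)]:
    print(f"{label:14s} Vj = {v_jet:5.0f} m/s  eta_f = {froude_efficiency(v_jet, v_flight):.2f}")
```

The closer the jet velocity is to the flight speed, the closer the result is to the ideal Froude efficiency, consistent with the discussion above.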
Thrust.
While a turbojet engine uses all of the engine's output to produce thrust in the form of a hot high-velocity exhaust gas jet, a turbofan's cool low-velocity bypass air yields between 30% and 70% of the total thrust produced by a turbofan system.
The thrust (FN) generated by a turbofan depends on the effective exhaust velocity of the total exhaust, as with any jet engine, but because two exhaust jets are present the thrust equation can be expanded as:
formula_1
where "ṁ_e" is the mass flow rate of the hot exhaust leaving the core, "ṁ_o" the mass flow rate of the total air intake, "ṁ_c" the mass flow rate of the intake air that flows through the core, "v_he" the velocity of the hot exhaust gas, "v_o" the velocity of the total air intake (equal to the true airspeed of the aircraft), "v_f" the velocity of the bypass (fan) flow, and "BPR" the bypass ratio.
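A rough numerical sketch of the expanded thrust equation follows. All values are assumed for illustration, and the total intake flow is taken as the core flow times (1 + BPR), consistent with the variable definitions above:

```python
def turbofan_net_thrust(m_dot_core, m_dot_exhaust, bpr, v_hot, v_fan, v_flight):
    """Net thrust (N): hot-jet momentum + fan-jet momentum - ram drag.

    m_dot_core    : core air mass flow (kg/s)
    m_dot_exhaust : hot exhaust mass flow, core air plus fuel (kg/s)
    bpr           : bypass ratio
    v_hot, v_fan  : hot-jet and fan-jet velocities (m/s)
    v_flight      : intake (flight) velocity (m/s)
    """
    m_dot_total = m_dot_core * (1.0 + bpr)  # total intake air, core plus bypass
    return m_dot_exhaust * v_hot - m_dot_total * v_flight + bpr * m_dot_core * v_fan

# Illustrative values only:
thrust = turbofan_net_thrust(m_dot_core=85.0, m_dot_exhaust=87.0, bpr=6.0,
                             v_hot=450.0, v_fan=300.0, v_flight=80.0)
print(f"approximately {thrust / 1000.0:.0f} kN net thrust")
```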
Nozzles.
The cold duct and core duct's nozzle systems are relatively complex due to the use of two separate exhaust flows. In high bypass engines, the fan is situated in a short duct near the front of the engine and typically has a convergent cold nozzle, with the tail of the duct forming a low pressure ratio nozzle that under normal conditions will choke creating supersonic flow patterns around the core. The core nozzle is more conventional, but generates less of the thrust, and depending on design choices, such as noise considerations, may conceivably not choke. In low bypass engines the two flows may combine within the ducts, and share a common nozzle, which can be fitted with afterburner.
Noise.
Most of the air flow through a high-bypass turbofan is lower-velocity bypass flow: even when combined with the much-higher-velocity engine exhaust, the average exhaust velocity is considerably lower than in a pure turbojet. Turbojet engine noise is predominately jet noise from the high exhaust velocity. Therefore, turbofan engines are significantly quieter than a pure-jet of the same thrust, and jet noise is no longer the predominant source. Turbofan engine noise propagates both upstream via the inlet and downstream via the primary nozzle and the by-pass duct. Other noise sources are the fan, compressor and turbine.
Fan noise may come from the interaction of the fan-blade wakes with the pressure field of the downstream fan-exit stator vanes. It may be minimized by adequate axial spacing between blade trailing edge and stator entrance.
At high engine speeds, as at takeoff, shock waves from the supersonic fan tips, because they are not all exactly alike, produce a discordant sound known as "buzz saw" noise.
All modern turbofan engines have acoustic liners in the nacelle to damp their noise. They extend as much as possible to cover the largest surface area. The acoustic performance of the engine can be experimentally evaluated by means of ground tests or in dedicated experimental test rigs.
In the aerospace industry, chevrons are the "saw-tooth" patterns on the trailing edges of some jet engine nozzles that are used for noise reduction. The shaped edges smooth the mixing of hot air from the engine core and cooler air flowing through the engine fan, which reduces noise-creating turbulence. Chevrons were developed by GE under a NASA contract. Some notable examples of such designs are Boeing 787 and Boeing 747-8 – on the Rolls-Royce Trent 1000 and General Electric GEnx engines.
History.
Early turbojet engines were not very fuel-efficient because their overall pressure ratio and turbine inlet temperature were severely limited by the technology and materials available at the time.
The first turbofan engine, which was only run on a test bed, was the German Daimler-Benz DB 670, designated the 109-007 by the German RLM (Ministry of Aviation), with a first run date of 27 May 1943, after the testing of the turbomachinery using an electric motor, which had been undertaken on 1 April 1943. Development of the engine was abandoned with its problems unsolved, as the war situation worsened for Germany.
Later in 1943, the British ground tested the Metrovick F.3 turbofan, which used the Metrovick F.2 turbojet as a gas generator with the exhaust discharging into a close-coupled aft-fan module comprising a contra-rotating LP turbine system driving two co-axial contra-rotating fans.
Improved materials, and the introduction of twin compressors, as in the Bristol Olympus and Pratt & Whitney JT3C engines, increased the overall pressure ratio and thus the thermodynamic efficiency of engines. However, these turbojets still had poor propulsive efficiency, because pure turbojets have a high specific thrust/high-velocity exhaust, which is better suited to supersonic flight.
The original low-bypass turbofan engines were designed to improve propulsive efficiency by reducing the exhaust velocity to a value closer to that of the aircraft. The Rolls-Royce Conway, the world's first production turbofan, had a bypass ratio of 0.3, similar to the modern General Electric F404 fighter engine. Civilian turbofan engines of the 1960s, such as the Pratt & Whitney JT8D and the Rolls-Royce Spey, had bypass ratios closer to 1 and were similar to their military equivalents.
The first Soviet airliner powered by turbofan engines was the Tupolev Tu-124 introduced in 1962. It used the Soloviev D-20. 164 aircraft were produced between 1960 and 1965 for Aeroflot and other Eastern Bloc airlines, with some operating until the early 1990s.
The first General Electric turbofan was the aft-fan CJ805-23, based on the CJ805-3 turbojet. It was followed by the aft-fan General Electric CF700 engine, with a 2.0 bypass ratio. This was derived from the General Electric J85/CJ610 turbojet to power the larger Rockwell Sabreliner 75/80 model aircraft, as well as the Dassault Falcon 20, with about a 50% increase in thrust to . The CF700 was the first small turbofan to be certified by the Federal Aviation Administration (FAA). There were at one time over 400 CF700 aircraft in operation around the world, with an experience base of over 10 million service hours. The CF700 turbofan engine was also used to train Moon-bound astronauts in Project Apollo as the powerplant for the Lunar Landing Research Vehicle.
Common types.
Low-bypass turbofan.
A high-specific-thrust/low-bypass-ratio turbofan normally has a multi-stage fan behind inlet guide vanes, developing a relatively high pressure ratio and, thus, yielding a high (mixed or cold) exhaust velocity. The core airflow needs to be large enough to ensure there is sufficient core power to drive the fan. A smaller core flow/higher bypass ratio cycle can be achieved by raising the inlet temperature of the high-pressure (HP) turbine rotor.
To illustrate one aspect of how a turbofan differs from a turbojet, comparisons can be made at the same airflow (to keep a common intake for example) and the same net thrust (i.e. same specific thrust). A bypass flow can be added only if the turbine inlet temperature is not too high to compensate for the smaller core flow. Future improvements in turbine cooling/material technology can allow higher turbine inlet temperature, which is necessary because of increased cooling air temperature, resulting from an overall pressure ratio increase.
The resulting turbofan, with reasonable efficiencies and duct loss for the added components, would probably operate at a higher nozzle pressure ratio than the turbojet, but with a lower exhaust temperature to retain net thrust. Since the temperature rise across the whole engine (intake to nozzle) would be lower, the (dry power) fuel flow would also be reduced, resulting in a better specific fuel consumption (SFC).
Some low-bypass ratio military turbofans (e.g. F404, JT8D) have variable inlet guide vanes to direct air onto the first fan rotor stage. This improves the fan surge margin (see compressor map).
Afterburning turbofan.
Since the 1970s, most jet fighter engines have been low/medium bypass turbofans with a mixed exhaust, afterburner and variable area exit nozzle. An afterburner is a combustor located downstream of the turbine blades and directly upstream of the nozzle, which burns fuel from afterburner-specific fuel injectors. When lit, large volumes of fuel are burnt in the afterburner, raising the temperature of exhaust gases by a significant degree, resulting in a higher exhaust velocity/engine specific thrust. The variable geometry nozzle must open to a larger throat area to accommodate the extra volume and increased flow rate when the afterburner is lit. Afterburning is often designed to give a significant thrust boost for take off, transonic acceleration and combat maneuvers, but is very fuel intensive. Consequently, afterburning can be used only for short portions of a mission.
Unlike in the main engine, where stoichiometric temperatures in the combustor have to be reduced before they reach the turbine, an afterburner at maximum fuelling is designed to produce stoichiometric temperatures at entry to the nozzle, about . At a fixed total applied fuel:air ratio, the total fuel flow for a given fan airflow will be the same, regardless of the dry specific thrust of the engine. However, a high specific thrust turbofan will, by definition, have a higher nozzle pressure ratio, resulting in a higher afterburning net thrust and, therefore, a lower afterburning specific fuel consumption (SFC). However, high specific thrust engines have a high dry SFC. The situation is reversed for a medium specific thrust afterburning turbofan: i.e., poor afterburning SFC/good dry SFC. The former engine is suitable for a combat aircraft which must remain in afterburning combat for a fairly long period, but has to fight only fairly close to the airfield (e.g. cross border skirmishes). The latter engine is better for an aircraft that has to fly some distance, or loiter for a long time, before going into combat. However, the pilot can afford to stay in afterburning only for a short period, before aircraft fuel reserves become dangerously low.
The first production afterburning turbofan engine was the Pratt & Whitney TF30, which initially powered the F-111 Aardvark and F-14 Tomcat. Low-bypass military turbofans include the Pratt & Whitney F119, the Eurojet EJ200, the General Electric F110, the Klimov RD-33, and the Saturn AL-31, all of which feature a mixed exhaust, afterburner and variable area propelling nozzle.
High-bypass turbofan.
To further improve fuel economy and reduce noise, almost all jet airliners and most military transport aircraft (e.g., the C-17) are powered by low-specific-thrust/high-bypass-ratio turbofans. These engines evolved from the high-specific-thrust/low-bypass-ratio turbofans used in such aircraft in the 1960s. Modern combat aircraft tend to use low-bypass ratio turbofans, and some military transport aircraft use turboprops.
Low specific thrust is achieved by replacing the multi-stage fan with a single-stage unit. Unlike some military engines, modern civil turbofans lack stationary inlet guide vanes in front of the fan rotor. The fan is scaled to achieve the desired net thrust.
The core (or gas generator) of the engine must generate enough power to drive the fan at its rated mass flow and pressure ratio. Improvements in turbine cooling/material technology allow for a higher (HP) turbine rotor inlet temperature, which allows a smaller (and lighter) core, potentially improving the core thermal efficiency. Reducing the core mass flow tends to increase the load on the LP turbine, so this unit may require additional stages to reduce the average stage loading and to maintain LP turbine efficiency. Reducing core flow also increases bypass ratio. Bypass ratios greater than 5:1 are increasingly common; the Pratt & Whitney PW1000G, which entered commercial service in 2016, attains 12.5:1.
Further improvements in core thermal efficiency can be achieved by raising the overall pressure ratio of the core. Improvements in blade aerodynamics can reduce the number of extra compressor stages required, and variable geometry stators enable high-pressure-ratio compressors to work surge-free at all throttle settings.
The first (experimental) high-bypass turbofan engine was the AVCO-Lycoming PLF1A-2, a Honeywell T55 turboshaft-derived engine that was first run in February 1962. The PLF1A-2 had a geared fan stage, produced a static thrust of , and had a bypass ratio of 6:1. The General Electric TF39 became the first production model, designed to power the Lockheed C-5 Galaxy military transport aircraft. The civil General Electric CF6 engine used a derived design. Other high-bypass turbofans are the Pratt & Whitney JT9D, the three-shaft Rolls-Royce RB211 and the CFM International CFM56; also the smaller TF34. More recent large high-bypass turbofans include the Pratt & Whitney PW4000, the three-shaft Rolls-Royce Trent, the General Electric GE90/GEnx and the GP7000, produced jointly by GE and P&W. The Pratt & Whitney JT9D engine was the first high bypass ratio jet engine to power a wide-body airliner.
The lower the specific thrust of a turbofan, the lower the mean jet outlet velocity, which in turn translates into a high thrust lapse rate (i.e. decreasing thrust with increasing flight speed). See technical discussion below, item 2. Consequently, an engine sized to propel an aircraft at high subsonic flight speed (e.g., Mach 0.83) generates a relatively high thrust at low flight speed, thus enhancing runway performance. Low specific thrust engines tend to have a high bypass ratio, but this is also a function of the temperature of the turbine system.
The turbofans on twin-engined transport aircraft produce enough take-off thrust to continue a take-off on one engine if the other engine shuts down after a critical point in the take-off run. From that point on the aircraft has less than half the thrust compared to two operating engines because the non-functioning engine is a source of drag. Modern twin engined airliners normally climb very steeply immediately after take-off. If one engine shuts down, the climb-out is much shallower, but sufficient to clear obstacles in the flightpath.
The Soviet Union's engine technology was less advanced than the West's, and its first wide-body aircraft, the Ilyushin Il-86, was powered by low-bypass engines. The Yakovlev Yak-42, a medium-range, rear-engined aircraft seating up to 120 passengers, introduced in 1980, was the first Soviet aircraft to use high-bypass engines.
Turbofan configurations.
Turbofan engines come in a variety of engine configurations. For a given engine cycle (i.e., same airflow, bypass ratio, fan pressure ratio, overall pressure ratio and HP turbine rotor inlet temperature), the choice of turbofan configuration has little impact upon the design point performance (e.g., net thrust, SFC), as long as overall component performance is maintained. Off-design performance and stability is, however, affected by engine configuration.
The basic element of a turbofan is a spool, a single combination of fan/compressor, turbine and shaft rotating at a single speed. For a given pressure ratio, the surge margin can be increased by two different design paths: splitting the compression system into two (or more) spools rotating at different speeds, or making the stator vane pitch adjustable (variable stators), typically in the front stages.
Most modern western civil turbofans employ a relatively high-pressure-ratio high-pressure (HP) compressor, with many rows of variable stators to control surge margin at low rpm. In the three-spool RB211/Trent the core compression system is split into two, with the IP compressor, which supercharges the HP compressor, being on a different coaxial shaft and driven by a separate (IP) turbine. As the HP compressor has a modest pressure ratio its speed can be reduced surge-free, without employing variable geometry. However, because a shallow IP compressor working line is inevitable, the IPC has one stage of variable geometry on all variants except the −535, which has none.
Single-shaft turbofan.
Although far from common, the single-shaft turbofan is probably the simplest configuration, comprising a fan and high-pressure compressor driven by a single turbine unit, all on the same spool. The Snecma M53, which powers Dassault Mirage 2000 fighter aircraft, is an example of a single-shaft turbofan. Despite the simplicity of the turbomachinery configuration, the M53 requires a variable area mixer to facilitate part-throttle operation.
Aft-fan turbofan.
One of the earliest turbofans was a derivative of the General Electric J79 turbojet, known as the CJ805-23, which featured an integrated aft fan/low-pressure (LP) turbine unit located in the turbojet exhaust jetpipe. Hot gas from the turbojet turbine exhaust expanded through the LP turbine, the fan blades being a radial extension of the turbine blades. This arrangement introduces an additional gas leakage path compared to a front-fan configuration and was a problem with this engine with higher-pressure turbine gas leaking into the fan airflow. An aft-fan configuration was later used for the General Electric GE36 UDF (propfan) demonstrator of the early 1980s.
In 1971 a concept was put forward by the NASA Lewis Research Center for a supersonic transport engine which operated as an aft-fan turbofan at take-off and subsonic speeds and a turbojet at higher speeds. This would give the low noise and high thrust characteristics of a turbofan at take-off, together with turbofan high propulsive efficiency at subsonic flight speeds. It would have the high propulsive efficiency of a turbojet at supersonic cruise speeds.
Basic two-spool.
Many turbofans have at least basic two-spool configuration where the fan is on a separate low pressure (LP) spool, running concentrically with the compressor or high pressure (HP) spool; the LP spool runs at a lower angular velocity, while the HP spool turns faster and its compressor further compresses part of the air for combustion. The BR710 is typical of this configuration. At the smaller thrust sizes, instead of all-axial blading, the HP compressor configuration may be axial-centrifugal (e.g., CFE CFE738), double-centrifugal or even diagonal/centrifugal (e.g. Pratt & Whitney Canada PW600).
Boosted two-spool.
Higher overall pressure ratios can be achieved by either raising the HP compressor pressure ratio or adding compressor (non-bypass) stages to the LP spool, between the fan and the HP compressor, to boost the latter. All of the large American turbofans (e.g. General Electric CF6, GE90, GE9X and GEnx plus Pratt & Whitney JT9D and PW4000) use booster stages. The Rolls-Royce BR715 is another example. The high bypass ratios used in modern civil turbofans tend to reduce the relative diameter of the booster stages, reducing their mean tip speed. Consequently, more booster stages are required to develop the necessary pressure rise.
Three-spool.
Rolls-Royce chose a three-spool configuration for their large civil turbofans (i.e. the RB211 and Trent families), where the booster stages of a boosted two-spool configuration are separated into an intermediate pressure (IP) spool, driven by its own turbine. The first three-spool engine was the earlier Rolls-Royce RB.203 Trent of 1967.
The Garrett ATF3, powering the Dassault Falcon 20 business jet, has an unusual three spool layout with an aft spool not concentric with the two others.
Ivchenko Design Bureau chose the same configuration as Rolls-Royce for their Lotarev D-36 engine, followed by Lotarev/Progress D-18T and Progress D-436.
The Turbo-Union RB199 military turbofan also has a three-spool configuration, as do the military Kuznetsov NK-25 and NK-321.
Geared fan.
As the bypass ratio increases, the fan tip radius increases relative to that of the low-pressure turbine (LPT) blading; since fan tip speed must be limited, this reduces the LPT blade speed, requiring more turbine stages to extract enough energy to drive the fan. Introducing a (planetary) reduction gearbox, with a suitable gear ratio, between the LP shaft and the fan enables both the fan and LP turbine to operate at their optimum speeds. Examples of this configuration are the long-established Garrett TFE731, the Honeywell ALF 502/507, and the recent Pratt & Whitney PW1000G.
Military turbofans.
Most of the configurations discussed above are used in civilian turbofans, while modern military turbofans (e.g., Snecma M88) are usually basic two-spool.
High-pressure turbine.
Most civil turbofans use a high-efficiency, 2-stage HP turbine to drive the HP compressor. The CFM International CFM56 uses an alternative approach: a single-stage, high-work unit. While this approach is probably less efficient, there are savings on cooling air, weight and cost.
In the RB211 and Trent 3-spool engine series, the HP compressor pressure ratio is modest so only a single HP turbine stage is required. Modern military turbofans also tend to use a single HP turbine stage and a modest HP compressor.
Low-pressure turbine.
Modern civil turbofans have multi-stage LP turbines (anywhere from 3 to 7). The number of stages required depends on the engine cycle bypass ratio and the boost (on boosted two-spools). A geared fan may reduce the number of required LPT stages in some applications. Because of the much lower bypass ratios employed, military turbofans require only one or two LP turbine stages.
Overall performance.
Cycle improvements.
Consider a mixed turbofan with a fixed bypass ratio and airflow. Increasing the overall pressure ratio of the compression system raises the combustor entry temperature. Therefore, at a fixed fuel flow there is an increase in (HP) turbine rotor inlet temperature. Although the higher temperature rise across the compression system implies a larger temperature drop over the turbine system, the mixed nozzle temperature is unaffected, because the same amount of heat is being added to the system. There is, however, a rise in nozzle pressure, because overall pressure ratio increases faster than the turbine expansion ratio, causing an increase in the hot mixer entry pressure. Consequently, net thrust increases, whilst specific fuel consumption (fuel flow/net thrust) decreases. A similar trend occurs with unmixed turbofans.
Turbofan engines can be made more fuel efficient by raising overall pressure ratio and turbine rotor inlet temperature in unison. However, better turbine materials or improved vane/blade cooling are required to cope with increases in both turbine rotor inlet temperature and compressor delivery temperature. Increasing the latter may require better compressor materials.
The overall pressure ratio can be increased by improving fan (or) LP compressor pressure ratio or HP compressor pressure ratio. If the latter is held constant, the increase in (HP) compressor delivery temperature (from raising overall pressure ratio) implies an increase in HP mechanical speed. However, stressing considerations might limit this parameter, implying, despite an increase in overall pressure ratio, a reduction in HP compressor pressure ratio.
According to simple theory, if the ratio of turbine rotor inlet temperature/(HP) compressor delivery temperature is maintained, the HP turbine throat area can be retained. However, this assumes that cycle improvements are obtained, while retaining the datum (HP) compressor exit flow function (non-dimensional flow). In practice, changes to the non-dimensional speed of the (HP) compressor and cooling bleed extraction would probably make this assumption invalid, making some adjustment to HP turbine throat area unavoidable. This means the HP turbine nozzle guide vanes would have to be different from the original. In all probability, the downstream LP turbine nozzle guide vanes would have to be changed anyway.
Thrust growth.
Thrust growth is obtained by increasing core power. There are two basic routes available: a hot route, raising the HP turbine rotor inlet temperature, and a cold route, raising the core mass flow.
Both routes require an increase in the combustor fuel flow and, therefore, the heat energy added to the core stream.
The hot route may require changes in turbine blade/vane materials or better blade/vane cooling. The cold route can be obtained by one of the following:
all of which increase both overall pressure ratio and core airflow.
Alternatively, the core size can be increased, to raise core airflow, without changing overall pressure ratio. This route is expensive, since a new (upflowed) turbine system (and possibly a larger IP compressor) is also required.
Changes must also be made to the fan to absorb the extra core power. On a civil engine, jet noise considerations mean that any significant increase in take-off thrust must be accompanied by a corresponding increase in fan mass flow (to maintain a T/O specific thrust of about 30 lbf/lb/s).
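A back-of-the-envelope sketch of that sizing constraint: if take-off specific thrust is held at about 30 lbf per lb/s of airflow, the required airflow, and hence the fan size, scales directly with thrust. The thrust ratings below are illustrative assumptions:

```python
def required_airflow(thrust_lbf, specific_thrust=30.0):
    """Total airflow (lb/s) needed to hold a given take-off specific thrust (lbf per lb/s)."""
    return thrust_lbf / specific_thrust

for thrust in (60_000, 70_000, 80_000):  # illustrative take-off thrust ratings, lbf
    print(f"{thrust} lbf -> about {required_airflow(thrust):.0f} lb/s of airflow")
```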
Improvements.
Aerodynamic modelling.
Aerodynamics is a mix of subsonic, transonic and supersonic airflow on a single fan/gas compressor blade in a modern turbofan. The airflow past the blades must be maintained within close angular limits to keep the air flowing against an increasing pressure. Otherwise air will be rejected back out of the intake.
The Full Authority Digital Engine Control (FADEC) needs accurate data for controlling the engine. The critical turbine inlet temperature (TIT) is too harsh an environment, at and , for reliable sensors. Therefore, during development of a new engine type a relation is established between a more easily measured temperature like exhaust gas temperature and the TIT. Monitoring the exhaust gas temperature is then used to make sure the engine does not run too hot.
Blade technology.
A turbine blade is subjected to , at and a centrifugal force of , well above the point of plastic deformation and even above the melting point.
Exotic alloys, sophisticated air cooling schemes and special mechanical design are needed to keep the physical stresses within the strength of the material.
Rotating seals must withstand harsh conditions for 10 years, 20,000 missions and rotating at 10 to 20,000 rpm.
Fan blades.
Fan blades have been growing as jet engines have been getting bigger: each fan blade carries a load equivalent to nine double-decker buses and swallows a volume of air equivalent to a squash court every second.
Advances in computational fluid dynamics (CFD) modelling have permitted complex, 3D curved shapes with very wide chord, keeping the fan capabilities while minimizing the blade count to lower costs.
Coincidentally, the bypass ratio grew to achieve higher propulsive efficiency and the fan diameter increased.
Rolls-Royce pioneered the hollow, titanium wide-chord fan blade in the 1980s for aerodynamic efficiency and foreign object damage resistance in the RB211 then for the Trent.
GE Aviation introduced carbon fiber composite fan blades on the GE90 in 1995, manufactured since 2017 with a carbon-fiber tape-layer process.
GE partner Safran developed a 3D woven technology with Albany Composites for the CFM56 and CFM LEAP engines.
Future progress.
Engine cores are shrinking as they operate at higher pressure ratios and become more efficient and smaller compared to the fan as bypass ratios increase.
Blade tip clearances are more difficult to maintain at the exit of the high-pressure compressor where blades are high or less; backbone bending further affects clearance control as the core is proportionately longer and thinner and the fan to low-pressure turbine driveshaft space is constrained within the core.
Pratt & Whitney VP technology and environment Alan Epstein argued "Over the history of commercial aviation, we have gone from 20% to 40% [cruise efficiency], and there is a consensus among the engine community that we can probably get to 60%".
Geared turbofans and further fan pressure ratio reductions may continue to improve propulsive efficiency.
The second phase of the FAA's Continuous Lower Energy, Emissions and Noise (CLEEN) program is targeting, for the late 2020s, reductions of 33% in fuel burn, 60% in emissions and 32 EPNdB in noise compared with the 2000s state of the art.
In summer 2017 at NASA Glenn Research Center in Cleveland, Ohio, Pratt has finished testing a very-low-pressure-ratio fan on a PW1000G, resembling an open rotor with fewer blades than the PW1000G's 20.
The weight and size of the nacelle would be reduced by a short duct inlet, imposing higher aerodynamic turning loads on the blades and leaving less space for soundproofing, but a lower-pressure-ratio fan is slower.
UTC Aerospace Systems Aerostructures will have a full-scale ground test in 2019 of its low-drag Integrated Propulsion System with a thrust reverser, improving fuel burn by 1% and with 2.5-3 EPNdB lower noise.
Safran expects to deliver another 10–15% in fuel efficiency through the mid-2020s before reaching an asymptote, and next will have to increase the bypass ratio to 35:1 instead of 11:1 for the CFM LEAP. It is demonstrating a counterrotating open rotor unducted fan (propfan) in Istres, France, under the European Clean Sky technology program.
Modeling advances and high specific strength materials may help it succeed where previous attempts failed.
With noise levels within existing standards and similar to those of the LEAP engine, 15% lower fuel burn should be available; to achieve that, Safran is testing its controls, vibration and operation, while airframe integration remains challenging.
For GE Aviation, the energy density of jet fuel still maximises the Breguet range equation, and higher-pressure-ratio cores, lower-pressure-ratio fans, low-loss inlets and lighter structures can further improve thermal, transfer and propulsive efficiency.
Under the U.S. Air Force's Adaptive Engine Transition Program, adaptive thermodynamic cycles will be used for the sixth-generation jet fighter, based on a modified Brayton cycle and constant-volume combustion.
Additive manufacturing in the advanced turboprop will reduce weight by 5% and fuel burn by 20%.
Rotating and static ceramic matrix composite (CMC) parts operate hotter than metal parts and weigh one-third as much.
With $21.9 million from the Air Force Research Laboratory, GE is investing $200 million in a CMC facility in Huntsville, Alabama, in addition to its Asheville, North Carolina site, mass-producing silicon carbide matrix with silicon-carbide fibers in 2018.
CMCs will be used ten times more by the mid-2020s: the CFM LEAP requires 18 CMC turbine shrouds per engine and the GE9X will use them in the combustor and for 42 HP turbine nozzles.
Rolls-Royce plc aims for a 60:1 pressure ratio core for the 2020s Ultrafan and has begun ground tests of its gearbox for 15:1 bypass ratios.
Nearly stoichiometric turbine entry temperature approaches the theoretical limit and its impact on emissions has to be balanced with environmental performance goals.
Open rotors, lower pressure ratio fans and potentially distributed propulsion offer more room for better propulsive efficiency.
Exotic cycles, heat exchangers and pressure gain/constant volume combustion may improve thermodynamic efficiency.
Additive manufacturing could be an enabler for intercoolers and recuperators.
Closer airframe integration and hybrid or electric aircraft can be combined with gas turbines.
Rolls-Royce engines have a 72–82% propulsive efficiency and 42–49% thermal efficiency for a TSFC at Mach 0.8, and aim for theoretical limits of 95% for open rotor propulsive efficiency and 60% for thermal efficiency with stoichiometric turbine entry temperature and 80:1 overall pressure ratio for a TSFC.
As teething troubles may not show up until several thousand hours of service, the latest turbofans' technical problems have disrupted airline operations and manufacturer deliveries while production rates rise sharply.
Trent 1000 cracked blades grounded almost 50 Boeing 787s and reduced ETOPS to 2.3 hours down from 5.5, costing Rolls-Royce plc almost $950 million.
PW1000G knife-edge seal fractures have caused Pratt & Whitney to fall behind in deliveries, leaving about 100 engineless A320neos waiting for their powerplants.
The CFM LEAP introduction had been smoother, but the premature loss of a ceramic composite HP turbine coating necessitated a new design, causing 60 A320neo engine removals for modification and delaying deliveries by up to six weeks.
On a widebody, Safran estimates that 5–10% of fuel could be saved by reducing the power taken for hydraulic systems, and that swapping to electrical power could save 30% of weight, as initiated on the Boeing 787; Rolls-Royce plc hopes for up to 5%.
Manufacturers.
The turbofan engine market is dominated by General Electric, Rolls-Royce plc and Pratt & Whitney, in order of market share. General Electric and Safran of France have a joint venture, CFM International. Pratt & Whitney also have a joint venture, International Aero Engines with Japanese Aero Engine Corporation and MTU Aero Engines of Germany, specializing in engines for the Airbus A320 family. Pratt & Whitney and General Electric have a joint venture, Engine Alliance selling a range of engines for aircraft such as the Airbus A380.
For airliners and cargo aircraft, the in-service fleet in 2016 is 60,000 engines and should grow to 103,000 in 2035 with 86,500 deliveries according to Flight Global. A majority will be medium-thrust engines for narrow-body aircraft with 54,000 deliveries, for a fleet growing from 28,500 to 61,000. High-thrust engines for wide-body aircraft, worth 40–45% of the market by value, will grow from 12,700 engines to over 21,000 with 18,500 deliveries. The regional jet engines below 20,000 lb (89 kN) fleet will grow from 7,500 to 9,000 and the fleet of turboprops for airliners will increase from 9,400 to 10,200. The manufacturers market share should be led by CFM with 44% followed by Pratt & Whitney with 29% and then Rolls-Royce and General Electric with 10% each.
Extreme bypass jet engines.
In the 1970s, Rolls-Royce/SNECMA tested a M45SD-02 turbofan fitted with variable-pitch fan blades to improve handling at ultralow fan pressure ratios and to provide thrust reverse down to zero aircraft speed. The engine was aimed at ultraquiet STOL aircraft operating from city-centre airports.
In a bid for increased efficiency with speed, a development of the "turbofan" and "turboprop" known as a propfan engine was created that had an unducted fan. The fan blades are situated outside of the duct, so that it appears like a turboprop with wide scimitar-like blades. Both General Electric and Pratt & Whitney/Allison demonstrated propfan engines in the 1980s. Excessive cabin noise and relatively cheap jet fuel prevented the engines being put into service. The Progress D-27 propfan, developed in the U.S.S.R., was the only propfan engine equipped on a production aircraft.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\eta_f = \\frac {2}{1 + \\frac {V_j}{V_a}}"
},
{
"math_id": 1,
"text": "F_N = \\dot{m}_e v_{he} - \\dot{m}_o v_o + BPR\\, (\\dot{m}_c) v_f"
},
{
"math_id": 2,
"text": "F_n = m \\cdot (V_{jfe} - V_a)."
}
] | https://en.wikipedia.org/wiki?curid=103077 |
10308785 | Differentiation rules | Rules for computing derivatives of functions
This is a summary of differentiation rules, that is, rules for computing the derivative of a function in calculus.
Elementary rules of differentiation.
Unless otherwise stated, all functions are functions of real numbers (R) that return real values; although more generally, the formulae below apply wherever they are well defined — including the case of complex numbers (C).
Constant term rule.
For any value of formula_0, where formula_1, if formula_2 is the constant function given by formula_3, then formula_4.
Proof.
Let formula_1 and formula_3. By the definition of the derivative,
formula_5
This shows that the derivative of any constant function is 0.
Intuitive (geometric) explanation.
The derivative of the function at a point is the slope of the line tangent to the curve at that point. The slope of a constant function is zero, because the tangent line to a constant function is horizontal and its angle is zero.
In other words, the value of the constant function, y, will not change as the value of x increases or decreases.
Differentiation is linear.
For any functions formula_6 and formula_7 and any real numbers formula_8 and formula_9, the derivative of the function formula_10 with respect to formula_11 is: formula_12
In Leibniz's notation this is written as:
formula_13
Special cases include: the constant factor rule formula_14, the sum rule formula_15, and the difference rule formula_16.
The product rule.
For the functions formula_6 and formula_7, the derivative of the function formula_17 with respect to formula_11 is
formula_18
In Leibniz's notation this is written
formula_19
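The rule can be checked symbolically. The sketch below assumes the SymPy library (not part of this article) and verifies the identity for two arbitrary, unspecified functions:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
g = sp.Function('g')(x)

lhs = sp.diff(f * g, x)                      # (fg)'
rhs = sp.diff(f, x) * g + f * sp.diff(g, x)  # f'g + fg'
print(sp.simplify(lhs - rhs))                # 0
```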
The chain rule.
The derivative of the function formula_20 is
formula_21
In Leibniz's notation, this is written as:
formula_22
often abridged to
formula_23
Focusing on the notion of maps, and the differential being a map formula_24, this is written in a more concise way as:
formula_25
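A minimal SymPy sketch of the chain rule, with sine as the outer function and an assumed polynomial as the inner function:

```python
import sympy as sp

x = sp.symbols('x')
g = x**2 + 1            # assumed inner function
h = sp.sin(g)           # h(x) = f(g(x)) with f = sin

lhs = sp.diff(h, x)                # derivative of the composite
rhs = sp.cos(g) * sp.diff(g, x)    # f'(g(x)) * g'(x)
print(sp.simplify(lhs - rhs))      # 0
```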
The inverse function rule.
If the function f has an inverse function g, meaning that formula_26 and formula_27 then
formula_28
In Leibniz notation, this is written as
formula_29
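A small SymPy sketch of the inverse function rule, taking the exponential and the natural logarithm as an assumed pair of mutually inverse functions:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = sp.exp(x)           # assumed example: f(x) = e^x
g = sp.log(y)           # its inverse: g(y) = ln y

direct = sp.diff(g, y)                      # g'(y) computed directly
via_rule = 1 / sp.diff(f, x).subs(x, g)     # 1 / f'(g(y))
print(sp.simplify(direct - via_rule))       # 0
```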
Power laws, polynomials, quotients, and reciprocals.
The polynomial or elementary power rule.
If formula_30, for any real number formula_31 then
formula_32
When formula_33 this becomes the special case: if formula_34, then formula_35.
Combining the power rule with the sum and constant multiple rules permits the computation of the derivative of any polynomial.
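A short SymPy sketch of the power rule applied term by term to an arbitrary example polynomial, and to a non-integer exponent:

```python
import sympy as sp

x = sp.symbols('x')
p = 7*x**5 - 3*x**2 + 4*x - 9             # an arbitrary example polynomial
print(sp.diff(p, x))                       # 35*x**4 - 6*x + 4

print(sp.diff(x**sp.Rational(3, 2), x))    # 3*sqrt(x)/2, the rule with r = 3/2
```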
The reciprocal rule.
The derivative of formula_36 for any (nonvanishing) function "f" is:
formula_37 wherever "f" is non-zero.
In Leibniz's notation, this is written
formula_38
The reciprocal rule can be derived either from the quotient rule, or from the combination of power rule and chain rule.
The quotient rule.
If "f" and "g" are functions, then:
formula_39 wherever "g" is nonzero.
This can be derived from the product rule and the reciprocal rule.
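A SymPy sketch verifying the quotient rule for two arbitrary, unspecified functions (valid wherever "g" is nonzero):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
g = sp.Function('g')(x)

lhs = sp.diff(f / g, x)
rhs = (sp.diff(f, x) * g - sp.diff(g, x) * f) / g**2
print(sp.simplify(lhs - rhs))   # 0 wherever g is nonzero
```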
Generalized power rule.
The elementary power rule generalizes considerably. The most general power rule is the functional power rule: for any functions "f" and "g",
formula_40
wherever both sides are well defined.
Special cases: if formula_41, then formula_42 whenever "a" is a nonzero real number and "x" is positive; the reciprocal rule may be derived as the special case formula_43.
Derivatives of exponential and logarithmic functions.
formula_44
the equation above is true for all c, but the derivative for formula_45 yields a complex number.
formula_46
formula_47
the equation above is also true for all "c", but yields a complex number if formula_48.
formula_49
formula_50
formula_51 where formula_52 is the Lambert W function
formula_53
formula_54
formula_55 formula_56
Logarithmic derivatives.
The logarithmic derivative is another way of stating the rule for differentiating the logarithm of a function (using the chain rule):
formula_57 wherever "f" is positive.
Logarithmic differentiation is a technique which uses logarithms and its differentiation rules to simplify certain expressions before actually applying the derivative.
Logarithms can be used to remove exponents, convert products into sums, and convert division into subtraction — each of which may lead to a simplified expression for taking derivatives.
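As an illustration, logarithmic differentiation recovers the derivative of formula_53 listed above; the SymPy check below is only a sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = x**x                                  # example function

# ln y = x ln x, so y'/y = d/dx (x ln x) and hence y' = y * (ln x + 1)
via_logs = y * sp.diff(x * sp.log(x), x)
direct = sp.diff(y, x)
print(sp.simplify(direct - via_logs))     # 0
```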
Derivatives of trigonometric functions.
The derivatives in the table above are for when the range of the inverse secant is formula_58 and when the range of the inverse cosecant is formula_59
It is common to additionally define an inverse tangent function with two arguments, formula_60 Its value lies in the range formula_61 and reflects the quadrant of the point formula_62 For the first and fourth quadrant (i.e. formula_63) one has formula_64 Its partial derivatives are
formula_65
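These partial derivatives can be checked symbolically; the SymPy sketch below assumes real arguments:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
a = sp.atan2(y, x)

print(sp.simplify(sp.diff(a, y) - x / (x**2 + y**2)))      # 0
print(sp.simplify(sp.diff(a, x) - (-y) / (x**2 + y**2)))   # 0
```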
Derivatives of hyperbolic functions.
See Hyperbolic functions for restrictions on these derivatives.
Derivatives of special functions.
Gamma function.
formula_66
formula_67 with formula_68 being the digamma function, expressed by the parenthesized expression to the right of formula_69 in the line above.
Riemann zeta function.
formula_70
formula_71
Derivatives of integrals.
Suppose that it is required to differentiate with respect to "x" the function
formula_72
where the functions formula_73 and formula_74 are both continuous in both formula_75 and formula_11 in some region of the formula_76 plane, including formula_77 formula_78, and the functions formula_79 and formula_80 are both continuous and both have continuous derivatives for formula_78. Then for formula_81:
formula_82
This formula is the general form of the Leibniz integral rule and can be derived using the fundamental theorem of calculus.
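A concrete SymPy check of the rule, with an assumed integrand and assumed variable limits chosen only for illustration:

```python
import sympy as sp

x, t = sp.symbols('x t')
f = x * sp.sin(t)      # assumed integrand f(x, t)
a = x                  # assumed lower limit a(x)
b = x**2               # assumed upper limit b(x)

F = sp.integrate(f, (t, a, b))
direct = sp.diff(F, x)

rule = (f.subs(t, b) * sp.diff(b, x) - f.subs(t, a) * sp.diff(a, x)
        + sp.integrate(sp.diff(f, x), (t, a, b)))
print(sp.simplify(direct - rule))   # 0
```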
Derivatives to "n"th order.
Some rules exist for computing the n-th derivative of functions, where n is a positive integer. These include:
Faà di Bruno's formula.
If f and g are n-times differentiable, then
formula_83
where formula_84 and the set formula_85 consists of all non-negative integer solutions of the Diophantine equation formula_86.
General Leibniz rule.
If f and g are n-times differentiable, then
formula_87
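A SymPy sketch of the general Leibniz rule for an assumed pair of functions and "n" = 4:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(2*x)     # assumed example functions
g = sp.sin(x)
n = 4

direct = sp.diff(f * g, x, n)
leibniz = sum(sp.binomial(n, k) * sp.diff(f, x, n - k) * sp.diff(g, x, k)
              for k in range(n + 1))
print(sp.simplify(direct - leibniz))   # 0
```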
References.
<templatestyles src="Reflist/styles.css" />
Sources and further reading.
These rules are given in many books, both on elementary and advanced calculus, in pure and applied mathematics. Those in this article (in addition to the above references) can be found in: | [
{
"math_id": 0,
"text": "c"
},
{
"math_id": 1,
"text": "c \\in \\mathbb{R}"
},
{
"math_id": 2,
"text": "f(x)"
},
{
"math_id": 3,
"text": "f(x) = c"
},
{
"math_id": 4,
"text": "\\frac{df}{dx} = 0"
},
{
"math_id": 5,
"text": "\\begin{align}\nf'(x) &= \\lim_{h \\to 0}\\frac{f(x + h) - f(x)}{h} \\\\\n&= \\lim_{h \\to 0} \\frac{(c) - (c)}{h} \\\\\n&= \\lim_{h \\to 0} \\frac{0}{h} \\\\\n&= \\lim_{h \\to 0} 0 \\\\\n&= 0\n\\end{align}"
},
{
"math_id": 6,
"text": "f"
},
{
"math_id": 7,
"text": "g"
},
{
"math_id": 8,
"text": "a"
},
{
"math_id": 9,
"text": "b"
},
{
"math_id": 10,
"text": "h(x) = af(x) + bg(x)"
},
{
"math_id": 11,
"text": "x"
},
{
"math_id": 12,
"text": " h'(x) = a f'(x) + b g'(x)."
},
{
"math_id": 13,
"text": " \\frac{d(af+bg)}{dx} = a\\frac{df}{dx} +b\\frac{dg}{dx}."
},
{
"math_id": 14,
"text": "(af)' = af' "
},
{
"math_id": 15,
"text": "(f + g)' = f' + g'"
},
{
"math_id": 16,
"text": "(f - g)' = f' - g'."
},
{
"math_id": 17,
"text": "h(x) = f(x) g(x)"
},
{
"math_id": 18,
"text": " h'(x) = (fg)'(x) = f'(x) g(x) + f(x) g'(x)."
},
{
"math_id": 19,
"text": "\\frac{d(fg)}{dx} = g \\frac{df}{dx} + f \\frac{dg}{dx}."
},
{
"math_id": 20,
"text": "h(x) = f(g(x))"
},
{
"math_id": 21,
"text": " h'(x) = f'(g(x))\\cdot g'(x)."
},
{
"math_id": 22,
"text": "\\frac{d}{dx}h(x) = \\left.\\frac{d}{dz}f(z)\\right|_{z=g(x)}\\cdot \\frac{d}{dx}g(x),"
},
{
"math_id": 23,
"text": "\\frac{dh(x)}{dx} = \\frac{df(g(x))}{dg(x)} \\cdot \\frac{dg(x)}{dx}."
},
{
"math_id": 24,
"text": "\\text{D}"
},
{
"math_id": 25,
"text": " [\\text{D} (f\\circ g)]_x = [\\text{D} f]_{g(x)} \\cdot [\\text{D}g]_x\\,."
},
{
"math_id": 26,
"text": "g(f(x)) = x"
},
{
"math_id": 27,
"text": "f(g(y)) = y,"
},
{
"math_id": 28,
"text": "g' = \\frac{1}{f'\\circ g}."
},
{
"math_id": 29,
"text": " \\frac{dx}{dy} = \\frac{1}{\\frac{dy}{dx}}."
},
{
"math_id": 30,
"text": "f(x) = x^r"
},
{
"math_id": 31,
"text": "r \\neq 0,"
},
{
"math_id": 32,
"text": "f'(x) = rx^{r-1}."
},
{
"math_id": 33,
"text": "r = 1,"
},
{
"math_id": 34,
"text": "f(x) = x,"
},
{
"math_id": 35,
"text": "f'(x) = 1."
},
{
"math_id": 36,
"text": "h(x)=\\frac{1}{f(x)}"
},
{
"math_id": 37,
"text": " h'(x) = -\\frac{f'(x)}{(f(x))^2}"
},
{
"math_id": 38,
"text": " \\frac{d(1/f)}{dx} = -\\frac{1}{f^2}\\frac{df}{dx}."
},
{
"math_id": 39,
"text": "\\left(\\frac{f}{g}\\right)' = \\frac{f'g - g'f}{g^2}\\quad"
},
{
"math_id": 40,
"text": "(f^g)' = \\left(e^{g\\ln f}\\right)' = f^g\\left(f'{g \\over f} + g'\\ln f\\right),\\quad"
},
{
"math_id": 41,
"text": "f(x)=x^a\\!"
},
{
"math_id": 42,
"text": "f'(x)=ax^{a-1}"
},
{
"math_id": 43,
"text": "g(x)=-1\\!"
},
{
"math_id": 44,
"text": " \\frac{d}{dx}\\left(c^{ax}\\right) = {ac^{ax} \\ln c } ,\\qquad c > 0"
},
{
"math_id": 45,
"text": "c<0"
},
{
"math_id": 46,
"text": " \\frac{d}{dx}\\left(e^{ax}\\right) = ae^{ax}"
},
{
"math_id": 47,
"text": " \\frac{d}{dx}\\left( \\log_c x\\right) = {1 \\over x \\ln c} , \\qquad c > 1"
},
{
"math_id": 48,
"text": "c<0\\!"
},
{
"math_id": 49,
"text": " \\frac{d}{dx}\\left( \\ln x\\right) = {1 \\over x} ,\\qquad x > 0."
},
{
"math_id": 50,
"text": " \\frac{d}{dx}\\left( \\ln |x|\\right) = {1 \\over x} ,\\qquad x \\neq 0."
},
{
"math_id": 51,
"text": " \\frac{d}{dx}\\left( W(x)\\right) = {1 \\over {x+e^{W(x)}}} ,\\qquad x > -{1 \\over e}.\\qquad"
},
{
"math_id": 52,
"text": "W(x)"
},
{
"math_id": 53,
"text": " \\frac{d}{dx}\\left( x^x \\right) = x^x(1+\\ln x)."
},
{
"math_id": 54,
"text": " \\frac{d}{dx}\\left( f(x)^{ g(x) } \\right ) = g(x)f(x)^{g(x)-1} \\frac{df}{dx} + f(x)^{g(x)}\\ln{( f(x) )}\\frac{dg}{dx}, \\qquad \\text{if }f(x) > 0, \\text{ and if } \\frac{df}{dx} \\text{ and } \\frac{dg}{dx} \\text{ exist.}"
},
{
"math_id": 55,
"text": " \\frac{d}{dx}\\left( f_{1}(x)^{f_{2}(x)^{\\left ( ... \\right )^{f_{n}(x)}}} \\right ) = \\left [\\sum\\limits_{k=1}^{n} \\frac{\\partial }{\\partial x_{k}} \\left( f_{1}(x_1)^{f_{2}(x_2)^{\\left ( ... \\right )^{f_{n}(x_n)}}} \\right ) \\right ] \\biggr\\vert_{x_1 = x_2 = ... =x_n = x}, \\text{ if } f_{i<n}(x) > 0 \\text{ and }"
},
{
"math_id": 56,
"text": " \\frac{df_{i}}{dx} \\text{ exists. }"
},
{
"math_id": 57,
"text": " (\\ln f)'= \\frac{f'}{f} \\quad"
},
{
"math_id": 58,
"text": "[0,\\pi]\\!"
},
{
"math_id": 59,
"text": "\\left[-\\frac{\\pi}{2},\\frac{\\pi}{2}\\right]."
},
{
"math_id": 60,
"text": "\\arctan(y,x)."
},
{
"math_id": 61,
"text": "[-\\pi,\\pi]"
},
{
"math_id": 62,
"text": "(x,y)."
},
{
"math_id": 63,
"text": "x > 0"
},
{
"math_id": 64,
"text": "\\arctan(y, x>0) = \\arctan(y/x)."
},
{
"math_id": 65,
"text": " \\frac{\\partial \\arctan(y,x)}{\\partial y} = \\frac{x}{x^2 + y^2} \\qquad\\text{and}\\qquad \\frac{\\partial \\arctan(y,x)}{\\partial x} = \\frac{-y}{x^2 + y^2}."
},
{
"math_id": 66,
"text": "\\Gamma(x) = \\int_0^\\infty t^{x-1} e^{-t}\\, dt"
},
{
"math_id": 67,
"text": "\\begin{align}\n\\Gamma'(x) & = \\int_0^\\infty t^{x-1} e^{-t} \\ln t\\,dt \\\\\n& = \\Gamma(x) \\left(\\sum_{n=1}^\\infty \\left(\\ln\\left(1 + \\dfrac{1}{n}\\right) - \\dfrac{1}{x + n}\\right) - \\dfrac{1}{x}\\right) \\\\\n& = \\Gamma(x) \\psi(x)\n\\end{align}"
},
{
"math_id": 68,
"text": "\\psi(x)"
},
{
"math_id": 69,
"text": "\\Gamma(x)"
},
{
"math_id": 70,
"text": "\\zeta(x) = \\sum_{n=1}^\\infty \\frac{1}{n^x}"
},
{
"math_id": 71,
"text": "\\begin{align}\n\\zeta'(x) & = -\\sum_{n=1}^\\infty \\frac{\\ln n}{n^x}\n=-\\frac{\\ln 2}{2^x} - \\frac{\\ln 3}{3^x} - \\frac{\\ln 4}{4^x} - \\cdots \\\\\n& = -\\sum_{p \\text{ prime}} \\frac{p^{-x} \\ln p}{(1-p^{-x})^2} \\prod_{q \\text{ prime}, q \\neq p} \\frac{1}{1-q^{-x}}\n\\end{align}"
},
{
"math_id": 72,
"text": "F(x)=\\int_{a(x)}^{b(x)}f(x,t)\\,dt,"
},
{
"math_id": 73,
"text": "f(x,t)"
},
{
"math_id": 74,
"text": "\\frac{\\partial}{\\partial x}\\,f(x,t)"
},
{
"math_id": 75,
"text": "t"
},
{
"math_id": 76,
"text": "(t,x)"
},
{
"math_id": 77,
"text": "a(x)\\leq t\\leq b(x),"
},
{
"math_id": 78,
"text": "x_0\\leq x\\leq x_1"
},
{
"math_id": 79,
"text": "a(x)"
},
{
"math_id": 80,
"text": "b(x)"
},
{
"math_id": 81,
"text": "\\,x_0\\leq x\\leq x_1"
},
{
"math_id": 82,
"text": " F'(x) = f(x,b(x))\\,b'(x) - f(x,a(x))\\,a'(x) + \\int_{a(x)}^{b(x)} \\frac{\\partial}{\\partial x}\\, f(x,t)\\; dt\\,. "
},
{
"math_id": 83,
"text": " \\frac{d^n}{d x^n} [f(g(x))]= n! \\sum_{\\{k_m\\}} f^{(r)}(g(x)) \\prod_{m=1}^n \\frac{1}{k_m!} \\left(g^{(m)}(x) \\right)^{k_m}"
},
{
"math_id": 84,
"text": " r = \\sum_{m=1}^{n-1} k_m"
},
{
"math_id": 85,
"text": " \\{k_m\\}"
},
{
"math_id": 86,
"text": " \\sum_{m=1}^{n} m k_m = n"
},
{
"math_id": 87,
"text": " \\frac{d^n}{dx^n}[f(x)g(x)] = \\sum_{k=0}^{n} \\binom{n}{k} \\frac{d^{n-k}}{d x^{n-k}} f(x) \\frac{d^k}{d x^k} g(x)"
}
] | https://en.wikipedia.org/wiki?curid=10308785 |
103094 | Graded ring | Type of algebraic structure
In mathematics, in particular abstract algebra, a graded ring is a ring such that the underlying additive group is a direct sum of abelian groups formula_0 such that the product of any element of degree "i" with any element of degree "j" lies in the component of degree "i" + "j". The index set is usually the set of nonnegative integers or the set of integers, but can be any monoid. The direct sum decomposition is usually referred to as gradation or grading.
A graded module is defined similarly (see below for the precise definition). It generalizes graded vector spaces. A graded module that is also a graded ring is called a graded algebra. A graded ring could also be viewed as a graded ℤ-algebra.
The associativity is not important (in fact not used at all) in the definition of a graded ring; hence, the notion applies to non-associative algebras as well; e.g., one can consider a graded Lie algebra.
First properties.
Generally, the index set of a graded ring is assumed to be the set of nonnegative integers, unless otherwise explicitly specified. This is the case in this article.
A graded ring is a ring that is decomposed into a direct sum
formula_1
of
additive groups, such that
formula_2
for all nonnegative integers formula_3 and "n".
A nonzero element of formula_4 is said to be "homogeneous" of "degree" "n". By definition of a direct sum, every nonzero element formula_5 of formula_6 can be uniquely written as a sum formula_7 where each formula_8 is either 0 or homogeneous of degree "i". The nonzero formula_8 are the "homogeneous components" of formula_5.
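For the polynomial ring in two variables graded by total degree (the standard example, chosen here only as an illustration), the decomposition of an element into homogeneous components can be computed explicitly; the sketch below uses SymPy:

```python
import sympy as sp

x, y = sp.symbols('x y')
a = 3 + x*y - 2*x**2*y + 5*y**3          # an element of the polynomial ring k[x, y]

components = {}
for monom, coeff in sp.Poly(a, x, y).terms():
    degree = sum(monom)                  # total degree of the monomial
    term = coeff * x**monom[0] * y**monom[1]
    components[degree] = components.get(degree, 0) + term

for degree in sorted(components):
    print(degree, components[degree])
# degrees 0, 2 and 3: the element is the sum of its homogeneous components
```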
Some basic properties are: the degree-zero part is a subring of formula_6 (in particular, the multiplicative identity 1 is a homogeneous element of degree zero), and each formula_4 is a module over the degree-zero part, so that formula_6 is an algebra over its degree-zero part.
An ideal formula_12 is "homogeneous" if, for every element formula_5 of the ideal, the homogeneous components of formula_5 also belong to the ideal. (Equivalently, if it is a graded submodule of formula_6; see the section on graded modules below.) The intersection of a homogeneous ideal formula_13 with formula_4 is a submodule of formula_4 over the degree-zero subring, called the "homogeneous part" of degree formula_11 of formula_13. A homogeneous ideal is the direct sum of its homogeneous parts.
If formula_13 is a two-sided homogeneous ideal in formula_6, then formula_14 is also a graded ring, decomposed as
formula_15
where formula_16 is the homogeneous part of degree formula_11 of formula_13.
Graded module.
The corresponding idea in module theory is that of a graded module, namely a left module "M" over a graded ring "R" such that
formula_22
and
formula_23
for every i and j.
Example: a graded vector space is an example of a graded module over a field (with the field having trivial grading).
Example: a graded ring is a graded module over itself. An ideal in a graded ring is homogeneous if and only if it is a graded submodule. The annihilator of a graded module is a homogeneous ideal.
Example: Given an ideal "I" in a commutative ring "R" and an "R"-module "M", the direct sum formula_24 is a graded module over the associated graded ring formula_25.
A "morphism" formula_26 of graded modules, called a graded morphism or "graded homomorphism" , is a homomorphism of the underlying modules that respects grading; i.e., &NoBreak;&NoBreak;. A graded submodule is a submodule that is a graded module in own right and such that the set-theoretic inclusion is a morphism of graded modules. Explicitly, a graded module "N" is a graded submodule of "M" if and only if it is a submodule of "M" and satisfies &NoBreak;&NoBreak;. The kernel and the image of a morphism of graded modules are graded submodules.
Remark: To give a graded morphism from a graded ring to another graded ring with the image lying in the center is the same as to give the structure of a graded algebra to the latter ring.
Given a graded module formula_27, the formula_28-twist of formula_27 is a graded module defined by formula_29 (cf. Serre's twisting sheaf in algebraic geometry).
Let "M" and "N" be graded modules. If formula_30 is a morphism of modules, then "f" is said to have degree "d" if formula_31. An exterior derivative of differential forms in differential geometry is an example of such a morphism having degree 1.
Invariants of graded modules.
Given a graded module "M" over a commutative graded ring "R", one can associate the formal power series &NoBreak;&NoBreak;:
formula_32
(assuming formula_33 are finite.) It is called the Hilbert–Poincaré series of "M".
A graded module is said to be finitely generated if the underlying module is finitely generated. The generators may be taken to be homogeneous (by replacing the generators by their homogeneous parts.)
Suppose "R" is a polynomial ring &NoBreak;&NoBreak;, "k" a field, and "M" a finitely generated graded module over it. Then the function formula_34 is called the Hilbert function of "M". The function coincides with the integer-valued polynomial for large "n" called the Hilbert polynomial of "M".
Graded algebra.
An associative algebra "A" over a ring "R" is a graded algebra if it is graded as a ring.
In the usual case where the ring "R" is not graded (in particular if "R" is a field), it is given the trivial grading (every element of "R" is of degree 0). Thus, formula_35 and the graded pieces formula_36 are "R"-modules.
In the case where the ring "R" is also a graded ring, then one requires that
formula_37
In other words, we require "A" to be a graded left module over "R".
Examples of graded algebras are common in mathematics:
Graded algebras are much used in commutative algebra and algebraic geometry, homological algebra, and algebraic topology. One example is the close relationship between homogeneous polynomials and projective varieties (cf. Homogeneous coordinate ring.)
"G"-graded rings and algebras.
The above definitions have been generalized to rings graded using any monoid "G" as an index set. A "G"-graded ring "R" is a ring with a direct sum decomposition
formula_42
such that
formula_43
Elements of "R" that lie inside formula_0 for some formula_44 are said to be homogeneous of grade "i".
The previously defined notion of "graded ring" now becomes the same thing as an formula_45-graded ring, where formula_45 is the monoid of natural numbers under addition. The definitions for graded modules and algebras can also be extended this way replacing the indexing set formula_45 with any monoid "G".
Remarks:
Examples:
Anticommutativity.
Some graded rings (or algebras) are endowed with an anticommutative structure. This notion requires a homomorphism of the monoid of the gradation into the additive monoid of formula_47, the field with two elements. Specifically, a signed monoid consists of a pair formula_48 where formula_49 is a monoid and formula_50 is a homomorphism of additive monoids. An anticommutative formula_49-graded ring is a ring "A" graded with respect to formula_49 such that:
formula_51
for all homogeneous elements "x" and "y".
Graded monoid.
Intuitively, a graded monoid is the subset of a graded ring, formula_55, generated by the formula_4's, without using the additive part. That is, the set of elements of the graded monoid is formula_56.
Formally, a graded monoid is a monoid formula_57, with a gradation function formula_58 such that formula_59. Note that the gradation of formula_60 is necessarily 0. Some authors furthermore require that formula_61
when "m" is not the identity.
Assuming the gradations of non-identity elements are non-zero, the number of elements of gradation "n" is at most formula_62 where "g" is the cardinality of a generating set "G" of the monoid. Therefore, the number of elements of gradation "n" or less is at most formula_63 (for formula_64) or formula_65 otherwise. Indeed, each such element is the product of at most "n" elements of "G", and only formula_65 such products exist. Similarly, the identity element cannot be written as the product of two non-identity elements. That is, there is no unit divisor in such a graded monoid.
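As an illustrative check of this bound (not taken from the original text; the function name is an arbitrary choice), consider the free monoid on "g" generators graded by word length, for which the bound is attained exactly:

```python
# For the free monoid on g letters graded by word length, the number of elements
# of gradation at most n is g^0 + g^1 + ... + g^n = (g^(n+1) - 1)/(g - 1) for g > 1,
# matching the bound stated above.
def words_up_to(g: int, n: int) -> int:
    return sum(g**k for k in range(n + 1))

for g in (2, 3, 5):
    for n in range(6):
        assert words_up_to(g, n) == (g**(n + 1) - 1) // (g - 1)
```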
Power series indexed by a graded monoid.
These notions allow us to extend the notion of power series ring. Instead of the indexing family being formula_66, the indexing family could be any graded monoid, assuming that the number of elements of degree "n" is finite, for each integer "n".
More formally, let formula_67 be an arbitrary semiring and formula_68 a graded monoid. Then formula_69 denotes the semiring of power series with coefficients in "K" indexed by "R". Its elements are functions from "R" to "K". The sum of two elements formula_70 is defined pointwise: it is the function sending formula_71 to formula_72. The product is the function sending formula_71 to the infinite sum formula_73. This sum is well defined (i.e., finite) because, for each "m", there are only a finite number of pairs ("p", "q") such that "pq" = "m".
Example.
In formal language theory, given an alphabet "A", the free monoid of words over "A" can be considered as a graded monoid, where the gradation of a word is its length.
Notes.
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "R_i"
},
{
"math_id": 1,
"text": "R = \\bigoplus_{n=0}^\\infty R_n = R_0 \\oplus R_1 \\oplus R_2 \\oplus \\cdots"
},
{
"math_id": 2,
"text": "R_mR_n \\subseteq R_{m+n}"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": "R_n"
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": "R"
},
{
"math_id": 7,
"text": "a=a_0+a_1+\\cdots +a_n"
},
{
"math_id": 8,
"text": "a_i"
},
{
"math_id": 9,
"text": "R_0"
},
{
"math_id": 10,
"text": "1"
},
{
"math_id": 11,
"text": "n"
},
{
"math_id": 12,
"text": "I\\subseteq R"
},
{
"math_id": 13,
"text": "I"
},
{
"math_id": 14,
"text": "R/I"
},
{
"math_id": 15,
"text": "R/I = \\bigoplus_{n=0}^\\infty R_n/I_n,"
},
{
"math_id": 16,
"text": "I_n"
},
{
"math_id": 17,
"text": "R_i=0"
},
{
"math_id": 18,
"text": "R = k[t_1, \\ldots, t_n]"
},
{
"math_id": 19,
"text": "\\Z"
},
{
"math_id": 20,
"text": "\\bigoplus_{n=0}^{\\infty} I^n/I^{n+1}"
},
{
"math_id": 21,
"text": "\\bigoplus_{i = 0}^\\infty H^i(X; R)"
},
{
"math_id": 22,
"text": "M = \\bigoplus_{i\\in \\mathbb{N}}M_i ,"
},
{
"math_id": 23,
"text": "R_iM_j \\subseteq M_{i+j}"
},
{
"math_id": 24,
"text": "\\bigoplus_{n=0}^{\\infty} I^n M/I^{n+1} M"
},
{
"math_id": 25,
"text": "\\bigoplus_0^{\\infty} I^n/I^{n+1}"
},
{
"math_id": 26,
"text": "f: N \\to M"
},
{
"math_id": 27,
"text": "M"
},
{
"math_id": 28,
"text": "\\ell"
},
{
"math_id": 29,
"text": "M(\\ell)_n = M_{n+\\ell}"
},
{
"math_id": 30,
"text": "f\\colon M \\to N"
},
{
"math_id": 31,
"text": "f(M_n) \\subseteq N_{n+d}"
},
{
"math_id": 32,
"text": "P(M, t) = \\sum \\ell(M_n) t^n"
},
{
"math_id": 33,
"text": "\\ell(M_n)"
},
{
"math_id": 34,
"text": "n \\mapsto \\dim_k M_n"
},
{
"math_id": 35,
"text": "R\\subseteq A_0"
},
{
"math_id": 36,
"text": "A_i"
},
{
"math_id": 37,
"text": "R_iA_j \\subseteq A_{i+j}"
},
{
"math_id": 38,
"text": "T^{\\bullet} V"
},
{
"math_id": 39,
"text": "\\textstyle\\bigwedge\\nolimits^{\\bullet} V"
},
{
"math_id": 40,
"text": "S^{\\bullet} V"
},
{
"math_id": 41,
"text": "H^{\\bullet} "
},
{
"math_id": 42,
"text": "R = \\bigoplus_{i\\in G}R_i "
},
{
"math_id": 43,
"text": " R_i R_j \\subseteq R_{i \\cdot j}. "
},
{
"math_id": 44,
"text": "i \\in G"
},
{
"math_id": 45,
"text": "\\N"
},
{
"math_id": 46,
"text": "\\Z_2"
},
{
"math_id": 47,
"text": "\\Z/2\\Z"
},
{
"math_id": 48,
"text": "(\\Gamma, \\varepsilon)"
},
{
"math_id": 49,
"text": "\\Gamma"
},
{
"math_id": 50,
"text": "\\varepsilon \\colon \\Gamma \\to\\Z/2\\Z"
},
{
"math_id": 51,
"text": "xy=(-1)^{\\varepsilon (\\deg x) \\varepsilon (\\deg y)}yx ,"
},
{
"math_id": 52,
"text": "(\\Z, \\varepsilon)"
},
{
"math_id": 53,
"text": "\\varepsilon \\colon \\Z \\to\\Z/2\\Z"
},
{
"math_id": 54,
"text": "\\varepsilon"
},
{
"math_id": 55,
"text": "\\bigoplus_{n\\in \\mathbb N_0}R_n"
},
{
"math_id": 56,
"text": "\\bigcup_{n\\in\\mathbb N_0}R_n"
},
{
"math_id": 57,
"text": "(M,\\cdot)"
},
{
"math_id": 58,
"text": "\\phi:M\\to\\mathbb N_0"
},
{
"math_id": 59,
"text": "\\phi(m\\cdot m')=\\phi(m)+\\phi(m')"
},
{
"math_id": 60,
"text": "1_M"
},
{
"math_id": 61,
"text": "\\phi(m)\\ne 0"
},
{
"math_id": 62,
"text": "g^n"
},
{
"math_id": 63,
"text": "n+1"
},
{
"math_id": 64,
"text": "g=1"
},
{
"math_id": 65,
"text": "\\frac{g^{n+1}-1}{g-1}"
},
{
"math_id": 66,
"text": "\\mathbb N"
},
{
"math_id": 67,
"text": "(K,+_K,\\times_K)"
},
{
"math_id": 68,
"text": "(R,\\cdot,\\phi)"
},
{
"math_id": 69,
"text": "K\\langle\\langle R\\rangle\\rangle"
},
{
"math_id": 70,
"text": "s,s'\\in K\\langle\\langle R\\rangle\\rangle"
},
{
"math_id": 71,
"text": "m\\in R"
},
{
"math_id": 72,
"text": "s(m)+_Ks'(m)"
},
{
"math_id": 73,
"text": "\\sum_{p,q \\in R \\atop p \\cdot q=m}s(p)\\times_K s'(q)"
}
] | https://en.wikipedia.org/wiki?curid=103094 |
103109 | Outer product | Vector operation
In linear algebra, the outer product of two coordinate vectors is the matrix whose entries are all products of an element in the first vector with an element in the second vector. If the two coordinate vectors have dimensions "n" and "m", then their outer product is an "n" × "m" matrix. More generally, given two tensors (multidimensional arrays of numbers), their outer product is a tensor. The outer product of tensors is also referred to as their tensor product, and can be used to define the tensor algebra.
The outer product contrasts with the dot product (which takes a pair of coordinate vectors and produces a scalar), with ordinary matrix multiplication (which takes a pair of matrices and produces a matrix), and with the Kronecker product; these relationships are discussed below.
Definition.
Given two vectors of size formula_0 and formula_1 respectively
formula_2
their outer product, denoted formula_3 is defined as the formula_4 matrix formula_5 obtained by multiplying each element of formula_6 by each element of formula_7:
formula_8
Or, in index notation:
formula_9
Denoting the dot product by formula_10 if given an formula_1 vector formula_11 then formula_12 If given a formula_13 vector formula_14 then formula_15
If formula_6 and formula_7 are vectors of the same dimension bigger than 1, then formula_16.
The outer product formula_17 is equivalent to a matrix multiplication formula_18 provided that formula_6 is represented as a formula_0 column vector and formula_7 as a formula_1 column vector (which makes formula_19 a row vector). For instance, if formula_20 and formula_21 then
formula_22
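As a quick illustration (not part of the original text), the case "m" = 4, "n" = 3 above can be checked numerically; the vector values below are arbitrary:

```python
# The outer product as a column vector times a row vector, computed three ways.
import numpy as np

u = np.array([1.0, 2.0, 3.0, 4.0])   # m = 4
v = np.array([5.0, 6.0, 7.0])        # n = 3

A = np.outer(u, v)                                            # 4 x 3, entries u_i * v_j
assert A.shape == (4, 3)
assert np.allclose(A, u[:, None] * v[None, :])                # via broadcasting
assert np.allclose(A, u.reshape(-1, 1) @ v.reshape(1, -1))    # as the matrix product u v^T
```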
For complex vectors, it is often useful to take the conjugate transpose of formula_23 denoted formula_24 or formula_25:
formula_26
Contrast with Euclidean inner product.
If formula_27 then one can take the matrix product the other way, yielding a scalar (or formula_28 matrix):
formula_29
which is the standard inner product for Euclidean vector spaces, better known as the dot product. The dot product is the trace of the outer product. Unlike the dot product, the outer product is not commutative.
Multiplication of a vector formula_30 by the matrix formula_17 can be written in terms of the inner product, using the relation formula_31.
The outer product of tensors.
Given two tensors formula_32 with dimensions formula_33 and formula_34, their outer product formula_17 is a tensor with dimensions formula_35 and entries
formula_36
For example, if formula_5 is of order 3 with dimensions formula_37 and formula_38 is of order 2 with dimensions formula_39 then their outer product formula_40 is of order 5 with dimensions formula_41 If formula_5 has a component "A"[2, 2, 4] = 11 and formula_38 has a component "B"[8, 88] = 13, then the component of formula_40 formed by the outer product is "C"[2, 2, 4, 8, 88] = 143.
Connection with the Kronecker product.
The outer product and Kronecker product are closely related; in fact the same symbol is commonly used to denote both operations.
If formula_42 and formula_43, we have:
formula_44
In the case of column vectors, the Kronecker product can be viewed as a form of vectorization (or flattening) of the outer product. In particular, for two column vectors formula_6 and formula_7, we can write:
formula_45
Another similar identity that further highlights the similarity between the operations is
formula_46
where the order of the vectors need not be flipped. The middle expression uses matrix multiplication, where the vectors are considered as column/row matrices.
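These relationships can be checked numerically; the following sketch (not from the original text, with arbitrary vector values) uses NumPy and a column-major (Fortran-order) flattening for the vectorization:

```python
# Relating the Kronecker and outer products of the column vectors u and v.
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5])

outer = np.outer(u, v)          # 3 x 2 matrix
kron = np.kron(u, v)            # length-6 vector [4, 5, 8, 10, 12, 15]

# Kronecker product as a flattening (vectorization) of the outer product v (x) u
assert np.array_equal(kron, np.outer(v, u).flatten(order="F"))

# Treating u as a column and v^T as a row, the Kronecker product equals u v^T = outer(u, v)
assert np.array_equal(np.kron(u.reshape(-1, 1), v.reshape(1, -1)), outer)
```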
Connection with the matrix product.
Given a pair of matrices formula_5 of size formula_47 and formula_38 of size formula_48, consider the matrix product formula_49 defined as usual as a matrix of size formula_50.
Now let formula_51 be the formula_52-th column vector of formula_53 and let formula_54 be the formula_52-th row vector of formula_55. Then formula_40 can be expressed as a sum of column-by-row outer products:
formula_56
This expression is dual to the more common representation of the matrix product, in which each entry is a row-by-column inner product (dot product): formula_57
This relation is relevant in the application of the Singular Value Decomposition (SVD) (and Spectral Decomposition as a special case). In particular, the decomposition can be interpreted as the sum of outer products of each left (formula_58) and right (formula_59) singular vectors, scaled by the corresponding nonzero singular value formula_60:
formula_61
This result implies that formula_5 can be expressed as a sum of rank-1 matrices with spectral norm formula_60 in decreasing order. This explains why, in general, the last terms contribute less, which motivates the use of the truncated SVD as an approximation. The first term is the least squares fit of a matrix to an outer product of vectors.
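The decomposition into outer products can be verified directly; the sketch below (illustrative, with a random matrix) uses NumPy's SVD routine:

```python
# A matrix equals the sum of its singular-value-weighted outer products
# of left and right singular vectors; truncating the sum gives a low-rank approximation.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_rebuilt = sum(s[k] * np.outer(U[:, k], Vt[k, :]) for k in range(len(s)))
assert np.allclose(A, A_rebuilt)

# keeping only the largest singular value gives the best rank-1 approximation
A_rank1 = s[0] * np.outer(U[:, 0], Vt[0, :])
```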
Properties.
The outer product of vectors satisfies the following properties:
formula_62
The outer product of tensors satisfies the additional associativity property:
formula_63
Rank of an outer product.
If u and v are both nonzero, then the outer product matrix uv^T always has matrix rank 1. Indeed, every column of the outer product is a scalar multiple of u, so the columns span a space of dimension one and the matrix is of rank one.
Definition (abstract).
Let V and W be two vector spaces. The outer product of formula_64 and formula_65 is the element formula_66.
If V is an inner product space, then it is possible to define the outer product as a linear map "V" → "W". In this case, the linear map formula_67 is an element of the dual space of V, as this maps linearly a vector into its underlying field, of which formula_68 is an element. The outer product "V" → "W" is then given by
formula_69
This shows why a conjugate transpose of v is commonly taken in the complex case.
In programming languages.
In some programming languages, given a two-argument function codice_0 (or a binary operator), the outer product, codice_0, of two one-dimensional arrays, codice_2 and codice_3, is a two-dimensional array codice_4 such that codice_5. This is syntactically represented in various ways: in APL, as the infix binary operator ∘.f; in J, as the postfix adverb f/; in R, as the function outer(A, B, f) or the special %o%; in Mathematica, as Outer[f, A, B]. In MATLAB, the function kron(A, B) is used for this product. These often generalize to multi-dimensional arguments, and more than two arguments.
In the Python library NumPy, the outer product can be computed with function codice_6. In contrast, codice_7 results in a flat array. The outer product of multidimensional arrays can be computed using codice_8.
Applications.
As the outer product is closely related to the Kronecker product, some of the applications of the Kronecker product use outer products. These applications are found in quantum theory, signal processing, and image compression.
Spinors.
Suppose "s", "t", "w", "z" ∈ C so that ("s", "t") and ("w", "z") are in C2. Then the outer product of these complex 2-vectors is an element of M(2, C), the 2 × 2 complex matrices:
formula_70
The determinant of this matrix is "swtz" − "sztw" = 0 because of the commutative property of C.
In the theory of spinors in three dimensions, these matrices are associated with isotropic vectors due to this null property. Élie Cartan described this construction in 1937, but it was introduced by Wolfgang Pauli in 1927 so that M(2,C) has come to be called Pauli algebra.
Concepts.
The block form of outer products is useful in classification. Concept analysis is a study that depends on certain outer products:
When a vector has only zeros and ones as entries, it is called a "logical vector", a special case of a logical matrix. The logical operation "and" takes the place of multiplication. The outer product of two logical vectors ("u"i) and ("v"j) is given by the logical matrix formula_71. This type of matrix is used in the study of binary relations, and is called a "rectangular relation" or a cross-vector.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m \\times 1"
},
{
"math_id": 1,
"text": "n \\times 1"
},
{
"math_id": 2,
"text": "\\mathbf{u} = \\begin{bmatrix} u_1 \\\\ u_2 \\\\ \\vdots \\\\ u_m \\end{bmatrix},\n\\quad\n\\mathbf{v} = \\begin{bmatrix} v_1 \\\\ v_2 \\\\ \\vdots \\\\ v_n \\end{bmatrix}"
},
{
"math_id": 3,
"text": "\\mathbf{u} \\otimes \\mathbf{v},"
},
{
"math_id": 4,
"text": "m \\times n"
},
{
"math_id": 5,
"text": "\\mathbf{A}"
},
{
"math_id": 6,
"text": "\\mathbf{u}"
},
{
"math_id": 7,
"text": "\\mathbf{v}"
},
{
"math_id": 8,
"text": "\n \\mathbf{u} \\otimes \\mathbf{v} = \\mathbf{A} =\n \\begin{bmatrix}\n u_1v_1 & u_1v_2 & \\dots & u_1v_n \\\\\n u_2v_1 & u_2v_2 & \\dots & u_2v_n \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n u_mv_1 & u_mv_2 & \\dots & u_mv_n\n \\end{bmatrix}\n"
},
{
"math_id": 9,
"text": "(\\mathbf{u} \\otimes \\mathbf{v})_{ij} = u_i v_j"
},
{
"math_id": 10,
"text": "\\,\\cdot,\\,"
},
{
"math_id": 11,
"text": "\\mathbf{w},"
},
{
"math_id": 12,
"text": "(\\mathbf{u} \\otimes \\mathbf{v}) \\mathbf{w} = (\\mathbf{v} \\cdot \\mathbf{w}) \\mathbf{u}."
},
{
"math_id": 13,
"text": "1 \\times m"
},
{
"math_id": 14,
"text": "\\mathbf{x},"
},
{
"math_id": 15,
"text": "\\mathbf{x} (\\mathbf{u} \\otimes \\mathbf{v}) = (\\mathbf{x} \\cdot \\mathbf{u}) \\mathbf{v}^{\\operatorname{T}}."
},
{
"math_id": 16,
"text": "\\det (\\mathbf{u} \\otimes\\mathbf{v}) = 0"
},
{
"math_id": 17,
"text": "\\mathbf{u} \\otimes \\mathbf{v}"
},
{
"math_id": 18,
"text": "\\mathbf{u} \\mathbf{v}^{\\operatorname{T}},"
},
{
"math_id": 19,
"text": "\\mathbf{v}^{\\operatorname{T}}"
},
{
"math_id": 20,
"text": "m = 4"
},
{
"math_id": 21,
"text": "n = 3,"
},
{
"math_id": 22,
"text": "\n \\mathbf{u} \\otimes \\mathbf{v} = \\mathbf{u}\\mathbf{v}^\\textsf{T} =\n \\begin{bmatrix}u_1 \\\\ u_2 \\\\ u_3 \\\\ u_4\\end{bmatrix}\n \\begin{bmatrix}v_1 & v_2 & v_3\\end{bmatrix} =\n \\begin{bmatrix}\n u_1 v_1 & u_1 v_2 & u_1 v_3 \\\\\n u_2 v_1 & u_2 v_2 & u_2 v_3 \\\\\n u_3 v_1 & u_3 v_2 & u_3 v_3 \\\\\n u_4 v_1 & u_4 v_2 & u_4 v_3\n \\end{bmatrix}.\n"
},
{
"math_id": 23,
"text": "\\mathbf{v},"
},
{
"math_id": 24,
"text": "\\mathbf{v}^\\dagger"
},
{
"math_id": 25,
"text": "\\left(\\mathbf{v}^\\textsf{T}\\right)^*"
},
{
"math_id": 26,
"text": "\\mathbf{u} \\otimes \\mathbf{v} = \\mathbf{u} \\mathbf{v}^\\dagger = \\mathbf{u} \\left(\\mathbf{v}^\\textsf{T}\\right)^*."
},
{
"math_id": 27,
"text": "m = n,"
},
{
"math_id": 28,
"text": "1 \\times 1"
},
{
"math_id": 29,
"text": "\\left\\langle\\mathbf{u}, \\mathbf{v}\\right\\rangle = \\mathbf{u}^\\textsf{T} \\mathbf{v}"
},
{
"math_id": 30,
"text": "\\mathbf{w}"
},
{
"math_id": 31,
"text": "\\left(\\mathbf{u} \\otimes \\mathbf{v}\\right)\\mathbf{w} = \\mathbf{u}\\left\\langle\\mathbf{v}, \\mathbf{w}\\right\\rangle"
},
{
"math_id": 32,
"text": "\\mathbf{u}, \\mathbf{v}"
},
{
"math_id": 33,
"text": "(k_1, k_2, \\dots, k_m)"
},
{
"math_id": 34,
"text": "(l_1, l_2, \\dots, l_n)"
},
{
"math_id": 35,
"text": "(k_1, k_2, \\dots, k_m, l_1, l_2, \\dots, l_n)"
},
{
"math_id": 36,
"text": "(\\mathbf{u} \\otimes \\mathbf{v})_{i_1, i_2, \\dots i_m, j_1, j_2, \\dots, j_n} = u_{i_1, i_2, \\dots, i_m} v_{j_1, j_2, \\dots, j_n}"
},
{
"math_id": 37,
"text": "(3, 5, 7)"
},
{
"math_id": 38,
"text": "\\mathbf{B}"
},
{
"math_id": 39,
"text": "(10, 100),"
},
{
"math_id": 40,
"text": "\\mathbf{C}"
},
{
"math_id": 41,
"text": "(3, 5, 7, 10, 100)."
},
{
"math_id": 42,
"text": "\\mathbf{u} = \\begin{bmatrix}1 & 2 & 3\\end{bmatrix}^\\textsf{T}"
},
{
"math_id": 43,
"text": "\\mathbf{v} = \\begin{bmatrix}4 & 5\\end{bmatrix}^\\textsf{T}"
},
{
"math_id": 44,
"text": "\\begin{align}\n \\mathbf{u} \\otimes_\\text{Kron} \\mathbf{v} &= \\begin{bmatrix} 4 \\\\ 5 \\\\ 8 \\\\ 10 \\\\ 12 \\\\ 15\\end{bmatrix}, &\n \\mathbf{u} \\otimes_\\text{outer} \\mathbf{v} &= \\begin{bmatrix} 4 & 5 \\\\ 8 & 10 \\\\ 12 & 15\\end{bmatrix}\n\\end{align}"
},
{
"math_id": 45,
"text": "\\mathbf{u} \\otimes_{\\text{Kron}} \\mathbf{v} = \\operatorname{vec}(\\mathbf{v} \\otimes_\\text{outer} \\mathbf{u})"
},
{
"math_id": 46,
"text": "\\mathbf{u} \\otimes_{\\text{Kron}} \\mathbf{v}^\\textsf{T} = \\mathbf u \\mathbf{v}^\\textsf{T} = \\mathbf{u} \\otimes_{\\text{outer}} \\mathbf{v}"
},
{
"math_id": 47,
"text": "m\\times p"
},
{
"math_id": 48,
"text": "p\\times n"
},
{
"math_id": 49,
"text": "\\mathbf{C} = \\mathbf{A}\\,\\mathbf{B}"
},
{
"math_id": 50,
"text": "m\\times n"
},
{
"math_id": 51,
"text": "\\mathbf a^\\text{col}_k"
},
{
"math_id": 52,
"text": "k"
},
{
"math_id": 53,
"text": "\\mathbf A"
},
{
"math_id": 54,
"text": "\\mathbf b^\\text{row}_k"
},
{
"math_id": 55,
"text": "\\mathbf B"
},
{
"math_id": 56,
"text": "\\mathbf{C} = \\mathbf{A}\\, \\mathbf{B} =\n\\left(\n \\sum_{k=1}^p {A}_{ik}\\, {B}_{kj}\n\\right)_{\n \\begin{matrix} 1\\le i \\le m \\\\[-20pt] 1 \\le j\\le n \\end{matrix}\n} =\n\\begin{bmatrix} & & \\\\ \\mathbf a^\\text{col}_{1} & \\cdots & \\mathbf a^\\text{col}_{p} \\\\ & & \\end{bmatrix}\n\\begin{bmatrix} & \\mathbf b^\\text{row}_{1} & \\\\ & \\vdots & \\\\ & \\mathbf b^\\text{row}_{p} & \\end{bmatrix}\n= \\sum_{k=1}^p \\mathbf a^\\text{col}_k \\otimes \\mathbf b^\\text{row}_k"
},
{
"math_id": 57,
"text": "C_{ij} = \\langle{\\mathbf a^\\text{row}_i,\\,\\mathbf b_j^\\text{col}}\\rangle"
},
{
"math_id": 58,
"text": "\\mathbf{u}_k"
},
{
"math_id": 59,
"text": "\\mathbf{v}_k"
},
{
"math_id": 60,
"text": "\\sigma_k"
},
{
"math_id": 61,
"text": "\\mathbf{A} = \\mathbf{U \\Sigma V^T} = \\sum_{k=1}^{\\operatorname{rank}(A)}(\\mathbf{u}_k \\otimes \\mathbf{v}_k) \\, \\sigma_k"
},
{
"math_id": 62,
"text": "\\begin{align}\n (\\mathbf{u} \\otimes \\mathbf{v})^\\textsf{T} &= (\\mathbf{v} \\otimes \\mathbf{u}) \\\\\n (\\mathbf{v} + \\mathbf{w}) \\otimes \\mathbf{u} &= \\mathbf{v} \\otimes \\mathbf{u} + \\mathbf{w} \\otimes \\mathbf{u} \\\\\n \\mathbf{u} \\otimes (\\mathbf{v} + \\mathbf{w}) &= \\mathbf{u} \\otimes \\mathbf{v} + \\mathbf{u} \\otimes \\mathbf{w} \\\\\n c (\\mathbf{v} \\otimes \\mathbf{u}) &= (c\\mathbf{v}) \\otimes \\mathbf{u} = \\mathbf{v} \\otimes (c\\mathbf{u})\n\\end{align}"
},
{
"math_id": 63,
"text": "\n (\\mathbf{u} \\otimes \\mathbf{v}) \\otimes \\mathbf{w} =\n \\mathbf{u} \\otimes (\\mathbf{v} \\otimes \\mathbf{w})\n"
},
{
"math_id": 64,
"text": "\\mathbf v \\in V"
},
{
"math_id": 65,
"text": "\\mathbf w \\in W"
},
{
"math_id": 66,
"text": "\\mathbf v \\otimes \\mathbf w \\in V \\otimes W"
},
{
"math_id": 67,
"text": "\\mathbf x \\mapsto \\langle \\mathbf v, \\mathbf x\\rangle"
},
{
"math_id": 68,
"text": "\\langle \\mathbf v, \\mathbf x\\rangle"
},
{
"math_id": 69,
"text": "(\\mathbf w \\otimes \\mathbf v) (\\mathbf x) = \\left\\langle \\mathbf v, \\mathbf x \\right\\rangle \\mathbf w."
},
{
"math_id": 70,
"text": "\\begin{pmatrix} sw & tw \\\\ sz & tz \\end{pmatrix}."
},
{
"math_id": 71,
"text": "\\left(a_{ij}\\right) = \\left(u_i \\land v_j\\right)"
}
] | https://en.wikipedia.org/wiki?curid=103109 |
103118 | Distributive property | Property involving two mathematical operations
In mathematics, the distributive property of binary operations is a generalization of the distributive law, which asserts that the equality
formula_0
is always true in elementary algebra.
For example, in elementary arithmetic, one has
formula_1
Therefore, one would say that multiplication "distributes" over addition.
This basic property of numbers is part of the definition of most algebraic structures that have two operations called addition and multiplication, such as complex numbers, polynomials, matrices, rings, and fields. It is also encountered in Boolean algebra and mathematical logic, where each of the logical and (denoted formula_2) and the logical or (denoted formula_3) distributes over the other.
Definition.
Given a set formula_4 and two binary operators formula_5 and formula_6 on formula_7
the operation formula_5 is said to be left-distributive over formula_6 if, for all elements formula_8 of formula_4,
formula_9
the operation formula_5 is said to be right-distributive over formula_6 if, for all elements formula_8 of formula_4,
formula_10
and formula_5 is said to be distributive over formula_6 if it is both left- and right-distributive.
When formula_5 is commutative, the three conditions above are logically equivalent.
Meaning.
The operators used for examples in this section are those of the usual addition formula_6 and multiplication formula_11
If the operation denoted formula_12 is not commutative, there is a distinction between left-distributivity and right-distributivity:
formula_13
formula_14
In either case, the distributive property can be described in words as:
To multiply a sum (or difference) by a factor, each summand (or minuend and subtrahend) is multiplied by this factor and the resulting products are added (or subtracted).
If the operation outside the parentheses (in this case, the multiplication) is commutative, then left-distributivity implies right-distributivity and vice versa, and one talks simply of distributivity.
One example of an operation that is "only" right-distributive is division, which is not commutative:
formula_15
In this case, left-distributivity does not apply:
formula_16
The distributive laws are among the axioms for rings (like the ring of integers) and fields (like the field of rational numbers). Here multiplication is distributive over addition, but addition is not distributive over multiplication. Examples of structures with two operations that are each distributive over the other are Boolean algebras such as the algebra of sets or the switching algebra.
Multiplying sums can be put into words as follows: When a sum is multiplied by a sum, multiply each summand of a sum with each summand of the other sum (keeping track of signs) then add up all of the resulting products.
Examples.
Real numbers.
In the following examples, the use of the distributive law on the set of real numbers formula_17 is illustrated. When multiplication is mentioned in elementary mathematics, it usually refers to this kind of multiplication. From the point of view of algebra, the real numbers form a field, which ensures the validity of the distributive law.
<templatestyles src="Glossary/styles.css" />
Matrices.
The distributive law is valid for matrix multiplication. More precisely,
formula_18
for all formula_19-matrices formula_20 and formula_21-matrices formula_22 as well as
formula_23
for all formula_19-matrices formula_24 and formula_21-matrices formula_25
Because the commutative property does not hold for matrix multiplication, the second law does not follow from the first law. In this case, they are two different laws.
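A quick numerical check of both laws (not part of the original text; matrix sizes and values are arbitrary):

```python
# Verify the two matrix distributive laws; since matrix multiplication is not
# commutative, left- and right-distributivity are genuinely separate statements.
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.integers(-5, 5, size=(2, 3, 4))    # two 3 x 4 matrices
C = rng.integers(-5, 5, size=(4, 2))          # a 4 x 2 matrix

assert np.array_equal((A + B) @ C, A @ C + B @ C)    # right-distributivity

D, E = rng.integers(-5, 5, size=(2, 4, 2))    # two 4 x 2 matrices
assert np.array_equal(A @ (D + E), A @ D + A @ E)    # left-distributivity
```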
Propositional logic.
Rule of replacement.
In standard truth-functional propositional logic, distribution in logical proofs uses two valid rules of replacement to expand individual occurrences of certain logical connectives, within some formula, into separate applications of those connectives across subformulas of the given formula. The rules are
formula_35
where "formula_36", also written formula_37 is a metalogical symbol representing "can be replaced in a proof with" or "is logically equivalent to".
Truth functional connectives.
Distributivity is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that distributivity is a property of particular connectives. The following are truth-functional tautologies.
formula_38
formula_39
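Since these are truth-functional tautologies, they can be confirmed by exhaustively checking all truth assignments; a small illustrative sketch:

```python
# Brute-force truth-table check of the distribution of conjunction over
# disjunction and of disjunction over conjunction.
from itertools import product

for P, Q, R in product([False, True], repeat=3):
    assert (P and (Q or R)) == ((P and Q) or (P and R))
    assert (P or (Q and R)) == ((P or Q) and (P or R))
```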
Distributivity and rounding.
In approximate arithmetic, such as floating-point arithmetic, the distributive property of multiplication (and division) over addition may fail because of the limitations of arithmetic precision. For example, the identity formula_40 fails in decimal arithmetic, regardless of the number of significant digits. Methods such as banker's rounding may help in some cases, as may increasing the precision used, but ultimately some calculation errors are inevitable.
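This failure can be reproduced with Python's decimal module, which works with a fixed number of significant digits (an illustrative sketch, not from the original text):

```python
# With six significant digits, 1/3 + 1/3 + 1/3 rounds to 0.999999,
# while (1 + 1 + 1)/3 is exactly 1, so the identity fails.
from decimal import Decimal, getcontext

getcontext().prec = 6
third = Decimal(1) / Decimal(3)                              # Decimal('0.333333')
lhs = third + third + third                                  # Decimal('0.999999')
rhs = (Decimal(1) + Decimal(1) + Decimal(1)) / Decimal(3)    # Decimal('1')
assert lhs != rhs
```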
In rings and other structures.
Distributivity is most commonly found in semirings, notably the particular cases of rings and distributive lattices.
A semiring has two binary operations, commonly denoted formula_6 and formula_41 and requires that formula_5 must distribute over formula_42
A ring is a semiring with additive inverses.
A lattice is another kind of algebraic structure with two binary operations, formula_43
If either of these operations distributes over the other (say formula_2 distributes over formula_44), then the reverse also holds (formula_3 distributes over formula_2), and the lattice is called distributive. See also Distributivity (order theory).
A Boolean algebra can be interpreted either as a special kind of ring (a Boolean ring) or a special kind of distributive lattice (a Boolean lattice). Each interpretation is responsible for different distributive laws in the Boolean algebra.
Similar structures without distributive laws are near-rings and near-fields instead of rings and division rings. The operations are usually defined to be distributive on the right but not on the left.
Generalizations.
In several mathematical areas, generalized distributivity laws are considered. This may involve the weakening of the above conditions or the extension to infinitary operations. Especially in order theory one finds numerous important variants of distributivity, some of which include infinitary operations, such as the infinite distributive law, while others are defined in the presence of only one binary operation; the corresponding definitions and their relations are given in the article distributivity (order theory). This also includes the notion of a completely distributive lattice.
In the presence of an ordering relation, one can also weaken the above equalities by replacing formula_45 by either formula_46 or formula_47 Naturally, this will lead to meaningful concepts only in some situations. An application of this principle is the notion of sub-distributivity as explained in the article on interval arithmetic.
In category theory, if formula_48 and formula_49 are monads on a category formula_22 a distributive law formula_50 is a natural transformation formula_51 such that formula_52 is a lax map of monads formula_53 and formula_54 is a colax map of monads formula_55 This is exactly the data needed to define a monad structure on formula_56: the multiplication map is formula_57 and the unit map is formula_58 See: distributive law between monads.
A generalized distributive law has also been proposed in the area of information theory.
Antidistributivity.
The ubiquitous identity that relates inverses to the binary operation in any group, namely formula_59 which is taken as an axiom in the more general context of a semigroup with involution, has sometimes been called an antidistributive property (of inversion as a unary operation).
In the context of a near-ring, which removes the commutativity of the additively written group and assumes only one-sided distributivity, one can speak of (two-sided) distributive elements but also of antidistributive elements. The latter reverse the order of (the non-commutative) addition; assuming a left near-ring (i.e., one in which all elements distribute when multiplied on the left), an antidistributive element formula_60 reverses the order of addition when multiplied to the right: formula_61
In the study of propositional logic and Boolean algebra, the term antidistributive law is sometimes used to denote the interchange between conjunction and disjunction when implication factors over them:
formula_62
formula_63
These two tautologies are a direct consequence of the duality in De Morgan's laws.
Notes.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x \\cdot (y + z) = x \\cdot y + x \\cdot z"
},
{
"math_id": 1,
"text": "2 \\cdot (1 + 3) = (2 \\cdot 1) + (2 \\cdot 3)."
},
{
"math_id": 2,
"text": "\\,\\land\\,"
},
{
"math_id": 3,
"text": "\\,\\lor\\,"
},
{
"math_id": 4,
"text": "S"
},
{
"math_id": 5,
"text": "\\,*\\,"
},
{
"math_id": 6,
"text": "\\,+\\,"
},
{
"math_id": 7,
"text": "S,"
},
{
"math_id": 8,
"text": "x, y, \\text{ and } z"
},
{
"math_id": 9,
"text": "x * (y + z) = (x * y) + (x * z);"
},
{
"math_id": 10,
"text": "(y + z) * x = (y * x) + (z * x);"
},
{
"math_id": 11,
"text": "\\,\\cdot.\\,"
},
{
"math_id": 12,
"text": "\\cdot"
},
{
"math_id": 13,
"text": "a \\cdot \\left( b \\pm c \\right) = a \\cdot b \\pm a \\cdot c \\qquad \\text{ (left-distributive) }"
},
{
"math_id": 14,
"text": "(a \\pm b) \\cdot c = a \\cdot c \\pm b \\cdot c \\qquad \\text{ (right-distributive) }."
},
{
"math_id": 15,
"text": "(a \\pm b) \\div c = a \\div c \\pm b \\div c."
},
{
"math_id": 16,
"text": "a \\div(b \\pm c) \\neq a \\div b \\pm a \\div c"
},
{
"math_id": 17,
"text": "\\R"
},
{
"math_id": 18,
"text": "(A + B) \\cdot C = A \\cdot C + B \\cdot C"
},
{
"math_id": 19,
"text": "l \\times m"
},
{
"math_id": 20,
"text": "A, B"
},
{
"math_id": 21,
"text": "m \\times n"
},
{
"math_id": 22,
"text": "C,"
},
{
"math_id": 23,
"text": "A \\cdot (B + C) = A \\cdot B + A \\cdot C"
},
{
"math_id": 24,
"text": "A"
},
{
"math_id": 25,
"text": "B, C."
},
{
"math_id": 26,
"text": "\\max(a, \\min(b, c)) = \\min(\\max(a, b), \\max(a, c)) \\quad \\text{ and } \\quad \\min(a, \\max(b, c)) = \\max(\\min(a, b), \\min(a, c))."
},
{
"math_id": 27,
"text": "\\gcd(a, \\operatorname{lcm}(b, c)) = \\operatorname{lcm}(\\gcd(a, b), \\gcd(a, c)) \\quad \\text{ and } \\quad \\operatorname{lcm}(a, \\gcd(b, c)) = \\gcd(\\operatorname{lcm}(a, b), \\operatorname{lcm}(a, c))."
},
{
"math_id": 28,
"text": "a + \\max(b, c) = \\max(a + b, a + c) \\quad \\text{ and } \\quad a + \\min(b, c) = \\min(a + b, a + c)."
},
{
"math_id": 29,
"text": "a c,"
},
{
"math_id": 30,
"text": "a d,"
},
{
"math_id": 31,
"text": "b c,"
},
{
"math_id": 32,
"text": "b d"
},
{
"math_id": 33,
"text": "(a + b) \\cdot (c + d) = a c + a d + b c + b d."
},
{
"math_id": 34,
"text": "u (v + w) = u v + u w, (u + v)w = u w + v w."
},
{
"math_id": 35,
"text": "(P \\land (Q \\lor R)) \\Leftrightarrow ((P \\land Q) \\lor (P \\land R)) \\qquad \\text{ and } \\qquad (P \\lor (Q \\land R)) \\Leftrightarrow ((P \\lor Q) \\land (P \\lor R))"
},
{
"math_id": 36,
"text": "\\Leftrightarrow"
},
{
"math_id": 37,
"text": "\\,\\equiv,\\,"
},
{
"math_id": 38,
"text": "\\begin{alignat}{13}\n&(P &&\\;\\land &&(Q \\lor R)) &&\\;\\Leftrightarrow\\;&& ((P \\land Q) &&\\;\\lor (P \\land R)) && \\quad\\text{ Distribution of } && \\text{ conjunction } && \\text{ over } && \\text{ disjunction } \\\\\n&(P &&\\;\\lor &&(Q \\land R)) &&\\;\\Leftrightarrow\\;&& ((P \\lor Q) &&\\;\\land (P \\lor R)) && \\quad\\text{ Distribution of } && \\text{ disjunction } && \\text{ over } && \\text{ conjunction } \\\\\n&(P &&\\;\\land &&(Q \\land R)) &&\\;\\Leftrightarrow\\;&& ((P \\land Q) &&\\;\\land (P \\land R)) && \\quad\\text{ Distribution of } && \\text{ conjunction } && \\text{ over } && \\text{ conjunction } \\\\\n&(P &&\\;\\lor &&(Q \\lor R)) &&\\;\\Leftrightarrow\\;&& ((P \\lor Q) &&\\;\\lor (P \\lor R)) && \\quad\\text{ Distribution of } && \\text{ disjunction } && \\text{ over } && \\text{ disjunction } \\\\\n&(P &&\\to &&(Q \\to R)) &&\\;\\Leftrightarrow\\;&& ((P \\to Q) &&\\to (P \\to R)) && \\quad\\text{ Distribution of } && \\text{ implication } && \\text{ } && \\text{ } \\\\\n&(P &&\\to &&(Q \\leftrightarrow R)) &&\\;\\Leftrightarrow\\;&& ((P \\to Q) &&\\leftrightarrow (P \\to R)) && \\quad\\text{ Distribution of } && \\text{ implication } && \\text{ over } && \\text{ equivalence } \\\\\n&(P &&\\to &&(Q \\land R)) &&\\;\\Leftrightarrow\\;&& ((P \\to Q) &&\\;\\land (P \\to R)) && \\quad\\text{ Distribution of } && \\text{ implication } && \\text{ over } && \\text{ conjunction } \\\\\n&(P &&\\;\\lor &&(Q \\leftrightarrow R)) &&\\;\\Leftrightarrow\\;&& ((P \\lor Q) &&\\leftrightarrow (P \\lor R)) && \\quad\\text{ Distribution of } && \\text{ disjunction } && \\text{ over } && \\text{ equivalence } \\\\\n\\end{alignat}"
},
{
"math_id": 39,
"text": "\\begin{alignat}{13}\n&((P \\land Q) &&\\;\\lor (R \\land S)) &&\\;\\Leftrightarrow\\;&& (((P \\lor R) \\land (P \\lor S)) &&\\;\\land ((Q \\lor R) \\land (Q \\lor S))) && \\\\\n&((P \\lor Q) &&\\;\\land (R \\lor S)) &&\\;\\Leftrightarrow\\;&& (((P \\land R) \\lor (P \\land S)) &&\\;\\lor ((Q \\land R) \\lor (Q \\land S))) && \\\\\n\\end{alignat}"
},
{
"math_id": 40,
"text": "1/3 + 1/3 + 1/3 = (1 + 1 + 1) / 3"
},
{
"math_id": 41,
"text": "\\,*,"
},
{
"math_id": 42,
"text": "\\,+."
},
{
"math_id": 43,
"text": "\\,\\land \\text{ and } \\lor."
},
{
"math_id": 44,
"text": "\\,\\lor"
},
{
"math_id": 45,
"text": "\\,=\\,"
},
{
"math_id": 46,
"text": "\\,\\leq\\,"
},
{
"math_id": 47,
"text": "\\,\\geq."
},
{
"math_id": 48,
"text": "(S, \\mu, \\nu)"
},
{
"math_id": 49,
"text": "\\left(S^{\\prime}, \\mu^{\\prime}, \\nu^{\\prime}\\right)"
},
{
"math_id": 50,
"text": "S . S^{\\prime} \\to S^{\\prime} . S"
},
{
"math_id": 51,
"text": "\\lambda : S . S^{\\prime} \\to S^{\\prime} . S"
},
{
"math_id": 52,
"text": "\\left(S^{\\prime}, \\lambda\\right)"
},
{
"math_id": 53,
"text": "S \\to S"
},
{
"math_id": 54,
"text": "(S, \\lambda)"
},
{
"math_id": 55,
"text": "S^{\\prime} \\to S^{\\prime}."
},
{
"math_id": 56,
"text": "S^{\\prime} . S"
},
{
"math_id": 57,
"text": "S^{\\prime} \\mu . \\mu^{\\prime} S^2 . S^{\\prime} \\lambda S"
},
{
"math_id": 58,
"text": "\\eta^{\\prime} S . \\eta."
},
{
"math_id": 59,
"text": "(x y)^{-1} = y^{-1} x^{-1},"
},
{
"math_id": 60,
"text": "a"
},
{
"math_id": 61,
"text": "(x + y) a = y a + x a."
},
{
"math_id": 62,
"text": "(a \\lor b) \\Rightarrow c \\equiv (a \\Rightarrow c) \\land (b \\Rightarrow c)"
},
{
"math_id": 63,
"text": "(a \\land b) \\Rightarrow c \\equiv (a \\Rightarrow c) \\lor (b \\Rightarrow c)."
}
] | https://en.wikipedia.org/wiki?curid=103118 |
10313521 | Anomalous cancellation | Kind of arithmetic error
Anomalous cancellation in calculus: formula_0
An anomalous cancellation or accidental cancellation is a particular kind of arithmetic procedural error that gives a numerically correct answer. An attempt is made to reduce a fraction by cancelling individual digits in the numerator and denominator. This is not a legitimate operation, and does not in general give a correct answer, but in some rare cases the result is numerically the same as if a correct procedure had been applied. The trivial cases of cancelling trailing zeros or where all of the digits are equal are ignored.
Examples of anomalous cancellations which still produce the correct result include the following (these and their inverses are all the cases in base 10 with the fraction different from 1 and with two digits): 16/64 = 1/4, 19/95 = 1/5, 26/65 = 2/5, and 49/98 = 4/8.
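These two-digit cases can be recovered by a short brute-force search; the following sketch (not part of the original article, with an illustrative function name) checks the two "crossed" cancellation patterns and should report exactly the four fractions above together with their inverses:

```python
# Search for two-digit/two-digit fractions whose value is unchanged when a
# shared digit (numerator's units vs. denominator's tens, or vice versa) is
# crossed out; trivial trailing-zero cases and fractions equal to 1 are skipped.
from fractions import Fraction

def anomalous_cancellations(base=10):
    results = []
    for num in range(base, base * base):
        for den in range(base, base * base):
            n_hi, n_lo = divmod(num, base)
            d_hi, d_lo = divmod(den, base)
            if num == den or (n_lo == 0 and d_lo == 0):
                continue
            if n_lo == d_hi and d_lo != 0 and Fraction(num, den) == Fraction(n_hi, d_lo):
                results.append((num, den, n_hi, d_lo))
            if n_hi == d_lo and d_hi != 0 and Fraction(num, den) == Fraction(n_lo, d_hi):
                results.append((num, den, n_lo, d_hi))
    return results

print(anomalous_cancellations())
```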
The article by Boas analyzes two-digit cases in bases other than base 10; e.g., 32/13 = 2/1 and its inverse are the only solutions in base 4 with two digits.
An example of anomalous cancellation with more than two digits is 165/462 = 15/42, and an example with different numbers of digits is 98/392 = 8/32.
Elementary properties.
When the base is prime, no two-digit solutions exist. This can be proven by contradiction: suppose a solution exists. Without loss of generality, we can say that this solution is
formula_1
where the double vertical line indicates digit concatenation. Thus, we have
formula_2
But formula_3, as they are digits in base formula_4; yet formula_4 divides formula_5, which means that formula_6. Therefore, the right-hand side is zero, which means the left-hand side must also be zero, i.e., formula_7, a contradiction by the definition of the problem. (If formula_7, the calculation becomes formula_8, which is one of the excluded trivial cases.)
Another property is that the number of solutions in a base formula_9 is odd if and only if formula_9 is an even square. This can be proven similarly to the above: suppose that we have a solution
formula_10
Then, doing the same manipulation, we get
formula_11
Suppose that formula_12. Then note that formula_13 is also a solution to the equation. This almost sets up an involution from the set of solutions to itself. But we can also substitute in the fixed points of this map (those with "c" = "a" − "b") to get formula_14, which only has solutions when formula_9 is a square. Let formula_15. Taking square roots and rearranging yields formula_16. Since the greatest common divisor of formula_17 is one, we know that formula_18. Noting that formula_19, this has precisely the solutions formula_20: i.e., it has an odd number of solutions when formula_15 is an even square. The converse of the statement may be proven by noting that these solutions all satisfy the initial requirements.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{array}{l}\n\\;\\;\\; \\dfrac {d} {dx} \\dfrac{1}{x}\n\\\\ = \\dfrac {d} {d} \\dfrac{1}{x^2}\n\\\\ = \\dfrac{d\\!\\!\\!\\backslash}{d\\!\\!\\!\\backslash} \\dfrac{1}{x^2}\n\\\\ = - \\dfrac{1}{x^2}\n\\end{array}"
},
{
"math_id": 1,
"text": "\\frac{a||b}{c||a}=\\frac{b}{c},\\ {\\rm base}\\ p,"
},
{
"math_id": 2,
"text": "\\frac{ap+b}{cp+a}=\\frac{b}{c}\\implies (a-b)cp=b(a-c)"
},
{
"math_id": 3,
"text": "p>a,b,a-c"
},
{
"math_id": 4,
"text": "p"
},
{
"math_id": 5,
"text": "b(a-c)"
},
{
"math_id": 6,
"text": "a=c"
},
{
"math_id": 7,
"text": "a=b"
},
{
"math_id": 8,
"text": "\\frac{a||a}{c||a}=\\frac{a}{c} \\implies \\frac{a||a}{a||a}=\\frac{a}{a}=1"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "\\frac{a||b}{c||a}=\\frac{b}{c}"
},
{
"math_id": 11,
"text": "\\frac{an+b}{cn+a}=\\frac{b}{c}\\implies (a-b)cn=b(a-c)"
},
{
"math_id": 12,
"text": "a>b,c"
},
{
"math_id": 13,
"text": "a,b,c\\to a,a-c,a-b"
},
{
"math_id": 14,
"text": "(a-b)^2n=b^2"
},
{
"math_id": 15,
"text": "n=k^2"
},
{
"math_id": 16,
"text": "ak=(k+1)b"
},
{
"math_id": 17,
"text": "k,(k+1)"
},
{
"math_id": 18,
"text": "a=(k+1)x,b=kx"
},
{
"math_id": 19,
"text": "a,b<k^2"
},
{
"math_id": 20,
"text": "x=1,2,3,\\ldots,k-1"
}
] | https://en.wikipedia.org/wiki?curid=10313521 |
10314482 | De Bruijn index | Mathematical notation in lambda calculus
In mathematical logic, the de Bruijn index is a tool invented by the Dutch mathematician Nicolaas Govert de Bruijn for representing terms of lambda calculus without naming the bound variables. Terms written using these indices are invariant with respect to α-conversion, so the check for α-equivalence is the same as that for syntactic equality. Each de Bruijn index is a natural number that represents an occurrence of a variable in a λ-term, and denotes the number of binders that are in scope between that occurrence and its corresponding binder. The following are some examples: the term λ"x". λ"y". "x", sometimes called the K combinator, is written λ λ 2 with de Bruijn indices, and the term λ"x". λ"y". λ"z". "x" "z" ("y" "z") (the S combinator) is written λ λ λ 3 1 (2 1).
De Bruijn indices are commonly used in higher-order reasoning systems such as automated theorem provers and logic programming systems.
Formal definition.
Formally, λ-terms ("M", "N", ...) written using de Bruijn indices have the following syntax (parentheses allowed freely):
"M", "N", ... ::= "n" | "M" "N" | λ "M"
where "n"—natural numbers greater than 0—are the variables. A variable "n" is bound if it is in the scope of at least "n" binders (λ); otherwise it is free. The binding site for a variable "n" is the "n"th binder it is in the scope of, starting from the innermost binder.
The most primitive operation on λ-terms is substitution: replacing free variables in a term with other terms. In the β-reduction (λ "M") "N", for example, we must (1) find the occurrences in "M" of the variable bound by the λ, (2) decrement the remaining free variables of "M" to match the removal of that λ-binder, and (3) replace the occurrences found in step 1 with "N", incrementing the free variables of "N" each time so as to match the number of λ-binders under which the corresponding occurrence appears.
To illustrate, consider the application
(λ λ 4 2 (λ 1 3)) (λ 5 1)
which might correspond to the following term written in the usual notation
(λ"x". λ"y". "z" "x" (λ"u". "u" "x")) (λ"x". "w" "x").
After step 1, we obtain the term λ 4 □ (λ 1 □), where the variables that are destined for substitution are replaced with boxes. Step 2 decrements the free variables, giving λ 3 □ (λ 1 □). Finally, in step 3, we replace the boxes with the argument, namely λ 5 1; the first box is under one binder, so we replace it with λ 6 1 (which is λ 5 1 with the free variables increased by 1); the second is under two binders, so we replace it with λ 7 1. The final result is λ 3 (λ 6 1) (λ 1 (λ 7 1)).
Formally, a substitution is an unbounded list of terms, written "M"1."M"2..., where "M""i" is the replacement for the "i"th free variable. The increasing operation in step 3 is sometimes called "shift" and written ↑"k" where "k" is a natural number indicating the amount to increase the variables, and is defined by
formula_0
For example, ↑0 is the identity substitution, leaving a term unchanged. A finite list of terms "M"1."M"2..."M"n abbreviates the substitution "M"1."M"2..."M"n.(n+1).(n+2)... leaving all variables greater than n unchanged.
The application of a substitution "s" to a term "M" is written "M"["s"]. The composition of two substitutions "s""1" and "s"2 is written "s"1 "s"2 and is defined by
("M"1."M"2...) "s" = "M"1["s"]."M"2["s"]...
satisfying the property
"M" ["s"1 "s"2] = ("M" ["s"1]) ["s"2],
and substitution is defined on terms as follows:
formula_1
The steps outlined for the β-reduction above are thus more concisely expressed as:
(λ "M") "N" →β "M" ["N"].
Alternatives to de Bruijn indices.
When using the standard "named" representation of λ-terms, where variables are treated as labels or strings, one must explicitly handle α-conversion when defining any operation on the terms. In practice this is cumbersome, inefficient, and often error-prone. It has therefore led to the search for different representations of such terms. On the other hand, the named representation of λ-terms is more pervasive and can be more immediately understandable by others because the variables can be given descriptive names. Thus, even if a system uses de Bruijn indices internally, it will present a user interface with names.
An alternative way to view de Bruijn indices is as de Bruijn levels, which indexes variables from the bottom of the stack rather than from the top. This eliminates the need to reindex free variables, for example when weakening the context, whereas de Bruijn indices eliminate the need to reindex bound variables, for example when
substituting a closed expression in another context.
De Bruijn indices are not the only representation of λ-terms that obviates the problem of α-conversion. Among named representations, the nominal techniques of Pitts and Gabbay is one approach, where the representation of a λ-term is treated as an equivalence class of all terms rewritable to it using variable permutations. This approach is taken by the Nominal Datatype Package of Isabelle/HOL.
Another common alternative is an appeal to higher-order representations where the λ-binder is treated as a true function. In such representations, the issues of α-equivalence, substitution, etc. are identified with the same operations in a meta-logic.
When reasoning about the meta-theoretic properties of a deductive system in a proof assistant, it is sometimes desirable to limit oneself to first-order representations and to have the ability to name or rename assumptions. The "locally nameless approach" uses a mixed representation of variables—de Bruijn indices for bound variables and names for free variables—that is able to benefit from the α-canonical form of de Bruijn indexed terms when appropriate.
Barendregt's variable convention.
Barendregt's variable convention is a convention commonly used in proofs and definitions where it is assumed that the bound variables of the terms under consideration are chosen to be distinct from one another and from any free variables occurring in the same context, and that terms differing only by a renaming of bound variables are identified.
In the general context of an inductive definition, it is not possible to apply α-conversion as needed to convert an inductive definition using the convention to one where the convention is not used, because a variable may appear in both a binding position and a non-binding position in the rule. The induction principle holds if every rule satisfies the following two conditions:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\uparrow^k = (k + 1).(k + 2). ..."
},
{
"math_id": 1,
"text": "\\begin{align}\n n [N_1\\ldots N_n\\ldots] =& N_n \\\\\n (M_1\\;M_2) [s] =& (M_1[s]) (M_2[s]) \\\\\n (\\lambda\\;M) [s] =& \\lambda\\;(M [1.s'])\\\\\n & \\text{where } s' = s \\uparrow^1\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=10314482 |
1031713 | Computer experiment | Experiment used to study computer simulation
A computer experiment or simulation experiment is an experiment used to study a computer simulation, also referred to as an in silico system. This area includes computational physics, computational chemistry, computational biology and other similar disciplines.
Background.
Computer simulations are constructed to emulate a physical system. Because these are meant to replicate some aspect of a system in detail, they often do not yield an analytic solution. Therefore, methods such as discrete event simulation or finite element solvers are used. A computer model is used to make inferences about the system it replicates. For example, climate models are often used because experimentation on an Earth-sized object is impossible.
Objectives.
Computer experiments have been employed with many purposes in mind. Some of those include:
Computer simulation modeling.
Modeling of computer experiments typically uses a Bayesian framework. Bayesian statistics is an interpretation of the field of statistics where all evidence about the true state of the world is explicitly expressed in the form of probabilities. In the realm of computer experiments, the Bayesian interpretation would imply we must form a prior distribution that represents our prior belief on the structure of the computer model. The use of this philosophy for computer experiments started in the 1980s and is nicely summarized by Sacks et al. (1989). While the Bayesian approach is widely used, frequentist approaches have also been discussed recently.
The basic idea of this framework is to model the computer simulation as an unknown function of a set of inputs. The computer simulation is implemented as a piece of computer code that can be evaluated to produce a collection of outputs. Examples of inputs to these simulations are coefficients in the underlying model, initial conditions and forcing functions. It is natural to see the simulation as a deterministic function that maps these "inputs" into a collection of "outputs". On the basis of seeing our simulator this way, it is common to refer to the collection of inputs as formula_0, the computer simulation itself as formula_1, and the resulting output as formula_2. Both formula_0 and formula_2 are vector quantities, and they can be very large collections of values, often indexed by space, or by time, or by both space and time.
Although formula_3 is known in principle, in practice this is not the case. Many simulators comprise tens of thousands of lines of high-level computer code, which is not accessible to intuition. For some simulations, such as climate models, evaluation of the output for a single set of inputs can require millions of computer hours.
Gaussian process prior.
The typical model for a computer code output is a Gaussian process. For notational simplicity, assume formula_4 is a scalar. Owing to the Bayesian framework, we fix our belief that the function formula_1 follows a Gaussian process,
formula_5
where formula_6 is the mean function and formula_7 is the covariance function. Popular mean functions are low-order polynomials, and a popular covariance function is the Matérn covariance, which includes both the exponential (formula_8) and Gaussian covariances (as formula_9).
Design of computer experiments.
The design of computer experiments has considerable differences from design of experiments for parametric models. Since a Gaussian process prior has an infinite-dimensional representation, the concepts of A and D criteria (see Optimal design), which focus on reducing the error in the parameters, cannot be used. Replications would also be wasteful in cases when the computer simulation has no error. Criteria that are used to determine a good experimental design include integrated mean squared prediction error and distance-based criteria.
Popular strategies for design include latin hypercube sampling and low discrepancy sequences.
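To make the workflow concrete, the following sketch (not from the original text; the toy simulator and all settings are arbitrary choices) draws a Latin hypercube design, runs a stand-in simulator on it, and fits a Gaussian-process emulator with a Matérn covariance using SciPy and scikit-learn:

```python
# Latin hypercube design + Gaussian-process emulator for a cheap stand-in simulator.
import numpy as np
from scipy.stats import qmc                       # requires SciPy >= 1.7
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulator(x):
    # stand-in for an expensive computer code with two inputs in [0, 1]
    return np.sin(3 * x[:, 0]) + 0.5 * np.cos(5 * x[:, 1])

sampler = qmc.LatinHypercube(d=2, seed=0)
X = sampler.random(n=40)                          # 40 space-filling design points
y = simulator(X)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

# cheap emulator predictions (with uncertainty) replace further simulator runs
X_new = sampler.random(n=5)
mean, std = gp.predict(X_new, return_std=True)
```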
Problems with massive sample sizes.
Unlike physical experiments, it is common for computer experiments to have thousands of different input combinations. Because the standard inference requires inverting a square matrix whose size is the number of samples (formula_10), the cost grows as formula_11. Matrix inversion of large, dense matrices can also cause numerical inaccuracies. Currently, this problem is addressed by greedy decision-tree techniques, allowing effective computations for unlimited dimensionality and sample size (patent WO2013055257A1), or avoided by using approximation methods.
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "f(x)"
},
{
"math_id": 3,
"text": "f(\\cdot)"
},
{
"math_id": 4,
"text": " f(x) "
},
{
"math_id": 5,
"text": "f \\sim \\operatorname{GP}(m(\\cdot),C(\\cdot,\\cdot)),"
},
{
"math_id": 6,
"text": " m"
},
{
"math_id": 7,
"text": " C "
},
{
"math_id": 8,
"text": " \\nu = 1/2 "
},
{
"math_id": 9,
"text": " \\nu \\rightarrow \\infty "
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": " \\mathcal{O} (n^3) "
}
] | https://en.wikipedia.org/wiki?curid=1031713 |
10317758 | De Bruijn notation | In mathematical logic, the De Bruijn notation is a syntax for terms in the λ calculus invented by the Dutch mathematician Nicolaas Govert de Bruijn. It can be seen as a reversal of the usual syntax for the λ calculus where the argument in an application is placed next to its corresponding binder in the function instead of after the latter's body.
Formal definition.
Terms (formula_0) in the De Bruijn notation are either variables (formula_1), or have one of two "wagon" prefixes. The "abstractor wagon", written formula_2, corresponds to the usual λ-binder of the λ calculus, and the "applicator wagon", written formula_3, corresponds to the argument in an application in the λ calculus.
formula_4
Terms in the traditional syntax can be converted to the De Bruijn notation by defining an inductive function formula_5 for which:
formula_6
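A small illustrative implementation of this translation (not part of the original text; the tuple-based term representation and function name are arbitrary choices) makes the reversal of application explicit:

```python
# Translate traditional lambda-term syntax into the De Bruijn wagon notation.
# Terms are nested tuples: ('var', v), ('lam', v, body), ('app', fn, arg).

def to_wagons(term) -> str:
    tag = term[0]
    if tag == 'var':                 # I(v) = v
        return term[1]
    if tag == 'lam':                 # I(lambda v. M) = [v] I(M)
        _, v, body = term
        return f"[{v}] {to_wagons(body)}"
    if tag == 'app':                 # I(M N) = (I(N)) I(M)
        _, fn, arg = term
        return f"({to_wagons(arg)}) {to_wagons(fn)}"
    raise ValueError(f"unknown term: {term!r}")

# (lambda x. x y) z  translates to  (z) [x] (y) x
example = ('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'y'))), ('var', 'z'))
print(to_wagons(example))
```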
All operations on λ-terms commute with respect to the formula_5 translation. For example, the usual β-reduction,
formula_7
in the De Bruijn notation is, predictably,
formula_8
A feature of this notation is that abstractor and applicator wagons of β-redexes are paired like parentheses. For example, consider the stages in the β-reduction of the term formula_9, where the redexes are underlined:
formula_10
Thus, if one views the applicator as an open paren ('codice_0') and the abstractor as a close bracket ('codice_1'), then the pattern in the above term is 'codice_2'. De Bruijn called an applicator and its corresponding abstractor in this interpretation "partners", and wagons without partners "bachelors". A sequence of wagons, which he called a "segment", is "well balanced" if all its wagons are partnered.
Advantages of the De Bruijn notation.
In a well balanced segment, the partnered wagons may be moved around arbitrarily and, as long as parity is not destroyed, the meaning of the term stays the same. For example, in the above example, the applicator formula_3 can be brought to its abstractor formula_11, or the abstractor to the applicator. In fact, "all" commutative and permutative conversions on lambda terms may be described simply in terms of parity-preserving reorderings of partnered wagons. One thus obtains a "generalised conversion" primitive for λ-terms in the De Bruijn notation.
Several properties of λ-terms that are difficult to state and prove using the traditional notation are easily expressed in the De Bruijn notation. For example, in a type-theoretic setting, one can easily compute the canonical class of types for a term in a typing context, and restate the type checking problem to one of verifying that the checked type is a member of this class. De Bruijn notation has also been shown to be useful in calculi for explicit substitution in pure type systems.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M, N, \\ldots"
},
{
"math_id": 1,
"text": "v"
},
{
"math_id": 2,
"text": "[v]"
},
{
"math_id": 3,
"text": "(M)"
},
{
"math_id": 4,
"text": "M,N,... ::=\\ v\\ |\\ [v]\\;M\\ |\\ (M)\\;N"
},
{
"math_id": 5,
"text": "\\mathcal{I}"
},
{
"math_id": 6,
"text": "\n\\begin{align}\n \\mathcal{I}(v) &= v \\\\\n \\mathcal{I}(\\lambda v.\\ M) &= [v]\\;\\mathcal{I}(M) \\\\\n \\mathcal{I}(M\\;N) &= (\\mathcal{I}(N))\\mathcal{I}(M).\n\\end{align}\n"
},
{
"math_id": 7,
"text": "(\\lambda v.\\ M)\\;N\\ \\ \\longrightarrow_\\beta\\ \\ M[v := N]"
},
{
"math_id": 8,
"text": "(N)\\;[v]\\;M\\ \\ \\longrightarrow_\\beta\\ \\ M[v := N]."
},
{
"math_id": 9,
"text": "(M)\\;(N)\\;[u]\\;(P)\\;[v]\\;[w]\\;(Q)\\;z"
},
{
"math_id": 10,
"text": "\n\\begin{align}\n(M)\\;\\underline{(N)\\;[u]}\\;(P)\\;[v]\\;[w]\\;(Q)\\;z\n &{\\ \\longrightarrow_\\beta\\ } \n (M)\\;\\underline{(P[u:=N])\\;[v]}\\;[w]\\;(Q[u:=N])\\;z \\\\\n &{\\ \\longrightarrow_\\beta\\ }\n \\underline{(M)\\;[w]}\\;(Q[u:=N,v:=P[u:=N]])\\;z \\\\\n &{\\ \\longrightarrow_\\beta\\ }\n (Q[u:=N,v:=P[u:=N],w:=M])\\;z.\n\\end{align}\n"
},
{
"math_id": 11,
"text": "[w]"
}
] | https://en.wikipedia.org/wiki?curid=10317758 |
10318351 | Span (category theory) | In category theory, a span, roof or correspondence is a generalization of the notion of relation between two objects of a category. When the category has all pullbacks (and satisfies a small number of other conditions), spans can be considered as morphisms in a category of fractions.
The notion of a span is due to Nobuo Yoneda (1954) and Jean Bénabou (1967).
Formal definition.
A span is a diagram of type formula_0 i.e., a diagram of the form formula_1.
That is, let Λ be the category (-1 ← 0 → +1). Then a span in a category "C" is a functor "S" : Λ → "C". This means that a span consists of three objects "X", "Y" and "Z" of "C" and morphisms "f" : "X" → "Y" and "g" : "X" → "Z": it is two maps with common "domain".
The colimit of a span is a pushout.
Cospans.
A cospan "K" in a category C is a functor K : Λop → C; equivalently, a "contravariant" functor from Λ to C. That is, a diagram of type formula_6 i.e., a diagram of the form formula_7.
Thus it consists of three objects "X", "Y" and "Z" of C and morphisms "f" : "Y" → "X" and "g" : "Z" → "X": it is two maps with common "codomain."
The limit of a cospan is a pullback.
An example of a cospan is a cobordism "W" between two manifolds "M" and "N", where the two maps are the inclusions into "W". Note that while cobordisms are cospans, the category of cobordisms is not a "cospan category": it is not the category of all cospans in "the category of manifolds with inclusions on the boundary", but rather a subcategory thereof, as the requirement that "M" and "N" form a partition of the boundary of "W" is a global constraint.
The category nCob of finite-dimensional cobordisms is a dagger compact category. More generally, the category Span("C") of spans on any category "C" with finite limits is also dagger compact.
References.
| [
{
"math_id": 0,
"text": "\\Lambda = (-1 \\leftarrow 0 \\rightarrow +1),"
},
{
"math_id": 1,
"text": "Y \\leftarrow X \\rightarrow Z"
},
{
"math_id": 2,
"text": "X \\times Y \\overset{\\pi_X}{\\to} X"
},
{
"math_id": 3,
"text": "X \\times Y \\overset{\\pi_Y}{\\to} Y"
},
{
"math_id": 4,
"text": "\\phi\\colon A \\to B"
},
{
"math_id": 5,
"text": "X \\leftarrow Y \\rightarrow Z,"
},
{
"math_id": 6,
"text": "\\Lambda^\\text{op} = (-1 \\rightarrow 0 \\leftarrow +1),"
},
{
"math_id": 7,
"text": "Y \\rightarrow X \\leftarrow Z"
}
] | https://en.wikipedia.org/wiki?curid=10318351 |
10319171 | Density wave theory | Density wave theory or the Lin–Shu density wave theory is a theory proposed by C.C. Lin and Frank Shu in the mid-1960s to explain the spiral arm structure of spiral galaxies. The Lin–Shu theory introduces the idea of long-lived quasistatic spiral structure (QSSS hypothesis). In this hypothesis, the spiral pattern rotates with a particular angular frequency (pattern speed), whereas the stars in the galactic disk orbit at varying speeds, which depend on their distance to the galaxy center. The presence of spiral density waves in galaxies has implications on star formation, since the gas orbiting around the galaxy may be compressed and cause shock waves periodically. Theoretically, the formation of a global spiral pattern is treated as an instability of the stellar disk caused by the self-gravity, as opposed to tidal interactions. The mathematical formulation of the theory has also been extended to other astrophysical disk systems, such as Saturn's rings.
Galactic spiral arms.
Originally, astronomers had the idea that the arms of a spiral galaxy were material. However, if this were the case, then the arms would become more and more tightly wound, since the matter nearer to the center of the galaxy rotates faster than the matter at the edge of the galaxy. The arms would become indistinguishable from the rest of the galaxy after only a few orbits. This is called the winding problem.
Lin & Shu proposed in 1964 that the arms were not material in nature, but instead made up of areas of greater density, similar to a traffic jam on a highway. The cars move through the traffic jam: the density of cars increases in the middle of it. The traffic jam itself, however, moves more slowly. In the galaxy, stars, gas, dust, and other components move through the density waves, are compressed, and then move out of them.
More specifically, the density wave theory argues that the "gravitational attraction between stars at different radii" prevents the so-called winding problem, and actually maintains the spiral pattern.
The rotation speed of the arms is defined to be formula_0, the global pattern speed. (Thus, within a certain non-inertial reference frame, which is rotating at formula_0, the spiral arms appear to be at rest). The stars "within" the arms are not necessarily stationary, though at a certain distance from the center, formula_1, the corotation radius, the stars and the density waves move together. Inside that radius, stars move more quickly (formula_2) than the spiral arms, and outside, stars move more slowly (formula_3). For an "m"-armed spiral, a star at radius "R" from the center will move through the structure with a frequency formula_4. So, the gravitational attraction between stars can only maintain the spiral structure if the frequency at which a star passes through the arms is less than the epicyclic frequency, formula_5, of the star. This means that a long-lived spiral structure will only exist between the inner and outer Lindblad resonance (ILR, OLR, respectively), which are defined as the radii such that: formula_6 and formula_7, respectively. Past the OLR and within the ILR, the extra density in the spiral arms pulls more often than the epicyclic rate of the stars, and the stars are thus unable to react and move in such a way as to "reinforce the spiral density enhancement".
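For a concrete sense of where these resonances fall, the following sketch evaluates the corotation, ILR and OLR radii in the idealized case of a flat rotation curve, for which the epicyclic frequency is √2 times the angular speed; the circular speed and pattern speed used here are illustrative values only.

import numpy as np

# Corotation and Lindblad resonance radii for an m-armed pattern in a disk
# with a flat rotation curve v(R) = v0, so Omega(R) = v0/R and
# kappa(R) = sqrt(2)*Omega(R).  Units: kpc and km/s (Omega in km/s/kpc).
v0 = 220.0          # circular speed (illustrative, roughly Milky-Way-like)
omega_p = 25.0      # assumed pattern speed
m = 2               # two-armed spiral

r_corotation = v0 / omega_p
r_ilr = (v0 / omega_p) * (1 - np.sqrt(2) / m)   # Omega = Omega_p + kappa/m
r_olr = (v0 / omega_p) * (1 + np.sqrt(2) / m)   # Omega = Omega_p - kappa/m
print(r_ilr, r_corotation, r_olr)   # long-lived spiral structure lies between ILR and OLR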
Further implications.
The density wave theory also explains a number of other observations that have been made about spiral galaxies. For example, "the ordering of H I clouds and dust bands on the inner edges of spiral arms, the existence of young, massive stars and H II regions throughout the arms, and an abundance of old, red stars in the remainder of the disk".
When clouds of gas and dust enter into a density wave and are compressed, the rate of star formation increases as some clouds meet the Jeans criterion, and collapse to form new stars. Since star formation does not happen immediately, the stars are slightly behind the density waves. The hot OB stars that are created ionize the gas of the interstellar medium, and form H II regions. These stars have relatively short lifetimes, however, and expire before fully leaving the density wave. The smaller, redder stars do leave the wave, and become distributed throughout the galactic disk.
Density waves have also been described as pressurizing gas clouds and thereby catalyzing star formation.
Application to Saturn's rings.
Beginning in the late 1970s, Peter Goldreich, Frank Shu, and others applied density wave theory to the rings of Saturn. Saturn's rings (particularly the A Ring) contain a great many spiral density waves and spiral bending waves excited by Lindblad resonances and vertical resonances (respectively) with Saturn's moons. The physics are largely the same as with galaxies, though spiral waves in Saturn's rings are much more tightly wound (extending a few hundred kilometers at most) due to the very large central mass (Saturn itself) compared to the mass of the disk. The "Cassini" mission revealed very small density waves excited by the ring-moons Pan and Atlas and by high-order resonances with the larger moons, as well as waves whose form changes with time due to the varying orbits of Janus and Epimetheus.
See also.
References.
| [
{
"math_id": 0,
"text": "\\Omega_{gp}"
},
{
"math_id": 1,
"text": "R_{c}"
},
{
"math_id": 2,
"text": "\\Omega > \\Omega_{gp}"
},
{
"math_id": 3,
"text": "\\Omega < \\Omega_{gp}"
},
{
"math_id": 4,
"text": "m(\\Omega_{gp} - \\Omega (R))"
},
{
"math_id": 5,
"text": "\\kappa (R)"
},
{
"math_id": 6,
"text": "\\Omega (R)=\\Omega_{gp} + \\kappa /m"
},
{
"math_id": 7,
"text": "\\Omega (R)=\\Omega_{gp} - \\kappa /m"
}
] | https://en.wikipedia.org/wiki?curid=10319171 |
103194 | Leidenfrost effect | Physical phenomenon
The Leidenfrost effect is a physical phenomenon in which a liquid, close to a solid surface of another body that is significantly hotter than the liquid's boiling point, produces an insulating vapor layer that keeps the liquid from boiling rapidly. Because of this repulsive force, a droplet hovers over the surface, rather than making physical contact with it. The effect is named after the German doctor Johann Gottlob Leidenfrost, who described it in "A Tract About Some Qualities of Common Water".
This is most commonly seen when cooking, when drops of water are sprinkled onto a hot pan. If the pan's temperature is at or above the Leidenfrost point, which is approximately 193 °C (379 °F) for water, the water skitters across the pan and takes longer to evaporate than it would take if the water droplets had been sprinkled onto a cooler pan.
Details.
The effect can be seen as drops of water are sprinkled onto a pan at various times as it heats up. Initially, as the temperature of the pan is just below 100 °C (212 °F), the water flattens out and slowly evaporates, or if the temperature of the pan is well below 100 °C (212 °F), the water stays liquid. As the temperature of the pan rises above 100 °C (212 °F), the water droplets hiss when touching the pan, and these droplets evaporate quickly. When the temperature exceeds the Leidenfrost point, the Leidenfrost effect appears. On contact with the pan, the water droplets bunch up into small balls of water and skitter around, lasting much longer than when the temperature of the pan was lower. This effect works until a much higher temperature causes any further drops of water to evaporate too quickly to cause this effect.
The effect happens because, at temperatures at or above the Leidenfrost point, the bottom part of the water droplet vaporizes immediately on contact with the hot pan. The resulting gas suspends the rest of the water droplet just above it, preventing any further direct contact between the liquid water and the hot pan. As steam has much poorer thermal conductivity than the metal pan, further heat transfer between the pan and the droplet is slowed down dramatically. This also results in the drop being able to skid around the pan on the layer of gas just under it.
The temperature at which the Leidenfrost effect appears is difficult to predict. Even if the volume of the drop of liquid stays the same, the Leidenfrost point may be quite different, with a complicated dependence on the properties of the surface, as well as any impurities in the liquid. Some research has been conducted into a theoretical model of the system, but it is quite complicated.
The effect was also described by the Victorian steam boiler designer, William Fairbairn, in reference to its effect on massively reducing heat transfer from a hot iron surface to water, such as within a boiler. In a pair of lectures on boiler design, he cited the work of Pierre Hippolyte Boutigny (1798–1884) and Professor Bowman of King's College, London, in studying this. A drop of water that was vaporized almost immediately at persisted for 152 seconds at . Lower temperatures in a boiler firebox might evaporate water more quickly as a result; compare Mpemba effect. An alternative approach was to increase the temperature beyond the Leidenfrost point. Fairbairn considered this, too, and may have been contemplating the flash steam boiler, but considered the technical aspects insurmountable for the time.
The Leidenfrost point may also be taken to be the temperature for which the hovering droplet lasts longest.
It has been demonstrated that it is possible to stabilize the Leidenfrost vapor layer of water by exploiting superhydrophobic surfaces. In this case, once the vapor layer is established, cooling never collapses the layer, and no nucleate boiling occurs; the layer instead slowly relaxes until the surface is cooled.
Droplets of different liquids with different boiling temperatures will also exhibit a Leidenfrost effect with respect to each other and repel each other.
The Leidenfrost effect has been used for the development of high sensitivity ambient mass spectrometry. Under the influence of the Leidenfrost condition, the levitating droplet does not release molecules, and the molecules are enriched inside the droplet. At the last moment of droplet evaporation, all the enriched molecules release in a short time period and thereby increase the sensitivity.
A heat engine based on the Leidenfrost effect has been prototyped; it has the advantage of extremely low friction.
The effect also applies when the surface is at room temperature but the liquid is cryogenic, allowing liquid nitrogen droplets to harmlessly roll off exposed skin. Conversely, the "inverse Leidenfrost effect" lets drops of relatively warm liquid levitate on a bath of liquid nitrogen.
Leidenfrost point.
The Leidenfrost point signifies the onset of stable film boiling. It represents the point on the boiling curve where the heat flux is at the minimum and the surface is completely covered by a vapor blanket. Heat transfer from the surface to the liquid occurs by conduction and radiation through the vapour. In 1756, Leidenfrost observed that water droplets supported by the vapor film slowly evaporate as they move about on the hot surface. As the surface temperature is increased, radiation through the vapor film becomes more significant and the heat flux increases with increasing excess temperature.
The minimum heat flux for a large horizontal plate can be derived from Zuber's equation,
formula_0
where the properties are evaluated at saturation temperature. Zuber's constant, formula_1, is approximately 0.09 for most fluids at moderate pressures.
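As a rough numerical illustration, the expression can be evaluated for saturated water at atmospheric pressure; the property values below are approximate saturation-table figures.

# Evaluate Zuber's minimum-heat-flux expression for saturated water at 1 atm.
g = 9.81            # m/s^2
C = 0.09            # Zuber's constant
h_fg = 2.257e6      # J/kg, latent heat of vaporization (approximate)
rho_L = 957.9       # kg/m^3, saturated liquid density (approximate)
rho_v = 0.596       # kg/m^3, saturated vapor density (approximate)
sigma = 0.0589      # N/m, surface tension (approximate)

q_min = C * h_fg * rho_v * (sigma * g * (rho_L - rho_v) / (rho_L + rho_v) ** 2) ** 0.25
print(q_min)        # roughly 1.9e4 W/m^2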
Heat transfer correlations.
The heat transfer coefficient may be approximated using Bromley's equation,
formula_2
where formula_3 is the outside diameter of the tube. The correlation constant "C" is 0.62 for horizontal cylinders and vertical plates, and 0.67 for spheres. Vapor properties are evaluated at film temperature.
For stable film boiling on a horizontal surface, Berenson has modified Bromley's equation to yield,
formula_4
For vertical tubes, Hsu and Westwater have correlated the following equation,
formula_5
where m is the mass flow rate in formula_6 at the upper end of the tube.
At excess temperatures above that at the minimum heat flux, the contribution of radiation becomes appreciable, and it becomes dominant at high excess temperatures. The total heat transfer coefficient is thus a combination of the two. Bromley has suggested the following equations for film boiling from the outer surface of horizontal tubes:
formula_7
If formula_8,
formula_9
The effective radiation coefficient, formula_10 can be expressed as,
formula_11
where formula_12 is the emissivity of the solid and formula_13 is the Stefan–Boltzmann constant.
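For example, the radiation coefficient is a one-line evaluation once a surface temperature and emissivity are assumed; the values chosen below are arbitrary illustrations.

# Effective radiation coefficient for film boiling:
# h_rad = eps * sigma * (Ts^4 - Tsat^4) / (Ts - Tsat)
sigma_sb = 5.670e-8            # W/(m^2 K^4), Stefan-Boltzmann constant
eps = 0.8                      # assumed emissivity of the hot solid
T_s, T_sat = 773.15, 373.15    # surface at 500 C, saturated water at 100 C (illustrative)

h_rad = eps * sigma_sb * (T_s ** 4 - T_sat ** 4) / (T_s - T_sat)
print(h_rad)                   # W/(m^2 K), of order 40 for these values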
Pressure field in a Leidenfrost droplet.
The equation for the pressure field in the vapor region between the droplet and the solid surface can be solved for using the standard momentum and continuity equations. For the sake of simplicity in solving, a linear temperature profile and a parabolic velocity profile are assumed within the vapor phase. The heat transfer within the vapor phase is assumed to be through conduction. With these approximations, the Navier–Stokes equations can be solved to get the pressure field.
Leidenfrost temperature and surface tension effects.
The Leidenfrost temperature is a property of a given solid–liquid pair. The temperature of the solid surface beyond which the liquid undergoes the Leidenfrost phenomenon is termed the Leidenfrost temperature. Calculation of the Leidenfrost temperature involves the calculation of the minimum film boiling temperature of a fluid. Berenson obtained a relation for the minimum film boiling temperature from minimum heat flux arguments. While the equation for the minimum film boiling temperature, which can be found in the reference above, is quite complex, the features of it can be understood from a physical perspective. One critical parameter to consider is the surface tension. The proportional relationship between the minimum film boiling temperature and surface tension is to be expected, since fluids with higher surface tension need higher quantities of heat flux for the onset of nucleate boiling. Since film boiling occurs after nucleate boiling, the minimum temperature for film boiling should have a proportional dependence on the surface tension.
Henry developed a model for Leidenfrost phenomenon which includes transient wetting and microlayer evaporation. Since the Leidenfrost phenomenon is a special case of film boiling, the Leidenfrost temperature is related to the minimum film boiling temperature via a relation which factors in the properties of the solid being used. While the Leidenfrost temperature is not directly related to the surface tension of the fluid, it is indirectly dependent on it through the film boiling temperature. For fluids with similar thermophysical properties, the one with higher surface tension usually has a higher Leidenfrost temperature.
For example, for a saturated water–copper interface, the Leidenfrost temperature is . The Leidenfrost temperatures for glycerol and common alcohols are significantly smaller because of their lower surface tension values (density and viscosity differences are also contributing factors.)
Reactive Leidenfrost effect.
Non-volatile materials were discovered in 2015 to also exhibit a 'reactive Leidenfrost effect', whereby solid particles were observed to float above hot surfaces and skitter around erratically. Detailed characterization of the reactive Leidenfrost effect was completed for small particles of cellulose (~0.5 mm) on high temperature polished surfaces by high speed photography. Cellulose was shown to decompose to short-chain oligomers which melt and wet smooth surfaces with increasing heat transfer associated with increasing surface temperature. Above , cellulose was observed to exhibit transition boiling with violent bubbling and associated reduction in heat transfer. Liftoff of the cellulose droplet (depicted at the right) was observed to occur above about , associated with a dramatic reduction in heat transfer.
High speed photography of the reactive Leidenfrost effect of cellulose on porous surfaces (macroporous alumina) was also shown to suppress the reactive Leidenfrost effect and enhance overall heat transfer rates to the particle from the surface. The new phenomenon of a 'reactive Leidenfrost (RL) effect' was characterized by a dimensionless quantity, (φRL= τconv/τrxn), which relates the time constant of solid particle heat transfer to the time constant of particle reaction, with the reactive Leidenfrost effect occurring for 10−1< φRL< 10+1. The reactive Leidenfrost effect with cellulose will occur in numerous high temperature applications with carbohydrate polymers, including biomass conversion to biofuels, preparation and cooking of food, and tobacco use.
The Leidenfrost effect has also been used as a means to promote chemical change of various organic liquids through their conversion by thermal decomposition into various products. Examples include decomposition of ethanol, diethyl carbonate, and glycerol.
In popular culture.
In Jules Verne's 1876 book "Michael Strogoff", the protagonist is saved from being blinded with a hot blade by evaporating tears.
In the 2009 season 7 finale of "MythBusters", "Mini Myth Mayhem", the team demonstrated that a person can wet their hand and briefly dip it into molten lead without injury, using the Leidenfrost effect as the scientific basis. | [
{
"math_id": 0,
"text": "{{\\frac{q}{A}}_{min}}=C{{h}_{fg}}{{\\rho }_{v}}{{\\left[ \\frac{\\sigma g\\left( {{\\rho }_{L}}-{{\\rho }_{v}} \\right)}{{{\\left( {{\\rho }_{L}}+{{\\rho }_{v}} \\right)}^{2}}} \\right]}^{{}^{1}\\!\\!\\diagup\\!\\!{}_{4}\\;}}"
},
{
"math_id": 1,
"text": "C"
},
{
"math_id": 2,
"text": "h=C{{\\left[ \\frac{k_{v}^{3}{{\\rho }_{v}}g\\left( {{\\rho }_{L}}-{{\\rho }_{v}} \\right)\\left( {{h}_{fg}}+0.4{{c}_{pv}}\\left( {{T}_{s}}-{{T}_{sat}} \\right) \\right)}{{{D}_{o}}{{\\mu }_{v}}\\left( {{T}_{s}}-{{T}_{sat}} \\right)} \\right]}^{{}^{1}\\!\\!\\diagup\\!\\!{}_{4}\\;}}"
},
{
"math_id": 3,
"text": "{{D}_{o}}"
},
{
"math_id": 4,
"text": "h=0.425{{\\left[ \\frac{k_{vf}^{3}{{\\rho }_{vf}}g\\left( {{\\rho }_{L}}-{{\\rho }_{v}} \\right)\\left( {{h}_{fg}}+0.4{{c}_{pv}}\\left( {{T}_{s}}-{{T}_{sat}} \\right) \\right)}{{{\\mu }_{vf}}\\left( {{T}_{s}}-{{T}_{sat}} \\right)\\sqrt{\\sigma /g\\left( {{\\rho }_{L}}-{{\\rho }_{v}} \\right)}} \\right]}^{{}^{1}\\!\\!\\diagup\\!\\!{}_{4}\\;}}"
},
{
"math_id": 5,
"text": "h{{\\left[ \\frac{\\mu _{v}^{2}}{g{{\\rho }_{v}}\\left( {{\\rho }_{L}}-{{\\rho }_{v}} \\right)k_{v}^{3}} \\right]}^{{}^{1}\\!\\!\\diagup\\!\\!{}_{3}\\;}}=0.0020{{\\left[ \\frac{4m}{\\pi {{D}_{v}}{{\\mu }_{v}}} \\right]}^{0.6}}"
},
{
"math_id": 6,
"text": "l{{b}_{m}}/hr"
},
{
"math_id": 7,
"text": "{{h}^{{}^{4}\\!\\!\\diagup\\!\\!{}_{3}\\;}}={{h}_{conv}}^{{}^{4}\\!\\!\\diagup\\!\\!{}_{3}\\;}+{{h}_{rad}}{{h}^{{}^{1}\\!\\!\\diagup\\!\\!{}_{3}\\;}}"
},
{
"math_id": 8,
"text": "{{h}_{rad}}<{{h}_{conv}}"
},
{
"math_id": 9,
"text": "h={{h}_{conv}}+\\frac{3}{4}{{h}_{rad}}"
},
{
"math_id": 10,
"text": "{{h}_{rad}}"
},
{
"math_id": 11,
"text": "{{h}_{rad}}=\\frac{\\varepsilon \\sigma \\left( T_{s}^{4}-T_{sat}^{4} \\right)}{\\left( {{T}_{s}}-{{T}_{sat}} \\right)}"
},
{
"math_id": 12,
"text": "\\varepsilon "
},
{
"math_id": 13,
"text": "\\sigma "
}
] | https://en.wikipedia.org/wiki?curid=103194 |
10320021 | Triethylaluminium | Chemical compound
Triethylaluminium is one of the simplest examples of an organoaluminium compound. Despite its name the compound has the formula Al2(C2H5)6 (abbreviated as Al2Et6 or TEA). This colorless liquid is pyrophoric. It is an industrially important compound, closely related to trimethylaluminium.
Structure and bonding.
The structure and bonding in Al2R6 and diborane are analogous (R = alkyl). Referring to Al2Me6, the Al-C(terminal) and Al-C(bridging) distances are 1.97 and 2.14 Å, respectively. The Al center is tetrahedral. The carbon atoms of the bridging ethyl groups are each surrounded by five neighbors: carbon, two hydrogen atoms and two aluminium atoms. The ethyl groups interchange readily intramolecularly. At higher temperatures, the dimer cracks into monomeric AlEt3.
Synthesis and reactions.
Triethylaluminium can be formed via several routes. The discovery of an efficient route was a significant technological achievement. The multistep process uses aluminium, hydrogen gas, and ethylene, summarized as follows:
2 Al + 3 H2 + 6 C2H4 → Al2Et6
Because of this efficient synthesis, triethylaluminium is one of the most available organoaluminium compounds.
Triethylaluminium can also be generated from ethylaluminium sesquichloride (Al2Cl3Et3), which arises by treating aluminium powder with chloroethane. Reduction of ethylaluminium sesquichloride with an alkali metal such as sodium gives triethylaluminium:
6 Al2Cl3Et3 + 18 Na → 3 Al2Et6 + 6 Al + 18 NaCl
Reactivity.
The Al–C bonds of triethylaluminium are polarized to such an extent that the carbon is easily protonated, releasing ethane:
Al2Et6 + 6 HX → 2 AlX3 + 6 EtH
For this reaction, even weak acids can be employed such as terminal acetylenes and alcohols.
The linkage between the pair of aluminium centres is relatively weak and can be cleaved by Lewis bases (L) to give adducts with the formula AlEt3L:
Al2Et6 + 2 L → 2 LAlEt3
Applications.
Precursors to fatty alcohols.
Triethylaluminium is used industrially as an intermediate in the production of fatty alcohols, which are converted to detergents. The first step involves the oligomerization of ethylene by the "Aufbau" reaction, which gives a mixture of trialkylaluminium compounds (simplified here as octyl groups):
Al2(C2H5)6 + 18 C2H4 → Al2(C8H17)6
Subsequently, these trialkyl compounds are oxidized to aluminium alkoxides, which are then hydrolysed:
Al2(C8H17)6 + 3 O2 → Al2(OC8H17)6
Al2(OC8H17)6 + 6 H2O → 6 C8H17OH + 2 Al(OH)3
Co-catalysts in olefin polymerization.
A large amount of TEAL and related aluminium alkyls are used in Ziegler-Natta catalysis. They serve to activate the transition metal catalyst both as a reducing agent and an alkylating agent. TEAL also functions to scavenge water and oxygen.
Reagent in organic and organometallic chemistry.
Triethylaluminium has niche uses as a precursor to other organoaluminium compounds, such as diethylaluminium cyanide:
formula_0
Pyrophoric agent.
Triethylaluminium ignites on contact with air and will ignite and/or decompose on contact with water, and with any other oxidizer—it is one of the few substances sufficiently pyrophoric to ignite on contact with cryogenic liquid oxygen. The enthalpy of combustion, ΔcH°, is –5105.70 ± 2.90 kJ/mol (–22.36 kJ/g). Its easy ignition makes it particularly desirable as a rocket engine ignitor. The SpaceX Falcon 9 rocket uses a triethylaluminium-triethylborane mixture as a first-stage ignitor.
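The quoted per-mole and per-gram figures are mutually consistent with the molar mass of Al2(C2H5)6, as a short check shows; the atomic masses below are rounded.

# Cross-check the quoted enthalpy of combustion, -5105.70 kJ/mol for Al2(C2H5)6,
# against the per-gram figure, using approximate atomic masses.
M = 2 * 26.982 + 12 * 12.011 + 30 * 1.008   # g/mol for Al2C12H30
print(M)                   # about 228.3 g/mol
print(-5105.70 / M)        # about -22.4 kJ/g, matching the quoted -22.36 kJ/g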
Triethylaluminium thickened with polyisobutylene is used as an incendiary weapon, as a pyrophoric alternative to napalm; e.g., in the M74 clip holding four rockets for the M202A1 launchers. In this application it is known as TPA, for "thickened pyrotechnic agent" or "thickened pyrophoric agent". The usual amount of the thickener is 6%. The amount of thickener can be decreased to 1% if other diluents are added. For example, "n"-hexane, can be used with increased safety by rendering the compound non-pyrophoric until the diluent evaporates, at which point a combined fireball results from both the triethylaluminium and the hexane vapors. The M202 was withdrawn from service in the mid-1980s owing to safety, transport, and storage issues. Some saw limited use in the Afghanistan War against caves and fortified compounds.
References.
| [
{
"math_id": 0,
"text": "\\ce{{1/2Al2Et6} + HCN ->}\\ \\tfrac 1 n \\ce{[Et2AlCN]}_n + \\ce{C2H6}"
}
] | https://en.wikipedia.org/wiki?curid=10320021 |
1032155 | All one polynomial | Polynomial in which all coefficients are one
In mathematics, an all one polynomial (AOP) is a polynomial in which all coefficients are one. Over the finite field of order two, conditions for the AOP to be irreducible are known, which allow this polynomial to be used to define efficient algorithms and circuits for multiplication in finite fields of characteristic two. The AOP is a 1-equally spaced polynomial.
Definition.
An AOP of degree "m" has all terms from "x""m" to "x"0 with coefficients of 1, and can be written as
formula_0
or
formula_1
or
formula_2
Thus the roots of the all one polynomial of degree "m" are all ("m"+1)th roots of unity other than unity itself.
Properties.
Over GF(2) the AOP has many interesting properties, including:
Despite the fact that the Hamming weight is large, because of the ease of representation and other improvements there are efficient implementations in areas such as coding theory and cryptography.
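Such irreducibility questions are easy to explore by brute force for small degrees; in the sketch below, GF(2) polynomials are encoded as Python integers (bit i holding the coefficient of x^i), a representation chosen here purely for convenience.

# Polynomials over GF(2) encoded as ints: bit i is the coefficient of x^i.

def gf2_mod(a, b):
    """Remainder of a divided by b over GF(2)."""
    db = b.bit_length() - 1
    while a.bit_length() - 1 >= db and a:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def is_irreducible_gf2(p):
    """Brute-force trial division by every polynomial of degree at most deg(p)//2."""
    deg = p.bit_length() - 1
    if deg <= 0:
        return False
    for q in range(2, 1 << (deg // 2 + 1)):
        if gf2_mod(p, q) == 0:
            return False
    return True

def aop(m):
    """All one polynomial of degree m: x^m + ... + x + 1."""
    return (1 << (m + 1)) - 1

for m in range(1, 13):
    print(m, is_irreducible_gf2(aop(m)))   # which small-degree AOPs are irreducible over GF(2)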
Over formula_3, the AOP is irreducible whenever "m" + 1 is a prime "p", and therefore in these cases, the "p"th cyclotomic polynomial.
References.
| [
{
"math_id": 0,
"text": "AOP_m(x) = \\sum_{i=0}^{m} x^i"
},
{
"math_id": 1,
"text": "AOP_m(x) = x^m + x^{m-1} + \\cdots + x + 1"
},
{
"math_id": 2,
"text": "AOP_m(x) = {x^{m+1} - 1\\over{x-1}}."
},
{
"math_id": 3,
"text": "\\mathbb{Q}"
}
] | https://en.wikipedia.org/wiki?curid=1032155 |
1032170 | Equally spaced polynomial | An equally spaced polynomial (ESP) is a polynomial used in finite fields, specifically GF(2) (binary).
An "s"-ESP of degree "sm" can be written as:
formula_0 for formula_1
or
formula_2
Properties.
Over GF(2) the ESP - which then can be referred to as all one polynomial (AOP) - has many interesting properties, including:
A 1-ESP is known as an all one polynomial (AOP) and has additional properties including the above.
References.
| [
{
"math_id": 0,
"text": "ESP(x) = \\sum_{i=0}^{m} x^{si}"
},
{
"math_id": 1,
"text": "i = 0, 1, \\ldots, m"
},
{
"math_id": 2,
"text": "ESP(x) = x^{sm} + x^{s(m-1)} + \\cdots + x^s + 1."
}
] | https://en.wikipedia.org/wiki?curid=1032170 |
10322112 | Epsilon-equilibrium | In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately
satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his
behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a
player may have a small incentive to do something different. This may still be considered an adequate
solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash
equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more
than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.
Definition.
There is more than one alternative definition.
The standard definition.
Given a game and a real non-negative parameter formula_0, a strategy profile is said to be an
formula_0-equilibrium if it is not possible for any player to gain more than formula_0 in expected payoff by unilaterally deviating from his strategy.
Every Nash Equilibrium is equivalent to an formula_0-equilibrium where formula_1.
Formally, let formula_2
be an formula_3-player game with action sets formula_4 for each player formula_5 and utility function formula_6.
Let formula_7 denote the payoff to player formula_5 when strategy profile formula_8 is played.
Let formula_9 be the space of probability distributions over formula_4.
A vector of strategies formula_10 is an formula_0-Nash Equilibrium for formula_11 if
formula_12 for all formula_13
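For a two-player game given by payoff matrices, the smallest ε for which a particular mixed-strategy profile is an ε-equilibrium can be computed by comparing each player's expected payoff with their best pure-strategy deviation (which suffices, since the payoff of any mixed deviation is a convex combination of pure-strategy payoffs). The game and the profile used below are arbitrary examples.

import numpy as np

def epsilon_needed(A, B, x, y):
    """Smallest eps such that (x, y) is an eps-Nash equilibrium of the bimatrix game (A, B)."""
    u_row = x @ A @ y          # row player's expected payoff
    u_col = x @ B @ y          # column player's expected payoff
    best_row = (A @ y).max()   # best pure-strategy deviation for the row player
    best_col = (x @ B).max()   # best pure-strategy deviation for the column player
    return max(best_row - u_row, best_col - u_col, 0.0)

# Example: a prisoner's dilemma with both players cooperating is not a Nash
# equilibrium, but it is an eps-equilibrium for eps = 1 with these payoffs.
A = np.array([[3.0, 0.0], [4.0, 1.0]])   # row strategies: Cooperate, Defect
B = A.T
x = np.array([1.0, 0.0])                 # both players cooperate
y = np.array([1.0, 0.0])
print(epsilon_needed(A, B, x, y))        # -> 1.0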
Well-supported approximate equilibrium.
The following definition
imposes the stronger requirement that a player may only assign positive probability to a pure strategy formula_14 if formula_14 has an expected payoff at most formula_0 less than the best-response payoff.
Let formula_15 be the probability that strategy profile formula_8 is played. For player formula_16 let formula_17 be strategy profiles of players other than formula_16; for formula_18 and a pure strategy formula_19 of formula_16 let formula_20 be the strategy profile where formula_16 plays formula_19 and other players play formula_8.
Let formula_21 be the payoff to formula_16 when strategy profile formula_8 is used.
The requirement can be expressed by the formula
formula_22
Results.
The existence of a polynomial-time approximation scheme (PTAS) for ε-Nash equilibria is
equivalent to the question of whether there exists one for ε-well-supported
approximate Nash equilibria, but the existence of a PTAS remains an open problem.
For constant values of ε, polynomial-time algorithms for approximate equilibria
are known for lower values of ε than are known for well-supported
approximate equilibria.
For games with payoffs in the range [0,1] and ε=0.3393, ε-Nash equilibria can
be computed in polynomial time.
For games with payoffs in the range [0,1] and ε=2/3, ε-well-supported equilibria can
be computed in polynomial time.
Example.
The notion of ε-equilibria is important in the theory of
stochastic games of potentially infinite duration. There are
simple examples of stochastic games with no Nash equilibrium
but with an ε-equilibrium for any ε strictly bigger than 0.
Perhaps the simplest such example is the following variant of Matching Pennies, suggested by Everett. Player 1 hides a penny and
Player 2 must guess if it is heads up or tails up. If Player 2 guesses correctly, he
wins the penny from Player 1 and the game ends. If Player 2 incorrectly guesses that the penny
is heads up,
the game ends with payoff zero to both players. If he incorrectly guesses that it is tails up, the game repeats. If the play continues forever, the payoff to both players is zero.
Given a parameter "ε" > 0, any strategy profile where Player 2 guesses heads up with
probability ε and tails up with probability 1 − "ε" (at every stage of the game, and independently
from previous stages) is an "ε"-equilibrium for the game. The expected payoff of Player 2 in
such a strategy profile is at least 1 − "ε". However, it is easy to see that there is no
strategy for Player 2 that can guarantee an expected payoff of exactly 1. Therefore, the game
has no Nash equilibrium.
Another simple example is the finitely repeated prisoner's dilemma for T periods, where the payoff is averaged over the T periods. The only Nash equilibrium of this game is to choose Defect in each period. Now consider the two strategies tit-for-tat and grim trigger. Although neither tit-for-tat nor grim trigger are Nash equilibria for the game, both of them are formula_23-equilibria for some positive formula_23. The acceptable values of formula_23 depend on the payoffs of the constituent game and on the number T of periods.
In economics, the concept of a pure strategy epsilon-equilibrium is used when the mixed-strategy approach is seen as unrealistic. In a pure-strategy epsilon-equilibrium, each player chooses a pure-strategy that is within epsilon of its best pure-strategy. For example, in the Bertrand–Edgeworth model, where no pure-strategy equilibrium exists, a pure-strategy epsilon equilibrium may exist.
References.
| [
{
"math_id": 0,
"text": "\\varepsilon"
},
{
"math_id": 1,
"text": "\\varepsilon = 0"
},
{
"math_id": 2,
"text": "G = (N, A=A_1 \\times \\dotsb \\times A_N, u\\colon A \\to R^N)"
},
{
"math_id": 3,
"text": "N"
},
{
"math_id": 4,
"text": "A_i"
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "u"
},
{
"math_id": 7,
"text": "u_i (s)"
},
{
"math_id": 8,
"text": "s"
},
{
"math_id": 9,
"text": "\\Delta_i"
},
{
"math_id": 10,
"text": "\\sigma \\in \\Delta = \\Delta_1 \\times \\dotsb \\times \\Delta_N"
},
{
"math_id": 11,
"text": "G"
},
{
"math_id": 12,
"text": "u_i(\\sigma)\\geq u_i(\\sigma_i',\\sigma_{-i})-\\varepsilon"
},
{
"math_id": 13,
"text": "\\sigma_i' \\in \\Delta_i, i \\in N."
},
{
"math_id": 14,
"text": "a"
},
{
"math_id": 15,
"text": "x_s"
},
{
"math_id": 16,
"text": "p"
},
{
"math_id": 17,
"text": "S_{-p}"
},
{
"math_id": 18,
"text": "s\\in S_{-p}"
},
{
"math_id": 19,
"text": "j"
},
{
"math_id": 20,
"text": "js"
},
{
"math_id": 21,
"text": "u_p(s)"
},
{
"math_id": 22,
"text": "\\sum_{s\\in S_{-p}}u_p(js)x_s > \\varepsilon+\\sum_{s\\in S_{-p}}u_p(j's)x_s \\Longrightarrow x^p_{j'} = 0."
},
{
"math_id": 23,
"text": "\\epsilon"
}
] | https://en.wikipedia.org/wiki?curid=10322112 |
10322441 | Differential equations of addition | Equations in differential cryptanalysis
In cryptography, differential equations of addition (DEA) are one of the most basic equations related to differential cryptanalysis that mix additions over two different groups (e.g. addition modulo 232 and addition over GF(2)) and where input and output differences are expressed as XORs.
Examples.
Differential equations of addition (DEA) are of the following form:
formula_0
where formula_1 and formula_2 are formula_3-bit unknown variables and formula_4, formula_5 and formula_6 are known variables. The symbols formula_7 and formula_8 denote "addition modulo" formula_9 and "bitwise exclusive-or" respectively. The above equation is denoted by formula_10.
Let a set
formula_11
for integer formula_12 denote a system of formula_13 DEA where formula_13 is a polynomial in formula_3. It has been proved that the satisfiability of an arbitrary set of DEA is in the complexity class P when a brute force search requires an exponential time.
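A single DEA over a small word size can be examined directly by brute force; in the sketch below, the word size and the differences a, b, c are arbitrary example values.

# Brute-force solutions of a single differential equation of addition (DEA)
# (x + y) XOR ((x XOR a) + (y XOR b)) = c over n-bit words, for a tiny n.
n = 4
mask = (1 << n) - 1

def dea_holds(x, y, a, b, c):
    return ((x + y) & mask) ^ (((x ^ a) + (y ^ b)) & mask) == c

a, b, c = 0b0011, 0b0101, 0b0110   # example differences (arbitrary choice)
solutions = [(x, y) for x in range(1 << n) for y in range(1 << n)
             if dea_holds(x, y, a, b, c)]
print(len(solutions), solutions[:3])   # the count may be zero if the differential is impossible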
In 2013, some properties of a special form of DEA were reported by Chengqing Li et al., where formula_14 and formula_2 is assumed known. Essentially, the special DEA can be represented as formula_15. Based on the found properties, an algorithm for deriving formula_1 was proposed and analyzed.
Applications.
The solution to an arbitrary set of DEA (either in a batch or in an adaptive query model) is due to Souradyuti Paul and Bart Preneel. The solution techniques have been used to attack the stream cipher Helix.
{
"math_id": 0,
"text": "(x+y)\\oplus((x\\oplus a)+(y\\oplus b))=c"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "b"
},
{
"math_id": 6,
"text": "c"
},
{
"math_id": 7,
"text": "+"
},
{
"math_id": 8,
"text": "\\oplus"
},
{
"math_id": 9,
"text": "2^n"
},
{
"math_id": 10,
"text": "(a, b, c)"
},
{
"math_id": 11,
"text": "S=\\{(a_i, b_i, c_i)|i < k\\}"
},
{
"math_id": 12,
"text": "i"
},
{
"math_id": 13,
"text": "k(n)"
},
{
"math_id": 14,
"text": "a=0"
},
{
"math_id": 15,
"text": "(x \\dotplus \\alpha) \\oplus (x\\dotplus \\beta)=c"
}
] | https://en.wikipedia.org/wiki?curid=10322441 |
10324484 | Hyperhomology | Generalization of (co)homology using chain complexes
In homological algebra, the hyperhomology or hypercohomology (formula_0) is a generalization of (co)homology functors which takes as input not objects in an abelian category formula_1 but instead chain complexes of objects, so objects in formula_2. It is a sort of cross between the derived functor cohomology of an object and the homology of a chain complex since hypercohomology corresponds to the derived global sections functor formula_3.
Hyperhomology is no longer used much: since about 1970 it has been largely replaced by the roughly equivalent concept of a derived functor between derived categories.
Motivation.
One of the motivations for hypercohomology comes from the fact that there isn't an obvious generalization of cohomological long exact sequences associated to short exact sequences formula_4, i.e. there is an associated long exact sequence formula_5. It turns out hypercohomology gives techniques for constructing a similar cohomological long exact sequence from an arbitrary long exact sequence formula_6, since its inputs are given by chain complexes instead of just objects from an abelian category. We can turn this chain complex into a distinguished triangle (using the language of triangulated categories on a derived category) formula_7, which we denote by formula_8. Then, taking derived global sections formula_3 gives a long exact sequence, which is a long exact sequence of hypercohomology groups.
Definition.
We give the definition for hypercohomology as this is more common. As usual, hypercohomology and hyperhomology are essentially the same: one converts from one to the other by dualizing, i.e. by changing the direction of all arrows, replacing injective objects with projective ones, and so on.
Suppose that "A" is an abelian category with enough injectives and "F" a left exact functor to another abelian category "B".
If "C" is a complex of objects of "A" bounded on the left, the hypercohomology H"i"("C") of "C" (for an integer "i") is calculated as follows: take a quasi-isomorphism Φ : "C" → "I", where "I" is a complex of injectives of "A" bounded on the left (such a quasi-isomorphism always exists); the hypercohomology H"i"("C") of "C" is then the cohomology H"i"("F"("I")) of the complex "F"("I").
The hypercohomology of "C" is independent of the choice of the quasi-isomorphism, up to unique isomorphisms.
The hypercohomology can also be defined using derived categories: the hypercohomology of "C" is just the cohomology of "RF"("C") considered as an element of the derived category of "B".
For complexes that vanish for negative indices, the hypercohomology can be defined as the derived functors of H0 = "FH"0 = "H"0"F".
The hypercohomology spectral sequences.
There are two hypercohomology spectral sequences; one with "E"2 term
formula_9
and the other with "E"1 term
formula_10
and "E"2 term
formula_11
both converging to the hypercohomology
formula_12,
where "R""j""F" is a right derived functor of "F".
Applications.
One application of hypercohomology spectral sequences is in the study of gerbes. Recall that rank n vector bundles on a space formula_13 can be classified by the Čech cohomology group formula_14. The main idea behind gerbes is to extend this idea cohomologically, so instead of taking formula_15 for some functor formula_16, we instead consider the cohomology group formula_17, so it classifies objects which are glued by objects in the original classifying group. A closely related subject which studies gerbes and hypercohomology is Deligne cohomology.
References.
| [
{
"math_id": 0,
"text": "\\mathbb{H}_*(-), \\mathbb{H}^*(-)"
},
{
"math_id": 1,
"text": "\\mathcal{A}"
},
{
"math_id": 2,
"text": "\\text{Ch}(\\mathcal{A})"
},
{
"math_id": 3,
"text": "\\mathbf{R}^*\\Gamma(-)"
},
{
"math_id": 4,
"text": "0 \\to M' \\to M \\to M'' \\to 0"
},
{
"math_id": 5,
"text": "0 \\to H^0(M') \\to H^0(M) \\to H^0(M'')\\to H^1(M') \\to \\cdots "
},
{
"math_id": 6,
"text": "0 \\to M_1 \\to M_2\\to \\cdots \\to M_k \\to 0"
},
{
"math_id": 7,
"text": "M_1 \\to [M_2 \\to \\cdots \\to M_{k-1}] \\to M_k[-k+3] \\xrightarrow{+1}"
},
{
"math_id": 8,
"text": "\\mathcal{M}'_\\bullet \\to \\mathcal{M}_\\bullet \\to \\mathcal{M}''_\\bullet \\xrightarrow{+1}"
},
{
"math_id": 9,
"text": "R^iF(H^j(C))"
},
{
"math_id": 10,
"text": "R^jF(C^i)"
},
{
"math_id": 11,
"text": "H^i(R^jF(C))"
},
{
"math_id": 12,
"text": "H^{i+j}(RF(C))"
},
{
"math_id": 13,
"text": "X"
},
{
"math_id": 14,
"text": "H^1(X,\\underline{GL}_n)"
},
{
"math_id": 15,
"text": "H^1(X,\\textbf{R}^0F)"
},
{
"math_id": 16,
"text": "F"
},
{
"math_id": 17,
"text": "H^1(X,\\textbf{R}^1F)"
}
] | https://en.wikipedia.org/wiki?curid=10324484 |
10324687 | Geometric median | Point minimizing sum of distances to given points
In geometry, the geometric median of a discrete set of sample points in a Euclidean space is the point minimizing the sum of distances to the sample points. This generalizes the median, which has the property of minimizing the sum of distances for one-dimensional data, and provides a central tendency in higher dimensions. It is also known as the spatial median, Euclidean minisum point, Torricelli point, or 1-median.
The geometric median is an important estimator of location in statistics, because it minimizes the sum of the "L"2 distances of the samples. It is to be compared to the mean, which minimizes the sum of the squared "L"2 distances, and to the coordinate-wise median which minimizes the sum of the "L"1 distances. It is also a standard problem in facility location, where it models the problem of locating a facility to minimize the cost of transportation.
The more general "k"-median problem asks for the location of "k" cluster centers minimizing the sum of "L"2 distances from each sample point to its nearest center.
The special case of the problem for three points in the plane (that is, m = 3 and n = 2 in the definition below) is sometimes also known as Fermat's problem; it arises in the construction of minimal Steiner trees, and was originally posed as a problem by Pierre de Fermat and solved by Evangelista Torricelli. Its solution is now known as the "Fermat point" of the triangle formed by the three sample points. The geometric median may in turn be generalized to the problem of minimizing the sum of "weighted" distances, known as the "Weber problem" after Alfred Weber's discussion of the problem in his 1909 book on facility location. Some sources instead call Weber's problem the "Fermat–Weber problem", but others use this name for the unweighted geometric median problem.
provides a survey of the geometric median problem. See for generalizations of the problem to non-discrete point sets.
Definition.
Formally, for a given set of "m" points formula_0 with each formula_1, the geometric median is defined as the minimizer of the sum of the "L"2 distances:
formula_2
Here, arg min means the value of the argument formula_3 which minimizes the sum. In this case, it is the point formula_3 in "n"-dimensional Euclidean space from where the sum of all Euclidean distances to the formula_4's is minimum.
Computation.
Despite the geometric median's being an easy-to-understand concept, computing it poses a challenge. The centroid or center of mass, defined similarly to the geometric median as minimizing the sum of the "squares" of the distances to each point, can be found by a simple formula — its coordinates are the averages of the coordinates of the points — but it has been shown that no explicit formula, nor an exact algorithm involving only arithmetic operations and "k"th roots, can exist in general for the geometric median. Therefore, only numerical or symbolic approximations to the solution of this problem are possible under this model of computation.
However, it is straightforward to calculate an approximation to the geometric median using an iterative procedure in which each step produces a more accurate approximation. Procedures of this type can be derived from the fact that the sum of distances to the sample points is a convex function, since the distance to each sample point is convex and the sum of convex functions remains convex. Therefore, procedures that decrease the sum of distances at each step cannot get trapped in a local optimum.
One common approach of this type, called Weiszfeld's algorithm after the work of Endre Weiszfeld, is a form of iteratively re-weighted least squares. This algorithm defines a set of weights that are inversely proportional to the distances from the current estimate to the sample points, and creates a new estimate that is the weighted average of the sample according to these weights. That is,
formula_8
This method converges for almost all initial positions, but may fail to converge when one of its estimates falls on one of the given points. It can be modified to handle these cases so that it converges for all initial points.
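A minimal version of this iteration is easy to write down; the starting point, tolerance, iteration cap and the crude handling of an iterate that lands on a sample point below are simplifications for illustration, not the modified algorithm just mentioned.

import numpy as np

def weiszfeld(points, iters=200, tol=1e-9):
    """Approximate the geometric median of points (an (m, n) array) by
    Weiszfeld's iteratively re-weighted averaging."""
    y = points.mean(axis=0)                 # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        if np.any(d < tol):                 # iterate landed on a sample point
            return y
        w = 1.0 / d
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(weiszfeld(pts))   # -> approximately [0.5, 0.5] by symmetry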
describe more sophisticated geometric optimization procedures for finding approximately optimal solutions to this problem.
show how to compute the geometric median to arbitrary precision in nearly linear time.
Note also that the problem can be formulated as the second-order cone program
formula_9
which can be solved in polynomial time using common optimization solvers.
Characterization of the geometric median.
If "y" is distinct from all the given points, "x""i", then "y" is the geometric median if and only if it satisfies:
formula_10
This is equivalent to:
formula_11
which is closely related to Weiszfeld's algorithm.
In general, "y" is the geometric median if and only if there are vectors "u""i" such that:
formula_12
where for "x""i" ≠ "y",
formula_13
and for "x""i" = "y",
formula_14
An equivalent formulation of this condition is
formula_15
It can be seen as a generalization of the median property, in the sense that any partition of the points, in particular as induced by any hyperplane through "y", has the same and opposite sum of positive "directions" from "y" on each side. In the one dimensional case, the hyperplane is the point "y" itself, and the sum of directions simplifies to the (directed) counting measure.
Generalizations.
The geometric median can be generalized from Euclidean spaces to general Riemannian manifolds (and even metric spaces) using the same idea which is used to define the Fréchet mean on a Riemannian manifold. Let formula_16 be a Riemannian manifold with corresponding distance function formula_17, let formula_18 be formula_19 weights summing to 1, and let formula_20
be formula_19 observations from formula_16. Then we define the weighted geometric median formula_21 (or weighted Fréchet median) of the data points as
formula_22.
If all the weights are equal, we say simply that formula_21 is the geometric median.
Notes.
References.
| [
{
"math_id": 0,
"text": "\\mathbb{X}^m = x_1, x_2, \\dots, x_m\\,"
},
{
"math_id": 1,
"text": "x_i \\in \\mathbb{R}^n"
},
{
"math_id": 2,
"text": "\\underset{y \\in \\mathbb{R}^n}{\\operatorname{arg\\,min}} \\sum_{i=1}^m \\left \\| x_i-y \\right \\|_2"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "x_i"
},
{
"math_id": 5,
"text": "p_{(n+1)/2}"
},
{
"math_id": 6,
"text": "p_{n/2}"
},
{
"math_id": 7,
"text": "p_{(n/2)+1}"
},
{
"math_id": 8,
"text": "\\left. y_{k+1}=\\left( \\sum_{i=1}^m \\frac{x_i}{\\| x_i - y_k \\|} \\right) \\right/ \\left( \\sum_{i=1}^m \\frac{1}{\\| x_i - y_k \\|} \\right)."
},
{
"math_id": 9,
"text": " \\underset{y \\in \\mathbb{R}^n, \\ s \\in \\mathbb{R}^m}{\\min} \\ \\sum_{i=1}^m s_i \\text{ subject to } s_i \\geq \\left \\| x_i-y \\right \\|_2 \\text{ for } i=1, \\ldots, m,"
},
{
"math_id": 10,
"text": "0 = \\sum_{i=1}^m \\frac {x_i - y} {\\left \\| x_i - y \\right \\|}."
},
{
"math_id": 11,
"text": "\\left. y = \\left( \\sum_{i=1}^m \\frac{x_i}{\\| x_i - y \\|} \\right) \\right/ \\left( \\sum_{i=1}^m \\frac{1}{\\| x_i - y \\|} \\right),"
},
{
"math_id": 12,
"text": "0 = \\sum_{i=1}^m u_i "
},
{
"math_id": 13,
"text": "u_i = \\frac {x_i - y} {\\left \\| x_i - y \\right \\|}"
},
{
"math_id": 14,
"text": "\\| u_i \\| \\leq 1 ."
},
{
"math_id": 15,
"text": "\\sum _{1\\le i\\le m, x_i\\ne y}\n\\frac {x_i - y} {\\left \\| x_i - y \\right \\|} \\le \\left|\\{\n\\,i\\mid 1\\le i\\le m, x_i= y\\,\\}\\right|."
},
{
"math_id": 16,
"text": "M"
},
{
"math_id": 17,
"text": "d(\\cdot, \\cdot)"
},
{
"math_id": 18,
"text": "w_1, \\ldots, w_n"
},
{
"math_id": 19,
"text": "n"
},
{
"math_id": 20,
"text": "x_1, \\ldots, x_n"
},
{
"math_id": 21,
"text": "m"
},
{
"math_id": 22,
"text": " m = \\underset{x \\in M}{\\operatorname{arg\\,min}} \\sum_{i=1}^n w_i d(x,x_i) "
}
] | https://en.wikipedia.org/wiki?curid=10324687 |
10325676 | Hybrid functional | Approximations in density functional theory
Hybrid functionals are a class of approximations to the exchange–correlation energy functional in density functional theory (DFT) that incorporate a portion of exact exchange from Hartree–Fock theory with the rest of the exchange–correlation energy from other sources ("ab initio" or empirical). The exact exchange energy functional is expressed in terms of the Kohn–Sham orbitals rather than the density, so is termed an "implicit" density functional. One of the most commonly used versions is B3LYP, which stands for "Becke, 3-parameter, Lee–Yang–Parr".
Origin.
The hybrid approach to constructing density functional approximations was introduced by Axel Becke in 1993. Hybridization with Hartree–Fock (HF) exchange (also called exact exchange) provides a simple scheme for improving the calculation of many molecular properties, such as atomization energies, bond lengths and vibration frequencies, which tend to be poorly described with simple "ab initio" functionals.
Method.
A hybrid exchange–correlation functional is usually constructed as a linear combination of the Hartree–Fock exact exchange functional
formula_0
and any number of exchange and correlation explicit density functionals. The parameters determining the weight of each individual functional are typically specified by fitting the functional's predictions to experimental or accurately calculated thermochemical data, although in the case of the "adiabatic connection functionals" the weights can be set "a priori".
B3LYP.
For example, the popular B3LYP (Becke, 3-parameter, Lee–Yang–Parr) exchange-correlation functional is
formula_1
where formula_2, formula_3, and formula_4. formula_5 is a generalized gradient approximation: the Becke 88 exchange functional and the correlation functional of Lee, Yang and Parr for B3LYP, and formula_6 is the VWN local spin density approximation to the correlation functional.
The three parameters defining B3LYP have been taken without modification from Becke's original fitting of the analogous B3PW91 functional to a set of atomization energies, ionization potentials, proton affinities, and total atomic energies.
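Because the functional is a fixed linear combination, the mixing itself is trivial to express; the component energies passed in the example call are made-up numbers, not results of any actual calculation.

def b3lyp_xc(ex_lsda, ex_hf, dex_b88, ec_lsda, ec_lyp, a=0.20, b=0.72, c=0.81):
    """B3LYP mixing of component exchange/correlation energies (same units in, same units out)."""
    return ((1 - a) * ex_lsda + a * ex_hf + b * dex_b88
            + (1 - c) * ec_lsda + c * ec_lyp)

# Illustrative (made-up) component energies for a small molecule:
print(b3lyp_xc(ex_lsda=-9.10, ex_hf=-8.95, dex_b88=-0.45,
               ec_lsda=-0.60, ec_lyp=-0.55))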
PBE0.
The PBE0 functional
mixes the Perdew–Burke–Ernzerhof (PBE) exchange energy and Hartree–Fock exchange energy in a set 3:1 ratio, along with the full PBE correlation energy:
formula_7
where formula_8 is the Hartree–Fock exact exchange functional, formula_9 is the PBE exchange functional, and formula_10 is the PBE correlation functional.
HSE.
The HSE (Heyd–Scuseria–Ernzerhof) exchange–correlation functional uses an error-function-screened Coulomb potential to calculate the exchange portion of the energy in order to improve computational efficiency, especially for metallic systems:
formula_11
where formula_12 is the mixing parameter, and formula_13 is an adjustable parameter controlling the short-rangeness of the interaction. Standard values of formula_14 and formula_15 (usually referred to as HSE06) have been shown to give good results for most systems. The HSE exchange–correlation functional degenerates to the PBE0 hybrid functional for formula_16. formula_17 is the short-range Hartree–Fock exact exchange functional, formula_18 and formula_19 are the short- and long-range components of the PBE exchange functional, and formula_20 is the PBE correlation functional.
Meta-hybrid GGA.
The M06 suite of functionals is a set of four meta-hybrid GGA and meta-GGA DFT functionals. These functionals are constructed by empirically fitting their parameters, while being constrained to a uniform electron gas.
The family includes the functionals M06-L, M06, M06-2X and M06-HF, with a different amount of exact exchange for each one. M06-L is fully local without HF exchange (thus it cannot be considered hybrid), M06 has 27% HF exchange, M06-2X 54% and M06-HF 100%.
The advantages and usefulness of each functional are
The suite gives good results for systems containing dispersion forces, one of the biggest deficiencies of standard DFT methods.
Medvedev, Perdew, et al. say: "Despite their excellent performance for energies and geometries, we must suspect that modern highly parameterized functionals need further guidance from exact constraints, or exact density, or both" | [
{
"math_id": 0,
"text": "E_\\text{x}^\\text{HF} = -\\frac{1}{2} \\sum_{i,j} \\iint \\psi_i^*(\\mathbf r_1) \\psi_j^*(\\mathbf r_2) \\frac{1}{r_{12}} \\psi_j(\\mathbf r_1) \\psi_i(\\mathbf r_2) \\,d\\mathbf r_1 \\,d\\mathbf r_2"
},
{
"math_id": 1,
"text": "E_\\text{xc}^\\text{B3LYP} =(1-a) E_\\text{x}^\\text{LSDA} + aE_\\text{x}^\\text{HF} + b\\vartriangle E_\\text{x}^\\text{B} + (1-c)E_\\text{c}^\\text{LSDA} + c E_\\text{c}^\\text{LYP} ,"
},
{
"math_id": 2,
"text": "a = 0.20"
},
{
"math_id": 3,
"text": "b = 0.72"
},
{
"math_id": 4,
"text": "c\n = 0.81"
},
{
"math_id": 5,
"text": "E_\\text{x}^\\text{B}"
},
{
"math_id": 6,
"text": "E_\\text{c}^\\text{LSDA}"
},
{
"math_id": 7,
"text": "E_\\text{xc}^\\text{PBE0} = \\frac{1}{4} E_\\text{x}^\\text{HF} + \\frac{3}{4} E_\\text{x}^\\text{PBE} + E_\\text{c}^\\text{PBE},"
},
{
"math_id": 8,
"text": "E_\\text{x}^\\text{HF}"
},
{
"math_id": 9,
"text": "E_\\text{x}^\\text{PBE}"
},
{
"math_id": 10,
"text": "E_\\text{c}^\\text{PBE}"
},
{
"math_id": 11,
"text": "E_\\text{xc}^{\\omega\\text{PBEh}} = a E_\\text{x}^\\text{HF,SR}(\\omega) + (1 - a) E_\\text{x}^\\text{PBE,SR}(\\omega) + E_\\text{x}^\\text{PBE,LR}(\\omega) + E_\\text{c}^\\text{PBE},"
},
{
"math_id": 12,
"text": "a"
},
{
"math_id": 13,
"text": "\\omega"
},
{
"math_id": 14,
"text": "a = 1/4"
},
{
"math_id": 15,
"text": "\\omega = 0.2"
},
{
"math_id": 16,
"text": "\\omega = 0"
},
{
"math_id": 17,
"text": "E_\\text{x}^\\text{HF,SR}(\\omega)"
},
{
"math_id": 18,
"text": "E_\\text{x}^\\text{PBE,SR}(\\omega)"
},
{
"math_id": 19,
"text": "E_\\text{x}^\\text{PBE,LR}(\\omega)"
},
{
"math_id": 20,
"text": "E_\\text{c}^\\text{PBE}(\\omega)"
}
] | https://en.wikipedia.org/wiki?curid=10325676 |
1032607 | Focus (geometry) | Geometric point from which certain types of curves are constructed
In geometry, focuses or foci (; sg.: focus) are special points with reference to which any of a variety of curves is constructed. For example, one or two foci can be used in defining conic sections, the four types of which are the circle, ellipse, parabola, and hyperbola. In addition, two foci are used to define the Cassini oval and the Cartesian oval, and more than two foci are used in defining an "n"-ellipse.
Conic sections.
Defining conics in terms of two foci.
An ellipse can be defined as the locus of points for which the sum of the distances to two given foci is constant.
A circle is the special case of an ellipse in which the two foci coincide with each other. Thus, a circle can be more simply defined as the locus of points each of which is a fixed distance from a single given focus. A circle can also be defined as the circle of Apollonius, in terms of two different foci, as the locus of points having a fixed ratio of distances to the two foci.
A parabola is a limiting case of an ellipse in which one of the foci is a point at infinity.
A hyperbola can be defined as the locus of points for which the absolute value of the difference between the distances to two given foci is constant.
Defining conics in terms of a focus and a directrix.
It is also possible to describe all conic sections in terms of a single focus and a single directrix, which is a given line not containing the focus. A conic is defined as the locus of points for each of which the distance to the focus divided by the distance to the directrix is a fixed positive constant, called the eccentricity e. If 0 < "e" < 1 the conic is an ellipse, if "e" = 1 the conic is a parabola, and if "e" > 1 the conic is a hyperbola. If the distance to the focus is fixed and the directrix is a line at infinity, so the eccentricity is zero, then the conic is a circle.
Defining conics in terms of a focus and a directrix circle.
It is also possible to describe all the conic sections as loci of points that are equidistant from a single focus and a single, circular directrix. For the ellipse, both the focus and the center of the directrix circle have finite coordinates and the radius of the directrix circle is greater than the distance between the center of this circle and the focus; thus, the focus is inside the directrix circle. The ellipse thus generated has its second focus at the center of the directrix circle, and the ellipse lies entirely within the circle.
For the parabola, the center of the directrix moves to the point at infinity (see Projective geometry). The directrix "circle" becomes a curve with zero curvature, indistinguishable from a straight line. The two arms of the parabola become increasingly parallel as they extend, and "at infinity" become parallel; using the principles of projective geometry, the two parallels intersect at the point at infinity and the parabola becomes a closed curve (elliptical projection).
To generate a hyperbola, the radius of the directrix circle is chosen to be less than the distance between the center of this circle and the focus; thus, the focus is outside the directrix circle. The arms of the hyperbola approach asymptotic lines and the "right-hand" arm of one branch of a hyperbola meets the "left-hand" arm of the other branch of a hyperbola at the point at infinity; this is based on the principle that, in projective geometry, a single line meets itself at a point at infinity. The two branches of a hyperbola are thus the two (twisted) halves of a curve closed over infinity.
In projective geometry, all conics are equivalent in the sense that every theorem that can be stated for one can be stated for the others.
Astronomical significance.
In the gravitational two-body problem, the orbits of the two bodies about each other are described by two overlapping conic sections with one of the foci of one being coincident with one of the foci of the other at the center of mass (barycenter) of the two bodies.
Thus, for instance, the minor planet Pluto's largest moon Charon has an elliptical orbit which has one focus at the Pluto-Charon system's barycenter, which is a point that is in space between the two bodies; and Pluto also moves in an ellipse with one of its foci at that same barycenter between the bodies. Pluto's ellipse is entirely inside Charon's ellipse.
By comparison, the Earth's Moon moves in an ellipse with one of its foci at the barycenter of the Moon and the Earth, this barycenter being within the Earth itself, while the Earth (more precisely, its center) moves in an ellipse with one focus at that same barycenter within the Earth. The barycenter is about three-quarters of the distance from Earth's center to its surface.
Moreover, the Pluto-Charon system moves in an ellipse around its barycenter with the Sun, as does the Earth-Moon system (and every other planet-moon system or moonless planet in the solar system). In both cases the barycenter is well within the body of the Sun.
Two binary stars also move in ellipses sharing a focus at their barycenter.
Cartesian and Cassini ovals.
A Cartesian oval is the set of points for each of which the weighted sum of the distances to two given foci is constant. If the weights are equal, the special case of an ellipse results.
A Cassini oval is the set of points for each of which the product of the distances to two given foci is constant.
Generalizations.
An "n"-ellipse is the set of points all having the same sum of distances to n foci (the "n" = 2 case being the conventional ellipse).
The concept of a focus can be generalized to arbitrary algebraic curves. Let C be a curve of class m and let I and J denote the circular points at infinity. Draw the m tangents to C through each of I and J. There are two sets of m lines which will have "m"2 points of intersection, with exceptions in some cases due to singularities, etc. These points of intersection are defined to be the foci of C. In other words, a point P is a focus if both PI and PJ are tangent to C. When C is a real curve, only the intersections of conjugate pairs are real, so there are m real foci and "m"2 − "m" imaginary foci. When C is a conic, the real foci defined this way are exactly the foci which can be used in the geometric construction of C.
Confocal curves.
Let "P"1, "P"2, …, "Pm" be given as foci of a curve C of class m. Let P be the product of the tangential equations of these points and Q the product of the tangential equations of the circular points at infinity. Then all the lines which are common tangents to both "P" = 0 and "Q" = 0 are tangent to C. So, by the AF+BG theorem, the tangential equation of C has the form "HP" + "KQ" = 0. Since C has class m, H must be a constant and K but have degree less than or equal to "m" − 2. The case "H" = 0 can be eliminated as degenerate, so the tangential equation of C can be written as "P" + "fQ" = 0 where f is an arbitrary polynomial of degree 2"m".
For example, let "m" = 2, "P"1 = (1, 0), and "P"2 = (−1, 0). The tangential equations are
formula_0
so "P" = "X"2 − 1 = 0. The tangential equations for the circular points at infinity are
formula_1
so "Q" = "X"2 +"Y"2. Therefore, the tangential equation for a conic with the given foci is
formula_2
or
formula_3
where c is an arbitrary constant. In point coordinates this becomes
formula_4
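As a consistency check, for "c" > 0 this is an ellipse whose squared semi-axes, 1 + "c" and "c", differ by exactly 1, so its foci lie at (±1, 0), recovering the points "P"1 and "P"2 chosen above; for −1 < "c" < 0 the same equation describes a hyperbola with the same two foci.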
| [
{
"math_id": 0,
"text": "\\begin{align}\nX + 1 &= 0 \\\\\nX - 1 &= 0\n\\end{align}"
},
{
"math_id": 1,
"text": "\\begin{align}\nX + iY &= 0 \\\\ \nX - iY &= 0\n\\end{align}"
},
{
"math_id": 2,
"text": "X^2 - 1 + c(X^2 +Y^2) = 0"
},
{
"math_id": 3,
"text": "(1+c)X^2 + cY^2 = 1,"
},
{
"math_id": 4,
"text": "\\frac{x^2}{1+c} + \\frac{y^2}{c} = 1."
}
] | https://en.wikipedia.org/wiki?curid=1032607 |
10328101 | Hexacode | In coding theory, the hexacode is a length 6 linear code of dimension 3 over the Galois field formula_0 of 4 elements defined by
formula_1
It is a 3-dimensional subspace of the vector space of dimension 6 over
formula_2.
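Because the code is small, the definition can be checked by direct enumeration. The following Python sketch (an illustration added here, not taken from the cited literature) encodes the elements of GF(4) as the integers 0–3, with 2 and 3 standing for ω and ω², and tallies the Hamming weights of all 4³ = 64 codewords:
<syntaxhighlight lang="python">
from itertools import product
from collections import Counter

# GF(4) = {0, 1, w, w^2} encoded as 0, 1, 2, 3 (2 = w, 3 = w^2).
def gf4_add(x, y):
    return x ^ y  # addition in characteristic 2 is bitwise XOR

def gf4_mul(x, y):
    if x == 0 or y == 0:
        return 0
    log = {1: 0, 2: 1, 3: 2}   # discrete logarithms base w
    exp = [1, 2, 3]
    return exp[(log[x] + log[y]) % 3]

def f(a, b, c, x):
    # f(x) = a*x^2 + b*x + c evaluated over GF(4)
    return gf4_add(gf4_add(gf4_mul(a, gf4_mul(x, x)), gf4_mul(b, x)), c)

# Codewords (a, b, c, f(1), f(w), f(w^2)) for all (a, b, c) in GF(4)^3
codewords = [(a, b, c, f(a, b, c, 1), f(a, b, c, 2), f(a, b, c, 3))
             for a, b, c in product(range(4), repeat=3)]

weights = Counter(sum(1 for s in cw if s != 0) for cw in codewords)
print(weights)  # Counter({4: 45, 6: 18, 0: 1}), matching the weight distribution below
</syntaxhighlight>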
Then formula_3 contains 45 codewords of weight 4, 18 codewords of weight 6 and
the zero word. The full automorphism group of the hexacode is
formula_4. The hexacode can be used to describe the Miracle Octad Generator
of R. T. Curtis. | [
{
"math_id": 0,
"text": "GF(4)=\\{0,1,\\omega,\\omega^2\\}"
},
{
"math_id": 1,
"text": "H=\\{(a,b,c,f(1),f(\\omega),f(\\omega^2)) : f(x):=ax^2+bx+c; a,b,c\\in GF(4)\\}."
},
{
"math_id": 2,
"text": "GF(4)"
},
{
"math_id": 3,
"text": "H"
},
{
"math_id": 4,
"text": "3.S_6"
}
] | https://en.wikipedia.org/wiki?curid=10328101 |
1032887 | Mollweide projection | Pseudocylindrical equal-area map projection
The Mollweide projection is an equal-area, pseudocylindrical map projection generally used for maps of the world or celestial sphere. It is also known as the Babinet projection, homalographic projection, homolographic projection, and elliptical projection. The projection trades accuracy of angle and shape for accuracy of proportions in area, and as such is used where that property is needed, such as maps depicting global distributions.
The projection was first published by mathematician and astronomer Karl (or Carl) Brandan Mollweide (1774–1825) of Leipzig in 1805. It was reinvented and popularized in 1857 by Jacques Babinet, who gave it the name homalographic projection. The variation homolographic arose from frequent nineteenth-century usage in star atlases.
Properties.
The Mollweide is a pseudocylindrical projection in which the equator is represented as a straight horizontal line perpendicular to a central meridian that is one-half the equator's length. The other parallels compress near the poles, while the other meridians are equally spaced at the equator. The meridians at 90 degrees east and west form a perfect circle, and the whole earth is depicted in a proportional 2:1 ellipse. The proportion of the area of the ellipse between any given parallel and the equator is the same as the proportion of the area on the globe between that parallel and the equator, but at the expense of shape distortion, which is significant at the perimeter of the ellipse, although not as severe as in the sinusoidal projection.
Shape distortion may be diminished by using an "interrupted" version. A "sinusoidal interrupted" Mollweide projection discards the central meridian in favor of alternating half-meridians which terminate at right angles to the equator. This has the effect of dividing the globe into lobes. In contrast, a "parallel interrupted" Mollweide projection uses multiple disjoint central meridians, giving the effect of multiple ellipses joined at the equator. More rarely, the projection can be drawn obliquely to shift the areas of distortion to the oceans, allowing the continents to remain truer to form.
The Mollweide, or its properties, has inspired the creation of several other projections, including the Goode's homolosine, van der Grinten and the Boggs eumorphic.
Mathematical formulation.
The projection transforms from latitude and longitude to map coordinates "x" and "y" via the following equations:
formula_0
where "θ" is an auxiliary angle defined by
formula_1
and "λ" is the longitude, "λ"0 is the central meridian, "φ" is the latitude, and "R" is the radius of the globe to be projected. The map has area 4π"R"2, conforming to the surface area of the generating globe. The "x"-coordinate has a range of [−2"R"√2, 2"R"√2], and the "y"-coordinate has a range of [−"R"√2, "R"√2].
Equation (1) may be solved with rapid convergence (but slow near the poles) using Newton–Raphson iteration:
formula_2
If "φ" = ±, then also "θ" = ±. In that case the iteration should be bypassed; otherwise, division by zero may result.
There exists a closed-form inverse transformation:
formula_3
where "θ" can be found by the relation
formula_4
The inverse transformations allow one to find the latitude and longitude corresponding to the map coordinates "x" and "y".
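As an illustration (not a reference implementation), the following Python sketch applies these formulas directly: it solves equation (1) with the Newton–Raphson iteration above, bypasses the iteration at the poles, and evaluates the closed-form inverse; the radius, tolerance and iteration limit are arbitrary choices.
<syntaxhighlight lang="python">
import math

def mollweide_forward(lat, lon, lon0=0.0, R=1.0, tol=1e-10, max_iter=100):
    """Project latitude/longitude (radians) to Mollweide map coordinates (x, y)."""
    if abs(abs(lat) - math.pi / 2) < 1e-12:
        theta = lat                       # at the poles theta = +/- pi/2; skip the iteration
    else:
        theta = lat                       # theta_0 = phi
        for _ in range(max_iter):
            delta = ((2 * theta + math.sin(2 * theta) - math.pi * math.sin(lat))
                     / (2 + 2 * math.cos(2 * theta)))
            theta -= delta
            if abs(delta) < tol:
                break
    x = R * (2 * math.sqrt(2) / math.pi) * (lon - lon0) * math.cos(theta)
    y = R * math.sqrt(2) * math.sin(theta)
    return x, y

def mollweide_inverse(x, y, lon0=0.0, R=1.0):
    """Closed-form inverse: map coordinates back to latitude/longitude (radians)."""
    theta = math.asin(y / (R * math.sqrt(2)))
    lat = math.asin((2 * theta + math.sin(2 * theta)) / math.pi)
    lon = lon0 + math.pi * x / (2 * R * math.sqrt(2) * math.cos(theta))
    return lat, lon
</syntaxhighlight>
For example, mollweide_forward(math.radians(40), math.radians(-30)) gives the map position of 40° N, 30° W on a globe of unit radius.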
| [
{
"math_id": 0,
"text": "\\begin{align} x &= R \\frac{2 \\sqrt 2}{\\pi} \\left( \\lambda - \\lambda_{0} \\right) \\cos \\theta, \\\\[5px] y &= R \\sqrt 2 \\sin \\theta ,\\end{align}"
},
{
"math_id": 1,
"text": "2\\theta + \\sin 2\\theta = \\pi \\sin \\varphi \\qquad (1)"
},
{
"math_id": 2,
"text": "\\begin{align} \\theta_0 &= \\varphi, \\\\ \\theta_{n+1} &= \\theta_n - \\frac{2 \\theta_n + \\sin 2\\theta_n - \\pi \\sin \\varphi}{2 + 2 \\cos 2\\theta_n}.\\end{align}"
},
{
"math_id": 3,
"text": "\\begin{align} \\varphi &= \\arcsin \\frac{2 \\theta + \\sin 2\\theta}{\\pi}, \\\\[5px] \\lambda &= \\lambda_{0} + \\frac{\\pi x}{2 R \\sqrt{2} \\cos \\theta}, \\end{align}"
},
{
"math_id": 4,
"text": "\\theta = \\arcsin \\frac{y}{R \\sqrt{2}}. \\,"
}
] | https://en.wikipedia.org/wiki?curid=1032887 |
1032998 | Gas centrifuge | Device that performs isotope separation of gases
A gas centrifuge is a device that performs isotope separation of gases. A centrifuge relies on the principles of centrifugal force accelerating molecules so that particles of different masses are physically separated in a gradient along the radius of a rotating container. A prominent use of gas centrifuges is for the separation of uranium-235 (235U) from uranium-238 (238U). The gas centrifuge was developed to replace the gaseous diffusion method of 235U extraction. High degrees of separation of these isotopes relies on using many individual centrifuges arranged in series that achieve successively higher concentrations. This process yields higher concentrations of 235U while using significantly less energy compared to the gaseous diffusion process.
History.
Suggested in 1919, the centrifugal process was first successfully performed in 1934. American scientist Jesse Beams and his team at the University of Virginia developed the process by separating two chlorine isotopes through a vacuum ultracentrifuge. It was one of the initial isotopic separation means pursued during the Manhattan Project, more particularly by Harold Urey and Karl P. Cohen, but research was discontinued in 1944 as it was felt that the method would not produce results by the end of the war, and that other means of uranium enrichment (gaseous diffusion and electromagnetic separation) had a better chance of success in the short term. This method was successfully used in the Soviet nuclear program, making the Soviet Union the most effective supplier of enriched uranium. Franz Simon, Rudolf Peierls, Klaus Fuchs and Nicholas Kurti made important contributions to the centrifugal process.
Paul Dirac made important theoretical contributions to the centrifugal process during World War II; Dirac developed the fundamental theory of separation processes that underlies the design and analysis of modern uranium enrichment plants. In the long term, especially with the development of the Zippe-type centrifuge, the gas centrifuge has become a very economical mode of separation, using considerably less energy than other methods and having numerous other advantages.
Research in the physical performance of centrifuges was carried out by the Pakistani scientist Abdul Qadeer Khan in the 1970s–80s, using vacuum methods for advancing the role of centrifuges in the development of nuclear fuel for Pakistan's atomic bomb. Many of the theorists working with Khan were unsure that either the gas centrifuge process or the production of enriched uranium would be feasible in time. One scientist recalled: "No one in the world has used the [gas] centrifuge method to produce military-grade uranium... This was not going to work. He was simply wasting time." In spite of this skepticism, the program was quickly proven to be feasible. Enrichment via centrifuge has been used in experimental physics, and the method was smuggled to at least three different countries by the end of the 20th century.
Centrifugal process.
The centrifuge relies on the force resulting from centrifugal acceleration to separate molecules according to their mass and can be applied to most fluids. The dense (heavier) molecules move towards the wall, and the lighter ones remain close to the center. The centrifuge consists of a rigid-body rotor spinning at high speed about its axis. Concentric gas tubes located on the axis of the rotor are used to introduce feed gas into the rotor and extract the heavier and lighter separated streams. For 235U production, the heavier stream is the waste stream and the lighter stream is the product stream. Modern Zippe-type centrifuges are tall cylinders spinning on a vertical axis. A vertical temperature gradient can be applied to create a convective circulation rising in the center and descending at the periphery of the centrifuge. Such a countercurrent flow can also be stimulated mechanically by the scoops that take out the enriched and depleted fractions. Diffusion between these opposing flows increases the separation by the principle of countercurrent multiplication.
In practice, since there are limits to how tall a single centrifuge can be made, several such centrifuges are connected in series. Each centrifuge receives one input line and produces two output lines, corresponding to light and heavy fractions. The input of each centrifuge is the product stream of the previous centrifuge. This produces an almost pure light fraction from the product stream of the last centrifuge and an almost pure heavy fraction from the waste stream of the first centrifuge.
Gas centrifugation process.
The gas centrifugation process uses a unique design that allows gas to constantly flow in and out of the centrifuge. Unlike most centrifuges which rely on batch processing, the gas centrifuge uses continuous processing, allowing cascading in which multiple identical processes occur in succession. The gas centrifuge consists of a cylindrical rotor, a casing, an electric motor, and three lines for material to travel. The gas centrifuge is designed with a casing that completely encloses the centrifuge. The cylindrical rotor is located inside the casing, which is evacuated of all air to produce a near frictionless rotation when operating. The motor spins the rotor, creating the centrifugal force on the components as they enter the cylindrical rotor. This force acts to separate the molecules of the gas, with heavier molecules moving towards the wall of the rotor and the lighter molecules towards the central axis. There are two output lines, one for the fraction enriched in the desired isotope (in uranium separation, this is 235U), and one depleted of that isotope. The output lines take these separations to other centrifuges to continue the centrifugation process. The process begins when the rotor is balanced in three stages. Most of the technical details on gas centrifuges are difficult to obtain because they are shrouded in "nuclear secrecy".
The early gas centrifuges used in the UK had an alloy body wrapped in epoxy-impregnated glass fibre. Dynamic balancing of the assembly was accomplished by adding small traces of epoxy at the locations indicated by the balancing test unit. The motor was usually a pancake type located at the bottom of the cylinder. The early units were typically around 2 metres long, but subsequent developments gradually increased the length. The present generation is over 4 metres in length. The bearings are gas-based devices, as mechanical bearings would not survive at the normal operating speeds of these centrifuges.
A section of centrifuges would be fed with variable-frequency alternating current from an electronic (bulk) inverter, which would slowly ramp them up to the required speed, generally in excess of 50,000 rpm. One precaution was to quickly get past frequencies at which the cylinder was known to suffer resonance problems. The inverter is a high-frequency unit capable of operating at frequencies around 1 kilohertz. The whole process is normally silent; if a noise is heard coming from a centrifuge, it is a warning of failure (which normally occurs very quickly). The design of the cascade normally allows for the failure of at least one centrifuge unit without compromising the operation of the cascade. The units are normally very reliable, with early models having operated continuously for over 30 years.
Later models have steadily increased the rotation speed of the centrifuges, as it is the velocity of the centrifuge wall that has the most effect on the separation efficiency. A feature of the cascade system of centrifuges is that it is possible to increase plant throughput incrementally, by adding cascade "blocks" to the existing installation at suitable locations, rather than having to install a completely new line of centrifuges.
Concurrent and countercurrent centrifuges.
The simplest gas centrifuge is the concurrent centrifuge, where the separative effect is produced by the centrifugal action of the rotor's rotation. In these centrifuges, the heavy fraction is collected at the periphery of the rotor and the light fraction nearer the axis of rotation.
Inducing a countercurrent flow uses countercurrent multiplication to enhance the separative effect. A vertical circulating current is set up, with the gas flowing axially along the rotor walls in one direction and a return flow closer to the center of the rotor. The centrifugal separation continues as before (heavier molecules preferentially moving outwards), which means that the heavier molecules are collected by the wall flow, and the lighter fraction collects at the other end. In a centrifuge with downward wall flow, the heavier molecules collect at the bottom. The outlet scoops are then placed at the ends of the rotor cavity, with the feed mixture injected along the axis of the cavity (ideally, at the point where the isotopic composition inside the rotor matches that of the feed).
This countercurrent flow can be induced mechanically or thermally, or a combination. In mechanically induced countercurrent flow, the arrangement of the (stationary) scoops and internal rotor structures are used to generate the flow. A scoop interacts with the gas by slowing it, which tends to draw it into the centre of the rotor. The scoops at each end induce opposing currents, so one scoop is protected from the flow by a "baffle": a perforated disc within the rotor which rotates along with the gas—at this end of the rotor, the flow will be outwards, towards the rotor wall. Thus, in a centrifuge with a baffled top scoop, the wall flow is downwards, and heavier molecules are collected at the bottom. Thermally induced convection currents can be created by heating the bottom of the centrifuge and/or cooling the top end.
Separative work units.
The separative work unit (SWU) is a measure of the amount of work done by the centrifuge and has units of mass (typically "kilogram separative work unit"). The work formula_0 necessary to separate a mass formula_1 of feed of assay formula_2 into a mass formula_3 of product assay formula_4, and tails of mass formula_5 and assay formula_6 is expressed in terms of the number of separative work units needed, given by the expression
formula_7
where formula_8 is the value function, defined as
formula_9
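The expression can be evaluated directly once the overall and isotope mass balances are used to relate the feed, product and tails masses. The Python sketch below is illustrative only; the example assays (0.711% natural-uranium feed, 4.5% product, 0.3% tails) are typical textbook figures rather than values taken from the sources cited here.
<syntaxhighlight lang="python">
import math

def value_function(x):
    """V(x) = (1 - 2x) * ln((1 - x) / x)."""
    return (1 - 2 * x) * math.log((1 - x) / x)

def swu(P, x_p, x_f, x_t):
    """Separative work (same mass units as P) for product mass P of assay x_p,
    from feed of assay x_f, leaving tails of assay x_t."""
    F = P * (x_p - x_t) / (x_f - x_t)   # feed mass from the U-235 mass balance
    T = F - P                           # tails mass from the total mass balance
    return P * value_function(x_p) + T * value_function(x_t) - F * value_function(x_f)

# About 6.2 kg SWU are needed per kg of 4.5% enriched product
# from natural uranium (0.711%) with 0.3% tails.
print(round(swu(P=1.0, x_p=0.045, x_f=0.00711, x_t=0.003), 2))
</syntaxhighlight>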
Practical application of centrifugation.
Separation of uranium-235 from uranium-238.
The separation of uranium requires the material in a gaseous form; uranium hexafluoride (UF6) is used for uranium enrichment. Upon entering the centrifuge cylinder, the UF6 gas is rotated at a high speed. The rotation creates a strong centrifugal force that draws more of the heavier gas molecules (containing the 238U) toward the wall of the cylinder, while the lighter gas molecules (containing the 235U) tend to collect closer to the center. The stream that is slightly enriched in 235U is withdrawn and fed into the next higher stage, while the slightly depleted stream is recycled back into the next lower stage.
Separation of zinc isotopes.
For some uses in nuclear technology, the content of zinc-64 in zinc metal has to be lowered in order to prevent formation of radioisotopes by its neutron activation. Diethyl zinc is used as the gaseous feed medium for the centrifuge cascade. An example of a resulting material is depleted zinc oxide, used as a corrosion inhibitor.
| [
{
"math_id": 0,
"text": "W_\\mathrm{SWU}"
},
{
"math_id": 1,
"text": "F"
},
{
"math_id": 2,
"text": "x_{f}"
},
{
"math_id": 3,
"text": "P"
},
{
"math_id": 4,
"text": "x_{p}"
},
{
"math_id": 5,
"text": "T"
},
{
"math_id": 6,
"text": "x_{t}"
},
{
"math_id": 7,
"text": "W_\\mathrm{SWU} = P \\cdot V\\left(x_{p}\\right)+T \\cdot V(x_{t})-F \\cdot V(x_{f})"
},
{
"math_id": 8,
"text": "V\\left(x\\right)"
},
{
"math_id": 9,
"text": "V(x) = (1 - 2x) \\cdot \\ln\\left(\\frac{1 - x}{x}\\right)"
}
] | https://en.wikipedia.org/wiki?curid=1032998 |
1033036 | Thin film | Thin layer of material
A thin film is a layer of material ranging from fractions of a nanometer (monolayer) to several micrometers in thickness. The controlled synthesis of materials as thin films (a process referred to as deposition) is a fundamental step in many applications. A familiar example is the household mirror, which typically has a thin metal coating on the back of a sheet of glass to form a reflective interface. The process of silvering was once commonly used to produce mirrors, while more recently the metal layer is deposited using techniques such as sputtering. Advances in thin film deposition techniques during the 20th century have enabled a wide range of technological breakthroughs in areas such as magnetic recording media, electronic semiconductor devices, integrated passive devices, light-emitting diodes, optical coatings (such as antireflective coatings), hard coatings on cutting tools, and for both energy generation (e.g. thin-film solar cells) and storage (thin-film batteries). It is also being applied to pharmaceuticals, via thin-film drug delivery. A stack of thin films is called a multilayer.
In addition to their applied interest, thin films play an important role in the development and study of materials with new and unique properties. Examples include multiferroic materials, and superlattices that allow the study of quantum phenomena.
Nucleation.
Nucleation is an important step in growth that helps determine the final structure of a thin film. Many growth methods rely on nucleation control such as atomic-layer epitaxy (atomic layer deposition). Nucleation can be modeled by characterizing surface process of adsorption, desorption, and surface diffusion.
Adsorption and desorption.
Adsorption is the interaction of a vapor atom or molecule with a substrate surface. The interaction is characterized by the sticking coefficient, the fraction of incoming species that thermally equilibrates with the surface. Desorption reverses adsorption: a previously adsorbed molecule overcomes the binding energy and leaves the substrate surface.
The two types of adsorption, physisorption and chemisorption, are distinguished by the strength of the atomic interactions. Physisorption describes the Van der Waals bonding between a stretched or bent molecule and the surface, characterized by the adsorption energy formula_0. Evaporated molecules rapidly lose kinetic energy and reduce their free energy by bonding with surface atoms. Chemisorption describes the strong electron transfer (ionic or covalent bonding) of the molecule with substrate atoms, characterized by the adsorption energy formula_1. The processes of physi- and chemisorption can be visualized by plotting the potential energy as a function of distance. The equilibrium distance for physisorption is further from the surface than for chemisorption. The transition from the physisorbed to the chemisorbed state is governed by the effective energy barrier formula_2.
Crystal surfaces have specific bonding sites with larger formula_2 values that would preferentially be populated by vapor molecules to reduce the overall free energy. These stable sites are often found on step edges, vacancies and screw dislocations. After the most stable sites become filled, the adatom-adatom (vapor molecule) interaction becomes important.
Nucleation models.
Nucleation kinetics can be modeled considering only adsorption and desorption. First, consider the case where there are no mutual adatom interactions, no clustering, and no interaction with step edges.
The rate of change of adatom surface density formula_3, where formula_4 is the net flux, formula_5 is the mean surface lifetime prior to desorption and formula_6 is the sticking coefficient:
formula_7
formula_8
Adsorption can also be modeled by different isotherms such as Langmuir model and BET model. The Langmuir model derives an equilibrium constant formula_9 based on the adsorption reaction of vapor adatom with vacancy on the substrate surface. The BET model expands further and allows adatoms deposition on previously adsorbed adatoms without interaction between adjacent piles of atoms. The resulting derived surface coverage is in terms of the equilibrium vapor pressure and applied pressure.
Langmuir model where formula_10 is the vapor pressure of adsorbed adatoms:
formula_11
BET model where formula_12 is the equilibrium vapor pressure of adsorbed adatoms and formula_13 is the applied vapor pressure of adsorbed adatoms:
formula_14
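For orientation, the standard textbook forms of these two isotherms can be evaluated in a few lines; these are generic expressions that may differ in notation from the article's formulas above, so the sketch below should be read as illustrative only.
<syntaxhighlight lang="python">
def langmuir_coverage(K, P):
    """Fractional monolayer coverage, theta = K*P / (1 + K*P),
    with K the Langmuir equilibrium constant and P the applied pressure."""
    return K * P / (1 + K * P)

def bet_coverage(P, P_eq, c):
    """Adsorbed amount relative to one monolayer in the BET picture,
    v/v_m = c*x / ((1 - x) * (1 + (c - 1)*x)) with x = P / P_eq (0 <= x < 1)."""
    x = P / P_eq
    return c * x / ((1 - x) * (1 + (c - 1) * x))
</syntaxhighlight>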
As an important note, the surface crystallography may differ from that of the bulk in order to minimize the overall free electronic and bond energies arising from the broken bonds at the surface. This can result in a new equilibrium position known as the “selvedge”, where the parallel bulk lattice symmetry is preserved. This phenomenon can cause deviations from theoretical calculations of nucleation.
Surface diffusion.
Surface diffusion describes the lateral motion of adsorbed atoms moving between energy minima on the substrate surface. Diffusion most readily occurs between positions with the lowest intervening potential barriers. Surface diffusion can be measured using glancing-angle ion scattering. The average time between events can be described by:
formula_15
In addition to adatom migration, clusters of adatoms can coalesce or deplete. Cluster coalescence through processes such as Ostwald ripening and sintering occurs so as to reduce the total surface energy of the system. Ostwald ripening describes the process in which islands of adatoms of various sizes grow into larger ones at the expense of smaller ones. Sintering is the coalescence mechanism in which islands come into contact and join.
Deposition.
The act of applying a thin film to a surface is "thin-film deposition" – any technique for depositing a thin film of material onto a substrate or onto previously deposited layers. "Thin" is a relative term, but most deposition techniques control layer thickness within a few tens of nanometres. Molecular beam epitaxy, the Langmuir–Blodgett method, atomic layer deposition and molecular layer deposition allow a single layer of atoms or molecules to be deposited at a time.
It is useful in the manufacture of optics (for reflective, anti-reflective coatings or self-cleaning glass, for instance), electronics (layers of insulators, semiconductors, and conductors form integrated circuits), packaging (i.e., aluminium-coated PET film), and in contemporary art (see the work of Larry Bell). Similar processes are sometimes used where thickness is not important: for instance, the purification of copper by electroplating, and the deposition of silicon and enriched uranium by a chemical vapor deposition-like process after gas-phase processing.
Deposition techniques fall into two broad categories, depending on whether the process is primarily chemical or physical.
Chemical deposition.
Here, a fluid precursor undergoes a chemical change at a solid surface, leaving a solid layer. An everyday example is the formation of soot on a cool object when it is placed inside a flame. Since the fluid surrounds the solid object, deposition happens on every surface, with little regard to direction; thin films from chemical deposition techniques tend to be "conformal", rather than "directional".
Chemical deposition is further categorized by the phase of the precursor:
Plating relies on liquid precursors, often a solution of water with a salt of the metal to be deposited. Some plating processes are driven entirely by reagents in the solution (usually for noble metals), but by far the most commercially important process is electroplating. In semiconductor manufacturing, an advanced form of electroplating known as electrochemical deposition is now used to create the copper conductive wires in advanced chips, replacing the chemical and physical deposition processes used in previous chip generations for aluminum wires.
Chemical solution deposition or chemical bath deposition uses a liquid precursor, usually a solution of organometallic powders dissolved in an organic solvent. This is a relatively inexpensive, simple thin-film process that produces stoichiometrically accurate crystalline phases. This technique is also known as the sol-gel method because the 'sol' (or solution) gradually evolves towards the formation of a gel-like diphasic system.
The Langmuir–Blodgett method uses molecules floating on top of an aqueous subphase. The packing density of molecules is controlled, and the packed monolayer is transferred on a solid substrate by controlled withdrawal of the solid substrate from the subphase. This allows creating thin films of various molecules such as nanoparticles, polymers and lipids with controlled particle packing density and layer thickness.
Spin coating or spin casting, uses a liquid precursor, or sol-gel precursor deposited onto a smooth, flat substrate which is subsequently spun at a high velocity to centrifugally spread the solution over the substrate. The speed at which the solution is spun and the viscosity of the sol determine the ultimate thickness of the deposited film. Repeated depositions can be carried out to increase the thickness of films as desired. Thermal treatment is often carried out in order to crystallize the amorphous spin coated film. Such crystalline films can exhibit certain preferred orientations after crystallization on single crystal substrates.
Dip coating is similar to spin coating in that a liquid precursor or sol-gel precursor is deposited on a substrate, but in this case the substrate is completely submerged in the solution and then withdrawn under controlled conditions. By controlling the withdrawal speed, the evaporation conditions (principally the humidity, temperature) and the volatility/viscosity of the solvent, the film thickness, homogeneity and nanoscopic morphology are controlled. There are two evaporation regimes: the capillary zone at very low withdrawal speeds, and the draining zone at faster evaporation speeds.
Chemical vapor deposition generally uses a gas-phase precursor, often a halide or hydride of the element to be deposited. In the case of metalorganic vapour phase epitaxy, an organometallic gas is used. Commercial techniques often use very low pressures of precursor gas.
Plasma Enhanced Chemical Vapor Deposition uses an ionized vapor, or plasma, as a precursor. Unlike the soot example above, this method relies on electromagnetic means (electric current, microwave excitation), rather than a chemical-reaction, to produce a plasma.
Atomic layer deposition and its sister technique, molecular layer deposition, use gaseous precursors to deposit conformal thin films one layer at a time. The process is split into two half-reactions, run in sequence and repeated for each layer, in order to ensure total layer saturation before beginning the next layer. Therefore, one reactant is deposited first, and then the second reactant is deposited, during which a chemical reaction occurs on the substrate, forming the desired composition. As a result of this stepwise character, the process is slower than chemical vapor deposition; however, it can be run at low temperatures. When performed on polymeric substrates, atomic layer deposition can become sequential infiltration synthesis, where the reactants diffuse into the polymer and interact with functional groups on the polymer chains.
Physical deposition.
Physical deposition uses mechanical, electromechanical or thermodynamic means to produce a thin film of solid. An everyday example is the formation of frost. Since most engineering materials are held together by relatively high energies, and chemical reactions are not used to store these energies, commercial physical deposition systems tend to require a low-pressure vapor environment to function properly; most can be classified as physical vapor deposition.
The material to be deposited is placed in an energetic, entropic environment, so that particles of material escape its surface. Facing this source is a cooler surface which draws energy from these particles as they arrive, allowing them to form a solid layer. The whole system is kept in a vacuum deposition chamber, to allow the particles to travel as freely as possible. Since particles tend to follow a straight path, films deposited by physical means are commonly "directional", rather than "conformal".
Examples of physical deposition include:
A thermal evaporator that uses an electric resistance heater to melt the material and raise its vapor pressure to a useful range. This is done in a high vacuum, both to allow the vapor to reach the substrate without reacting with or scattering against other gas-phase atoms in the chamber, and reduce the incorporation of impurities from the residual gas in the vacuum chamber. Obviously, only materials with a much higher vapor pressure than the heating element can be deposited without contamination of the film. Molecular beam epitaxy is a particularly sophisticated form of thermal evaporation.
An electron beam evaporator fires a high-energy beam from an electron gun to boil a small spot of material; since the heating is not uniform, lower vapor pressure materials can be deposited. The beam is usually bent through an angle of 270° in order to ensure that the gun filament is not directly exposed to the evaporant flux. Typical deposition rates for electron beam evaporation range from 1 to 10 nanometres per second.
In molecular beam epitaxy, slow streams of an element can be directed at the substrate, so that material deposits one atomic layer at a time. Compounds such as gallium arsenide are usually deposited by repeatedly applying a layer of one element (i.e., gallium), then a layer of the other (i.e., arsenic), so that the process is chemical, as well as physical; this is known also as atomic layer deposition. If the precursors in use are organic, then the technique is called molecular layer deposition. The beam of material can be generated by either physical means (that is, by a furnace) or by a chemical reaction (chemical beam epitaxy).
Sputtering relies on a plasma (usually a noble gas, such as argon) to knock material from a "target" a few atoms at a time. The target can be kept at a relatively low temperature, since the process is not one of evaporation, making this one of the most flexible deposition techniques. It is especially useful for compounds or mixtures, where different components would otherwise tend to evaporate at different rates. Note, sputtering's step coverage is more or less conformal. It is also widely used in optical media. The manufacturing of all formats of CD, DVD, and BD are done with the help of this technique. It is a fast technique and also it provides a good thickness control. Presently, nitrogen and oxygen gases are also being used in sputtering.
Pulsed laser deposition systems work by an ablation process. Pulses of focused laser light vaporize the surface of the target material and convert it to plasma; this plasma usually reverts to a gas before it reaches the substrate.
Cathodic arc deposition (arc-physical vapor deposition), which is a kind of ion beam deposition where an electrical arc is created that blasts ions from the cathode. The arc has an extremely high power density resulting in a high level of ionization (30–100%), multiply charged ions, neutral particles, clusters and macro-particles (droplets). If a reactive gas is introduced during the evaporation process, dissociation, ionization and excitation can occur during interaction with the ion flux and a compound film will be deposited.
Electrohydrodynamic deposition (electrospray deposition) is a relatively new process of thin-film deposition. The liquid to be deposited, either in the form of nanoparticle solution or simply a solution, is fed to a small capillary nozzle (usually metallic) which is connected to a high voltage. The substrate on which the film has to be deposited is connected to ground. Through the influence of electric field, the liquid coming out of the nozzle takes a conical shape (Taylor cone) and at the apex of the cone a thin jet emanates which disintegrates into very fine and small positively charged droplets under the influence of Rayleigh charge limit. The droplets keep getting smaller and smaller and ultimately get deposited on the substrate as a uniform thin layer.
Growth modes.
Frank–van der Merwe growth ("layer-by-layer"). In this growth mode the adsorbate-surface and adsorbate-adsorbate interactions are balanced. This type of growth requires lattice matching, and hence considered an "ideal" growth mechanism.
Stranski–Krastanov growth ("joint islands" or "layer-plus-island"). In this growth mode the adsorbate-surface interactions are stronger than adsorbate-adsorbate interactions.
Volmer–Weber ("isolated islands"). In this growth mode the adsorbate-adsorbate interactions are stronger than adsorbate-surface interactions, hence "islands" are formed right away.
There are three distinct stages of stress evolution that arise during Volmer-Weber film deposition. The first stage consists of the nucleation of individual atomic islands. During this first stage, the overall observed stress is very low. The second stage commences as these individual islands coalesce and begin to impinge on each other, resulting in an increase in the overall tensile stress in the film. This increase in overall tensile stress can be attributed to the formation of grain boundaries upon island coalescence that results in interatomic forces acting over the newly formed grain boundaries. The magnitude of this generated tensile stress depends on the density of the formed grain boundaries, as well as their grain-boundary energies. During this stage, the thickness of the film is not uniform because of the random nature of the island coalescence but is measured as the average thickness. The third and final stage of the Volmer-Weber film growth begins when the morphology of the film’s surface is unchanging with film thickness. During this stage, the overall stress in the film can remain tensile, or become compressive.
On a stress-thickness vs. thickness plot, an overall compressive stress is represented by a negative slope, and an overall tensile stress is represented by a positive slope. The overall shape of the stress-thickness vs. thickness curve depends on various processing conditions (such as temperature, growth rate, and material). Koch states that there are three different modes of Volmer-Weber growth. Zone I behavior is characterized by low grain growth in subsequent film layers and is associated with low atomic mobility. Koch suggests that Zone I behavior can be observed at lower temperatures. The zone I mode typically has small columnar grains in the final film. The second mode of Volmer-Weber growth is classified as Zone T, where the grain size at the surface of the film deposition increases with film thickness, but the grain size in the deposited layers below the surface does not change. Zone T-type films are associated with higher atomic mobilities, higher deposition temperatures, and V-shaped final grains. The final mode of proposed Volmer-Weber growth is Zone II type growth, where the grain boundaries in the bulk of the film at the surface are mobile, resulting in large yet columnar grains. This growth mode is associated with the highest atomic mobility and deposition temperature. There is also a possibility of developing a mixed Zone T/Zone II type structure, where the grains are mostly wide and columnar, but do experience slight growth as their thickness approaches the surface of the film. Although Koch focuses mostly on temperature to suggest a potential zone mode, factors such as deposition rate can also influence the final film microstructure.
Epitaxy.
A subset of thin-film deposition processes and applications is focused on the so-called epitaxial growth of materials,
the deposition of crystalline thin films that grow following the crystalline structure of the substrate. The term epitaxy comes from the Greek roots epi (ἐπί), meaning "above", and taxis (τάξις), meaning "an ordered manner". It can be translated as "arranging upon".
The term homoepitaxy refers to the specific case in which a film of the same material is grown on a crystalline
substrate. This technology is used, for instance, to grow a film which is more pure than the substrate, has a lower density
of defects, and to fabricate layers having different doping levels. Heteroepitaxy refers to the case in which the film being deposited is different from the substrate.
Techniques used for epitaxial growth of thin films include molecular beam epitaxy, chemical vapor deposition,
and pulsed laser deposition.
Mechanical Behavior.
Stress.
Thin films may be biaxially loaded via stresses originated from their interface with a substrate. Epitaxial thin films may experience stresses from misfit strains between the coherent lattices of the film and substrate, and from the restructuring of the surface triple junction. Thermal stress is common in thin films grown at elevated temperatures due to differences in thermal expansion coefficients with the substrate. Differences in interfacial energy and the growth and coalescence of grains contribute to intrinsic stress in thin films. These intrinsic stresses can be a function of film thickness. These stresses may be tensile or compressive and can cause cracking, buckling, or delamination along the surface. In epitaxial films, initially deposited atomic layers may have coherent lattice planes with the substrate. However, past a critical thickness misfit dislocations will form leading to relaxation of stresses in the film.
Strain.
Films may experience a dilatational transformation strain formula_16 relative to the substrate due to a volume change in the film. Volume changes that cause dilatational strain may come from changes in temperature, defects, or phase transformations. A temperature change will induce a volume change if the film and substrate thermal expansion coefficients are different. The creation or annihilation of defects such as vacancies, dislocations, and grain boundaries will cause a volume change through densification. Phase transformations and concentration changes will cause volume changes via lattice distortions.
Thermal Strain.
A mismatch of thermal expansion coefficients between the film and substrate will cause thermal strain during a temperature change. The elastic strain of the film relative to the substrate is given by:
formula_17
where formula_18 is the elastic strain, formula_19 is the thermal expansion coefficient of the film, formula_20 is the thermal expansion coefficient of the substrate, formula_21 is the temperature, and formula_22 is the initial temperature of the film and substrate when it is in a stress-free state. For example, if a film is deposited onto a substrate with a lower thermal expansion coefficient at high temperatures, then cooled to room temperature, a positive elastic strain will be created. In this case, the film will develop tensile stresses.
Growth Strain.
A change in density due to the creation or destruction of defects, phase changes, or compositional changes after the film is grown on the substrate will generate a growth strain. An example is the Stranski–Krastanov mode, where the film layer is strained to fit the substrate because of an increase in supersaturation and interfacial energy as growth shifts from layers to islands. The elastic strain to accommodate these changes is related to the dilatational strain formula_16 by:
formula_23
A film experiencing growth strains will be under biaxial tensile strain conditions, generating tensile stresses in biaxial directions in order to match the substrate dimensions.
Epitaxial Strains.
An epitaxially grown film on a thick substrate will have an inherent elastic strain given by:
formula_24
where formula_25 and formula_26 are the lattice parameters of the substrate and film, respectively. It is assumed that the substrate is rigid due to its relative thickness. Therefore, all of the elastic strain occurs in the film to match the substrate.
Measuring stress and strain.
The stress in films deposited on flat substrates such as wafers can be calculated by measuring the curvature that the film imparts to the wafer. Optical setups, such as those using lasers, allow whole-wafer characterization before and after deposition. Lasers are reflected off the wafer in a grid pattern, and distortions in the grid are used to calculate the curvature as well as to measure the optical constants. Strain in thin films can also be measured by x-ray diffraction, or by milling a section of the film using a focused ion beam and monitoring the relaxation via scanning electron microscopy.
Wafer Curvature Measurements.
A common method for determining the stress evolution of a film is to measure the wafer curvature during its deposition. Stoney relates a film’s average stress to its curvature through the following expression:
formula_27
where formula_28 is the biaxial modulus of the substrate, formula_29 is the elastic modulus of the substrate material, formula_30 is the Poisson’s ratio of the substrate material, formula_31 is the thickness of the substrate, formula_32 is the height of the film, and formula_33 is the average stress in the film. The Stoney formula assumes that the film and substrate thicknesses are small compared to the lateral size of the wafer and that the stress is uniform across the surface. Therefore the average stress thickness of a given film can be determined by integrating the stress over a given film thickness:
formula_34
where formula_35 is the direction normal to the substrate and formula_36 represents the in-plane stress at a particular height of the film. The stress thickness (or force per unit width), represented by formula_37, is an important quantity as it is directly proportional to the curvature through formula_38. Because of this proportionality, measuring the curvature of a film at a given film thickness can directly determine the stress in the film at that thickness. The curvature of a wafer is determined by the average stress in the film. However, if stress is not uniformly distributed in a film (as it would be for epitaxially grown film layers that have not relaxed, so that the intrinsic stress is due to the lattice mismatch of the substrate and the film), it is impossible to determine the stress at a specific film height without continuous curvature measurements. If continuous curvature measurements are taken, the time derivative of the curvature data: formula_39
can show how the intrinsic stress is changing at any given point. Assuming that stress in the underlying layers of a deposited film remains constant during further deposition, we can represent the incremental stress formula_40 as:
formula_41
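A minimal sketch of this curvature-to-stress conversion, assuming the standard form of Stoney's relation with the substrate biaxial modulus as described above, is given below; the material parameters and curvature value in the example are placeholders, not measured data.
<syntaxhighlight lang="python">
def stoney_average_stress(kappa, E_s, nu_s, h_s, h_f):
    """Average film stress from wafer curvature via Stoney's relation,
    sigma = M_s * h_s**2 * kappa / (6 * h_f), with M_s = E_s / (1 - nu_s)
    the biaxial modulus of the substrate.
    kappa : curvature change caused by the film (1/m)
    E_s, nu_s : substrate Young's modulus (Pa) and Poisson's ratio
    h_s, h_f : substrate and film thicknesses (m)"""
    M_s = E_s / (1 - nu_s)
    return M_s * h_s**2 * kappa / (6 * h_f)

# Placeholder example: a 500-nm film on a 500-um silicon substrate
# (E_s ~ 170 GPa, nu_s ~ 0.28) producing a curvature change of 0.01 1/m
# corresponds to roughly 2e8 Pa (about 200 MPa) of average film stress.
print(stoney_average_stress(kappa=0.01, E_s=170e9, nu_s=0.28, h_s=500e-6, h_f=500e-9))
</syntaxhighlight>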
Nanoindentation.
Nanoindentation is a popular method of measuring the mechanical properties of films. Measurements can be used to compare coated and uncoated films to reveal the effects of surface treatment on both elastic and plastic responses of the film. Load-displacement curves may reveal information about cracking, delamination, and plasticity in both the film and substrate.
The Oliver and Pharr method can be used to evaluate hardness and elastic modulus from nanoindentation results obtained with axisymmetric indenter geometries such as a spherical indenter. This method assumes that during unloading only elastic deformations are recovered (i.e., reverse plastic deformation is negligible). The parameter formula_42 designates the load, formula_43 is the displacement relative to the undeformed coating surface, and formula_32 is the final penetration depth after unloading. These are used to approximate the power-law relation for unloading curves:
formula_44
After the contact area formula_45 is calculated, the hardness is estimated by:
formula_46
From the relationship of contact area, the unloading stiffness can be expressed by the relation:
formula_47
Where formula_48 is the effective elastic modulus and takes into account elastic displacements in the specimen and indenter. This relation can also be applied to elastic-plastic contact, which is not affected by pile-up and sink-in during indentation.
formula_49
Due to the low thickness of the films, accidental probing of the substrate is a concern. To avoid indenting beyond the film and into the substrate, penetration depths are often kept to less than 10% of the film thickness. For conical or pyramidal indenters, the indentation depth scales as formula_50, where formula_51 is the radius of the contact circle and formula_52 is the film thickness. The ratio of the penetration depth formula_43 to the film thickness can be used as a scale parameter for soft films.
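A minimal sketch of the final step of this analysis is given below; it assumes the projected contact area has already been obtained from the indenter area function and it neglects the indenter geometry correction factor, so it is an illustration rather than a full Oliver–Pharr implementation.
<syntaxhighlight lang="python">
import math

def oliver_pharr(P_max, S, A):
    """Hardness and effective modulus from a nanoindentation unloading curve.
    P_max : peak load (N)
    S     : unloading stiffness dP/dh evaluated at peak load (N/m)
    A     : projected contact area at peak load (m^2), from the area function"""
    H = P_max / A                                          # hardness
    E_eff = S * math.sqrt(math.pi) / (2 * math.sqrt(A))    # from S = (2/sqrt(pi)) * E_eff * sqrt(A)
    return H, E_eff
</syntaxhighlight>
The film's own modulus then follows from the effective modulus once the elastic displacements of the indenter are accounted for, as in the relation for formula_48 above.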
Strain engineering.
Stress and relaxation of stresses in films can influence the materials properties of the film, such as mass transport in microelectronics applications. Therefore precautions are taken to either mitigate or produce such stresses; for example a buffer layer may be deposited between the substrate and film. Strain engineering is also used to produce various phase and domain structures in thin films such as in the domain structure of the ferroelectric Lead Zirconate Titanate (PZT).
Multilayer medium.
In the physical sciences, a multilayer or stratified medium is a stack of different thin films. Typically, a multilayer medium is made for a specific purpose. Since layers are thin with respect to some relevant length scale, interface effects are much more important than in bulk materials, giving rise to novel physical properties.
The term "multilayer" is "not" an extension of "monolayer" and "bilayer", which describe a "single" layer that is one or two molecules thick. A multilayer medium rather consists of several thin films.
Applications.
Decorative coatings.
The use of thin films for decorative coatings probably represents their oldest application. This encompasses gold leaves about 100 nm thick that were already used in ancient India more than 5000 years ago. It may also be understood as any form of painting, although this kind of work is generally considered a craft rather than an engineering or scientific discipline. Today, thin-film materials of variable thickness and high refractive index, such as titanium dioxide, are often applied as decorative coatings on glass, for instance, causing a rainbow-color appearance like oil on water. In addition, opaque gold-colored surfaces may be prepared by sputtering either gold or titanium nitride.
Optical coatings.
These layers serve in both reflective and refractive systems. Large-area (reflective) mirrors became available during the 19th century and were produced by sputtering of metallic silver or aluminum on glass. Refractive lenses for optical instruments like cameras and microscopes typically exhibit aberrations, i.e. non-ideal refractive behavior. While large sets of lenses had to be lined up along the optical path previously, nowadays, the coating of optical lenses with transparent multilayers of titanium dioxide, silicon nitride or silicon oxide etc. may correct these aberrations. A well-known example of the progress in optical systems brought by thin-film technology is the lens, only a few millimetres wide, in smartphone cameras. Other examples are anti-reflection coatings on eyeglasses and solar panels.
Protective coatings.
Thin films are often deposited to protect an underlying workpiece from external influences. The protection may operate by minimizing contact with the exterior medium in order to reduce diffusion from the medium to the workpiece or vice versa. For instance, plastic lemonade bottles are frequently coated with anti-diffusion layers to avoid the out-diffusion of CO2, into which the carbonic acid introduced into the beverage under high pressure decomposes. Another example is thin TiN films in microelectronic chips, which separate electrically conducting aluminum lines from the embedding insulator in order to suppress interdiffusion and unwanted reactions between them. Often, thin films serve as protection against abrasion between mechanically moving parts. Examples of the latter application are diamond-like carbon layers used in car engines and thin films made of nanocomposites.
Electrically operating coatings.
Thin layers of elemental metals like copper, aluminum, gold or silver, and of alloys, have found numerous applications in electrical devices. Due to their high electrical conductivity, they are able to transport electrical currents or supply voltages. Thin metal layers serve in conventional electrical systems, for instance as Cu layers on printed circuit boards, as the outer ground conductor in coaxial cables, and in various other forms such as sensors. A major field of application became their use in integrated passive devices and integrated circuits, where the electrical network among active and passive devices like transistors and capacitors is built up from thin Al or Cu layers. These layers have thicknesses ranging from a few hundred nanometers up to a few μm, and they are often embedded between titanium nitride layers a few nm thick in order to block a chemical reaction with the surrounding dielectric. The figure shows a micrograph of a laterally structured TiN/Al/TiN metal stack in a microelectronic chip.
Heterostructures of gallium nitride and similar semiconductors can lead to electrons being bound to a sub-nanometric layer, effectively behaving as a two-dimensional electron gas. Quantum effects in such thin films can significantly enhance electron mobility as compared to that of a bulk crystal, which is employed in high-electron-mobility transistors.
Biosensors and plasmonic devices.
Noble metal thin films are used in plasmonic structures such as surface plasmon resonance (SPR) sensors. Surface plasmon polaritons are surface waves in the optical regime that propagate along metal–dielectric interfaces; in the Kretschmann–Raether configuration for SPR sensors, a prism is coated with a metallic film through evaporation. Due to the poor adhesive characteristics of metallic films, germanium, titanium or chromium films are used as intermediate layers to promote stronger adhesion. Metallic thin films are also used in plasmonic waveguide designs.
Thin-film photovoltaic cells.
Thin-film technologies are also being developed as a means of substantially reducing the cost of solar cells. The rationale for this is that thin-film solar cells are cheaper to manufacture owing to their reduced material costs, energy costs, handling costs and capital costs. This is especially represented by the use of printed electronics (roll-to-roll) processes. Other thin-film technologies that are still in an early stage of ongoing research, or have limited commercial availability, are often classified as emerging or third-generation photovoltaic cells and include organic, dye-sensitized, and polymer solar cells, as well as quantum dot, copper zinc tin sulfide, nanocrystal and perovskite solar cells.
Thin-film batteries.
Thin-film printing technology is being used to apply solid-state lithium polymers to a variety of substrates to create unique batteries for specialized applications. Thin-film batteries can be deposited directly onto chips or chip packages in any shape or size. Flexible batteries can be made by printing onto plastic, thin metal foil, or paper.
Thin-film bulk acoustic wave resonators (TFBARs/FBARs).
To miniaturize and more precisely control the resonance frequency of piezoelectric crystals, thin-film bulk acoustic resonators (TFBARs/FBARs) have been developed for oscillators, telecommunication filters and duplexers, and sensor applications.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "E_{p}"
},
{
"math_id": 1,
"text": "E_{c}"
},
{
"math_id": 2,
"text": "E_{a}"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "J"
},
{
"math_id": 5,
"text": "\\tau_{a}"
},
{
"math_id": 6,
"text": "\\sigma"
},
{
"math_id": 7,
"text": "{dn\\over dt}=J \\sigma-{n\\over \\tau_{a}} "
},
{
"math_id": 8,
"text": "n = J\\sigma\\tau_{a}\\left[1-\\exp\\left({-t\\over\\tau_{a}}\\right)\\right]\nn = J\\sigma\\tau_{a}\\left[\\exp\\left({-t\\over\\tau_{a}}\\right)\\right]"
},
{
"math_id": 9,
"text": "b"
},
{
"math_id": 10,
"text": "P_{A}"
},
{
"math_id": 11,
"text": "\\theta = {bP_{A}\\over (1+bP_{A})}"
},
{
"math_id": 12,
"text": "p_{e}"
},
{
"math_id": 13,
"text": "p"
},
{
"math_id": 14,
"text": "\\theta ={X p \\over (p_{e}-p)\\left[1+(X-1){p\\over p_{e}}\\right]}"
},
{
"math_id": 15,
"text": "\\tau_{d}=(1/v_{1})\\exp(E_{d}/kT_{s})"
},
{
"math_id": 16,
"text": "e_T"
},
{
"math_id": 17,
"text": "\\varepsilon = -(\\alpha_f-\\alpha_s)(T-T_0)"
},
{
"math_id": 18,
"text": "\\varepsilon"
},
{
"math_id": 19,
"text": "\\alpha_f"
},
{
"math_id": 20,
"text": "\\alpha_s"
},
{
"math_id": 21,
"text": "T"
},
{
"math_id": 22,
"text": "T_0"
},
{
"math_id": 23,
"text": "\\varepsilon=-e_T/3"
},
{
"math_id": 24,
"text": "\\varepsilon\\approx{a_s-a_f \\over a_f}"
},
{
"math_id": 25,
"text": "a_s"
},
{
"math_id": 26,
"text": "a_f"
},
{
"math_id": 27,
"text": "\\kappa=\\frac{6\\langle \\sigma \\rangle h_f}{M_sh^2_s}"
},
{
"math_id": 28,
"text": "M_s = \\frac{\\Epsilon}{1-\\upsilon}"
},
{
"math_id": 29,
"text": "\\Epsilon"
},
{
"math_id": 30,
"text": "\\upsilon"
},
{
"math_id": 31,
"text": "h_s"
},
{
"math_id": 32,
"text": "h_f"
},
{
"math_id": 33,
"text": "\\langle \\sigma \\rangle"
},
{
"math_id": 34,
"text": "\\langle \\sigma \\rangle = \\frac{1}{h_f} \\int_{0}^{h_f} \\sigma(z) dz"
},
{
"math_id": 35,
"text": "z"
},
{
"math_id": 36,
"text": "\\sigma(z)"
},
{
"math_id": 37,
"text": "\\langle \\sigma \\rangle h_f"
},
{
"math_id": 38,
"text": "\\frac{6}{M_s h_s^2}"
},
{
"math_id": 39,
"text": "\\frac{d\\kappa}{dt} \\propto \\sigma(h_f) \\frac{\\partial h_f}{\\partial t} + \\int_{0}^{h_f} \\frac{\\partial \\sigma(z,t)}{\\partial t}dz"
},
{
"math_id": 40,
"text": "\\sigma(h_f)"
},
{
"math_id": 41,
"text": "\\sigma (h_f) \\propto \\frac{\\frac{\\partial \\kappa}{\\partial t}}{\\frac{\\partial h_f}{\\partial t}} = \\frac{d \\kappa}{dh}"
},
{
"math_id": 42,
"text": "P"
},
{
"math_id": 43,
"text": "h"
},
{
"math_id": 44,
"text": "P = \\alpha (h - h_f)^m"
},
{
"math_id": 45,
"text": "A"
},
{
"math_id": 46,
"text": "H = \\frac {P_{max}}{A}"
},
{
"math_id": 47,
"text": "S = \\beta \\frac{2}{\\surd\\pi} E_{eff} \\surd A "
},
{
"math_id": 48,
"text": "E_{eff}"
},
{
"math_id": 49,
"text": "\\frac{1}{E_{eff}} = \\frac{1-\\nu^2}{E} + \\frac{1-\\nu^2}{E_i} "
},
{
"math_id": 50,
"text": "a/t"
},
{
"math_id": 51,
"text": "a"
},
{
"math_id": 52,
"text": "t"
}
] | https://en.wikipedia.org/wiki?curid=1033036 |
1033045 | Marcinkiewicz interpolation theorem | Mathematical theorem discovered by Józef Marcinkiewicz
In mathematics, the Marcinkiewicz interpolation theorem, discovered by Józef Marcinkiewicz (1939), is a result bounding the norms of non-linear operators acting on "L"p spaces.
Marcinkiewicz' theorem is similar to the Riesz–Thorin theorem about linear operators, but also applies to non-linear operators.
Preliminaries.
Let "f" be a measurable function with real or complex values, defined on a measure space ("X", "F", ω). The distribution function of "f" is defined by
formula_0
Then "f" is called weak formula_1 if there exists a constant "C" such that the distribution function of "f" satisfies the following inequality for all "t" > 0:
formula_2
The smallest constant "C" in the inequality above is called the weak formula_1 norm and is usually denoted by formula_3 or formula_4 Similarly the space is usually denoted by "L"1,"w" or "L"1,∞.
Any formula_1 function belongs to "L"1,"w" and in addition one has the inequality
formula_8
This is nothing but Markov's inequality (aka Chebyshev's Inequality). The converse is not true. For example, the function 1/"x" belongs to "L"1,"w" but not to "L"1.
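To see the example concretely, the following short numerical check (a sketch in Python, restricting for illustration to the interval (0,1) with Lebesgue measure, an assumption not made explicit in the text) confirms that "t"·λ"f"("t") stays bounded for "f"("x") = 1/"x", while the integral of |"f"| diverges.

```python
import math
import numpy as np

# Sketch: f(x) = 1/x on (0, 1) with Lebesgue measure (interval chosen for illustration).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 1_000_000)

# t * lambda_f(t) stays bounded (here by 1), so f lies in weak L1 ...
for t in (0.5, 1.0, 10.0, 100.0, 1000.0):
    lam = np.mean(1.0 / x > t)        # Monte Carlo estimate of the distribution function
    print(f"t = {t:7.1f}   t * lambda_f(t) ~ {t * lam:.3f}")

# ... while the L1 norm diverges: the integral of 1/x over (eps, 1) equals log(1/eps).
for eps in (1e-2, 1e-4, 1e-8):
    print(f"integral of 1/x over ({eps:g}, 1) = {math.log(1.0 / eps):.1f}")
```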
Similarly, one may define the weak formula_9 space as the space of all functions "f" such that formula_10 belong to "L"1,"w", and the weak formula_9 norm using
formula_11
More directly, the "L""p","w" norm is defined as the best constant "C" in the inequality
formula_12
for all "t" > 0.
Formulation.
Informally, Marcinkiewicz's theorem is
Theorem. Let "T" be a bounded linear operator from formula_9 to formula_13 and at the same time from formula_14 to formula_15. Then "T" is also a bounded operator from formula_16 to formula_16 for any "r" between "p" and "q".
In other words, even if one only requires weak boundedness on the extremes "p" and "q", regular boundedness still holds. To make this more formal, one has to explain that "T" is bounded only on a dense subset and can be completed. See Riesz-Thorin theorem for these details.
Where Marcinkiewicz's theorem is weaker than the Riesz–Thorin theorem is in the estimates of the norm. The theorem gives bounds for the formula_16 norm of "T", but this bound increases to infinity as "r" converges to either "p" or "q". Specifically, suppose that
formula_17
formula_18
so that the operator norm of "T" from "L""p" to "L""p","w" is at most "N""p", and the operator norm of "T" from "L""q" to "L""q","w" is at most "N""q". Then the following interpolation inequality holds for all "r" between "p" and "q" and all "f" ∈ "L""r":
formula_19
where
formula_20
and
formula_21
The constants δ and γ can also be given for "q" = ∞ by passing to the limit.
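To see how the interpolated bound degrades near the endpoints, the following sketch (in Python; the function name and the values of "p", "q", "N""p", "N""q" and "r" are placeholders chosen here for illustration, not taken from the text) evaluates δ, γ and the resulting bound on the formula_16 operator norm.

```python
def marcinkiewicz_bound(p, q, r, N_p, N_q):
    """Bound gamma * N_p**delta * N_q**(1 - delta) on the L^r operator norm,
    using the delta and gamma defined above (finite q case)."""
    if not (p < r < q):
        raise ValueError("r must lie strictly between p and q")
    delta = p * (q - r) / (r * (q - p))
    gamma = 2.0 * (r * (q - p) / ((r - p) * (q - r))) ** (1.0 / r)
    return gamma * N_p ** delta * N_q ** (1.0 - delta)

# Placeholder values: weak-type norms N_p = N_q = 1 between p = 1 and q = 2.
for r in (1.1, 1.5, 1.9, 1.99):
    print(f"r = {r:<5}  bound = {marcinkiewicz_bound(1.0, 2.0, r, 1.0, 1.0):.2f}")
# The bound blows up as r approaches either endpoint, as noted in the text.
```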
A version of the theorem also holds more generally if "T" is only assumed to be a quasilinear operator in the following sense: there exists a constant "C" > 0 such that "T" satisfies
formula_22
for almost every "x". The theorem holds precisely as stated, except with γ replaced by
formula_23
An operator "T" (possibly quasilinear) satisfying an estimate of the form
formula_24
is said to be of weak type ("p","q"). An operator is simply of type ("p","q") if "T" is a bounded transformation from "Lp" to "Lq":
formula_25
A more general formulation of the interpolation theorem is as follows: if "T" is an operator (possibly quasilinear) of weak type ("p"0, "q"0) and of weak type ("p"1, "q"1), where "p"0 ≤ "q"0, "p"1 ≤ "q"1 and "q"0 ≠ "q"1, then for every 0 < θ < 1 the operator "T" is of (strong) type ("p", "q"), where the exponents are given by
formula_26
The latter formulation follows from the former through an application of Hölder's inequality and a duality argument.
Applications and examples.
A famous application example is the Hilbert transform. Viewed as a multiplier, the Hilbert transform of a function "f" can be computed by first taking the Fourier transform of "f", then multiplying by the sign function, and finally applying the inverse Fourier transform.
Hence Parseval's theorem easily shows that the Hilbert transform is bounded from formula_27 to formula_27. A much less obvious fact is that it is bounded from formula_1 to formula_28. Hence Marcinkiewicz's theorem shows that it is bounded from formula_9 to formula_9 for any 1 < "p" < 2. Duality arguments show that it is also bounded for 2 < "p" < ∞. In fact, the Hilbert transform is really unbounded for "p" equal to 1 or ∞.
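The multiplier description above is easy to try numerically. The sketch below uses NumPy's discrete Fourier transform as a stand-in for the continuous one and assumes the common convention that the Hilbert transform corresponds to multiplication by -i·sgn(ξ); with that convention it recovers the familiar identity H[cos] = sin.

```python
import numpy as np

def hilbert_transform(f_samples):
    """Discrete approximation of the Hilbert transform via its Fourier multiplier.
    Convention assumed here: (Hf)^(xi) = -i * sgn(xi) * f^(xi)."""
    n = len(f_samples)
    xi = np.fft.fftfreq(n)                       # discrete frequencies
    multiplier = -1j * np.sign(xi)
    return np.fft.ifft(multiplier * np.fft.fft(f_samples)).real

t = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
error = np.max(np.abs(hilbert_transform(np.cos(3 * t)) - np.sin(3 * t)))
print(f"max |H[cos(3t)] - sin(3t)| = {error:.2e}")   # close to machine precision
```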
Another famous example is the Hardy–Littlewood maximal function, which is only a sublinear operator rather than a linear one. While formula_9 to formula_9 bounds can be derived immediately from the formula_1 to weak formula_1 estimate by a clever change of variables, Marcinkiewicz interpolation is a more intuitive approach. Since the Hardy–Littlewood maximal function is trivially bounded from formula_29 to formula_29, strong boundedness for all formula_30 follows immediately from the weak (1,1) estimate and interpolation. The weak (1,1) estimate can be obtained from the Vitali covering lemma.
History.
The theorem was first announced by Marcinkiewicz (1939), who showed this result to Antoni Zygmund shortly before he died in World War II. The theorem was almost forgotten by Zygmund, and was absent from his original works on the theory of singular integral operators. Later Zygmund realized that Marcinkiewicz's result could greatly simplify his work, at which time he published his former student's theorem together with a generalization of his own.
In 1964 Richard A. Hunt and Guido Weiss published a new proof of the Marcinkiewicz interpolation theorem.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\lambda_f(t) = \\omega\\left\\{x\\in X\\mid |f(x)| > t\\right\\}."
},
{
"math_id": 1,
"text": "L^1"
},
{
"math_id": 2,
"text": "\\lambda_f(t)\\leq \\frac{C}{t}."
},
{
"math_id": 3,
"text": "\\|f\\|_{1,w}"
},
{
"math_id": 4,
"text": "\\|f\\|_{1,\\infty}."
},
{
"math_id": 5,
"text": " (0,1) "
},
{
"math_id": 6,
"text": " 1/x "
},
{
"math_id": 7,
"text": " 1/(1-x) "
},
{
"math_id": 8,
"text": "\\|f\\|_{1,w}\\leq \\|f\\|_1."
},
{
"math_id": 9,
"text": "L^p"
},
{
"math_id": 10,
"text": "|f|^p"
},
{
"math_id": 11,
"text": "\\|f\\|_{p,w}= \\left \\||f|^p \\right \\|_{1,w}^{\\frac{1}{p}}."
},
{
"math_id": 12,
"text": "\\lambda_f(t) \\le \\frac{C^p}{t^p}"
},
{
"math_id": 13,
"text": "L^{p,w}"
},
{
"math_id": 14,
"text": "L^q"
},
{
"math_id": 15,
"text": "L^{q,w}"
},
{
"math_id": 16,
"text": "L^r"
},
{
"math_id": 17,
"text": "\\|Tf\\|_{p,w} \\le N_p\\|f\\|_p,"
},
{
"math_id": 18,
"text": "\\|Tf\\|_{q,w} \\le N_q\\|f\\|_q,"
},
{
"math_id": 19,
"text": "\\|Tf\\|_r\\le \\gamma N_p^\\delta N_q^{1-\\delta}\\|f\\|_r"
},
{
"math_id": 20,
"text": "\\delta=\\frac{p(q-r)}{r(q-p)}"
},
{
"math_id": 21,
"text": "\\gamma=2\\left(\\frac{r(q-p)}{(r-p)(q-r)}\\right)^{1/r}."
},
{
"math_id": 22,
"text": "|T(f+g)(x)| \\le C(|Tf(x)|+|Tg(x)|)"
},
{
"math_id": 23,
"text": "\\gamma=2C\\left(\\frac{r(q-p)}{(r-p)(q-r)}\\right)^{1/r}."
},
{
"math_id": 24,
"text": "\\|Tf\\|_{q,w}\\le C\\|f\\|_p"
},
{
"math_id": 25,
"text": "\\|Tf\\|_q\\le C\\|f\\|_p."
},
{
"math_id": 26,
"text": "\\frac{1}{p} = \\frac{1-\\theta}{p_0}+\\frac{\\theta}{p_1},\\quad \\frac{1}{q} = \\frac{1-\\theta}{q_0} + \\frac{\\theta}{q_1}."
},
{
"math_id": 27,
"text": "L^2"
},
{
"math_id": 28,
"text": "L^{1,w}"
},
{
"math_id": 29,
"text": "L^\\infty"
},
{
"math_id": 30,
"text": "p>1"
}
] | https://en.wikipedia.org/wiki?curid=1033045 |
10330610 | Crystal base | Representation of a quantum group
A crystal base for a representation of a quantum group on a formula_0-vector space
is not a base of that vector space but rather a formula_1-base of formula_2 where formula_3 is a formula_0-lattice in that vector space. Crystal bases appeared in the work of Kashiwara (1990) and also in the work of Lusztig (1990). They can be viewed as specializations as formula_4 of the canonical basis defined by Lusztig (1990).
Definition.
As a consequence of its defining relations, the quantum group formula_5 can be regarded as a Hopf algebra over the field of all rational functions of an indeterminate "q" over formula_1, denoted formula_6.
For simple root formula_7 and non-negative integer formula_8, define
formula_9
In an integrable module formula_10, and for weight formula_11, a vector formula_12 (i.e. a vector formula_13 in formula_10 with weight formula_11) can be uniquely decomposed into the sums
formula_14
where formula_15, formula_16, formula_17 only if formula_18, and formula_19 only if formula_20.
Linear mappings formula_21 can be defined on formula_22 by
formula_23
formula_24
Let formula_25 be the integral domain of all rational functions in formula_6 which are regular at formula_26 ("i.e." a rational function formula_27 is an element of formula_25 if and only if there exist polynomials formula_28 and formula_29 in the polynomial ring formula_30 such that formula_31, and formula_32).
A crystal base for formula_10 is an ordered pair formula_33, such that
* formula_3 is a free formula_25-submodule of formula_10 such that formula_34
* formula_35 is a formula_1-basis of the vector space formula_36 over formula_37
* formula_38 and formula_39, where formula_40 and formula_41
* formula_42 and formula_43
* formula_44 and formula_45
* formula_46
To put this into a more informal setting, the actions of formula_47 and formula_48 are generally singular at formula_26 on an integrable module formula_10. The linear mappings formula_49 and formula_50 on the module are introduced so that the actions of formula_51 and formula_52 are regular at formula_26 on the module. There exists a formula_6-basis of weight vectors formula_53 for formula_10, with respect to which the actions of formula_49 and formula_50 are regular at formula_26 for all "i". The module is then restricted to the free formula_25-module generated by the basis, and the basis vectors, the formula_25-submodule and the actions of formula_49 and formula_50 are evaluated at formula_26. Furthermore, the basis can be chosen such that at formula_26, for all formula_54, formula_49 and formula_50 are represented by mutual transposes, and map basis vectors to basis vectors or 0.
A crystal base can be represented by a directed graph with labelled edges. Each vertex of the graph represents an element of the formula_55-basis formula_35 of formula_36, and a directed edge, labelled by "i", and directed from vertex formula_56 to vertex formula_57, represents that formula_58 (and, equivalently, that formula_59), where formula_60 is the basis element represented by formula_56, and formula_61 is the basis element represented by formula_57. The graph completely determines the actions of formula_49 and formula_50 at formula_26. If an integrable module has a crystal base, then the module is irreducible if and only if the graph representing the crystal base is connected (a graph is called "connected" if the set of vertices cannot be partitioned into the union of nontrivial disjoint subsets formula_62 and formula_63 such that there are no edges joining any vertex in formula_62 to any vertex in formula_63).
For any integrable module with a crystal base, the weight spectrum for the crystal base is the same as the weight spectrum for the module, and therefore the weight spectrum for the crystal base is the same as the weight spectrum for the corresponding module of the appropriate Kac–Moody algebra. The multiplicities of the weights in the crystal base are also the same as their multiplicities in the corresponding module of the appropriate Kac–Moody algebra.
It is a theorem of Kashiwara that every integrable highest weight module has a crystal base. Similarly, every integrable lowest weight module has a crystal base.
Tensor products of crystal bases.
Let formula_10 be an integrable module with crystal base formula_33 and formula_64 be an integrable module with crystal base formula_65. For crystal bases, the coproduct formula_66, given by
formula_67
is adopted. The integrable module formula_68 has crystal base formula_69, where formula_70. For a basis vector formula_71, define
formula_72
formula_73
The actions of formula_49 and formula_50 on formula_74 are given by
formula_75
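Because the tensor-product rule above is purely combinatorial, it can be explored with a few lines of code. The following Python sketch is a hypothetical illustration, not taken from the article: it uses the crystal of the two-dimensional module of quantized sl2 as a toy example, encodes a crystal by its formula_50-arrows for a single index "i", applies the rule displayed above, and reads off the connected components of the tensor product, recovering the decomposition of the product of two two-dimensional modules into a three-dimensional and a one-dimensional piece.

```python
# Crystal of the two-dimensional quantum sl2 module: b0 --f--> b1 (one index i).
# A crystal is encoded as a dict b -> f_tilde(b), with None standing for 0.
f_tilde = {"b0": "b1", "b1": None}
e_tilde = {v: k for k, v in f_tilde.items() if v is not None}   # inverse arrows

def phi(b, f):                       # phi_i(b) = max n such that f^n b != 0
    n = 0
    while f.get(b) is not None:
        b, n = f[b], n + 1
    return n

def eps(b, e):                       # epsilon_i(b) = max n such that e^n b != 0
    n = 0
    while b in e:
        b, n = e[b], n + 1
    return n

def tensor_f(b, bp):
    """f_tilde acting on b (x) bp, following the rule displayed above."""
    if phi(b, f_tilde) > eps(bp, e_tilde):
        return (f_tilde[b], bp) if f_tilde[b] is not None else None
    return (b, f_tilde[bp]) if f_tilde[bp] is not None else None

# Build the f-arrows of the tensor-product crystal and list its strings.
vertices = [(b, bp) for b in f_tilde for bp in f_tilde]
arrows = {v: tensor_f(*v) for v in vertices}
targets = {w for w in arrows.values() if w is not None}

for source in (v for v in vertices if v not in targets):   # highest-weight vertices
    string, w = [], source
    while w is not None:
        string.append(w)
        w = arrows[w]
    print(len(string), "->", string)
# Expected: one string of 3 vertices and one isolated vertex,
# i.e. a 3-dimensional and a 1-dimensional irreducible component.
```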
The decomposition of the product of two integrable highest weight modules into irreducible submodules is determined by the decomposition of the graph of the crystal base into its connected components (i.e. the highest weights of the submodules are determined, and the multiplicity of each highest weight is determined). | [
{
"math_id": 0,
"text": "\\Q(v)"
},
{
"math_id": 1,
"text": "\\Q"
},
{
"math_id": 2,
"text": "L/vL"
},
{
"math_id": 3,
"text": "L"
},
{
"math_id": 4,
"text": "v \\to 0"
},
{
"math_id": 5,
"text": "U_q(G)"
},
{
"math_id": 6,
"text": "\\Q(q)"
},
{
"math_id": 7,
"text": "\\alpha_i"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "\\begin{align}\ne_i^{(0)} = f_i^{(0)} &= 1 \\\\\ne_i^{(n)} &= \\frac{e_i^n}{[n]_{q_i}!} \\\\[6pt]\nf_i^{(n)} &= \\frac{f_i^n}{[n]_{q_i}!}\n\\end{align}"
},
{
"math_id": 10,
"text": "M"
},
{
"math_id": 11,
"text": "\\lambda"
},
{
"math_id": 12,
"text": "u \\in M_{\\lambda}"
},
{
"math_id": 13,
"text": "u"
},
{
"math_id": 14,
"text": "u = \\sum_{n=0}^\\infty f_i^{(n)} u_n = \\sum_{n=0}^\\infty e_i^{(n)} v_n,"
},
{
"math_id": 15,
"text": "u_n \\in \\ker(e_i) \\cap M_{\\lambda + n \\alpha_i}"
},
{
"math_id": 16,
"text": "v_n \\in \\ker(f_i) \\cap M_{\\lambda - n \\alpha_i}"
},
{
"math_id": 17,
"text": "u_n \\ne 0"
},
{
"math_id": 18,
"text": "n + \\frac{2 (\\lambda,\\alpha_i)}{(\\alpha_i,\\alpha_i)} \\ge 0"
},
{
"math_id": 19,
"text": "v_n \\ne 0"
},
{
"math_id": 20,
"text": "n - \\frac{2 (\\lambda,\\alpha_i)}{(\\alpha_i,\\alpha_i)} \\ge 0"
},
{
"math_id": 21,
"text": "\\tilde{e}_i, \\tilde{f}_i : M \\to M"
},
{
"math_id": 22,
"text": "M_\\lambda"
},
{
"math_id": 23,
"text": "\\tilde{e}_i u = \\sum_{n=1}^\\infty f_i^{(n-1)} u_n = \\sum_{n=0}^\\infty e_i^{(n+1)} v_n,"
},
{
"math_id": 24,
"text": "\\tilde{f}_i u = \\sum_{n=0}^\\infty f_i^{(n+1)} u_n = \\sum_{n=1}^\\infty e_i^{(n-1)} v_n."
},
{
"math_id": 25,
"text": "A"
},
{
"math_id": 26,
"text": "q = 0"
},
{
"math_id": 27,
"text": "f(q)"
},
{
"math_id": 28,
"text": "g(q)"
},
{
"math_id": 29,
"text": "h(q)"
},
{
"math_id": 30,
"text": "\\Q[q]"
},
{
"math_id": 31,
"text": "h(0) \\ne 0"
},
{
"math_id": 32,
"text": "f(q) = g(q)/h(q)"
},
{
"math_id": 33,
"text": "(L,B)"
},
{
"math_id": 34,
"text": "M = \\Q(q) \\otimes_A L;"
},
{
"math_id": 35,
"text": "B"
},
{
"math_id": 36,
"text": "L/qL"
},
{
"math_id": 37,
"text": "\\Q,"
},
{
"math_id": 38,
"text": "L = \\oplus_\\lambda L_\\lambda"
},
{
"math_id": 39,
"text": "B = \\sqcup_\\lambda B_\\lambda"
},
{
"math_id": 40,
"text": "L_\\lambda = L \\cap M_\\lambda"
},
{
"math_id": 41,
"text": "B_\\lambda = B \\cap (L_\\lambda/qL_\\lambda),"
},
{
"math_id": 42,
"text": "\\tilde{e}_i L \\subset L"
},
{
"math_id": 43,
"text": "\\tilde{f}_i L \\subset L \\text{ for all } i ,"
},
{
"math_id": 44,
"text": "\\tilde{e}_i B \\subset B \\cup \\{0\\}"
},
{
"math_id": 45,
"text": "\\tilde{f}_i B \\subset B \\cup \\{0\\}\\text{ for all } i, "
},
{
"math_id": 46,
"text": "\\text{for all }b \\in B\\text{ and }b' \\in B,\\text{ and for all }i,\\quad\\tilde{e}_i b = b'\\text{ if and only if }\\tilde{f}_i b' = b."
},
{
"math_id": 47,
"text": "e_i f_i"
},
{
"math_id": 48,
"text": "f_i e_i"
},
{
"math_id": 49,
"text": "\\tilde{e}_i"
},
{
"math_id": 50,
"text": "\\tilde{f}_i"
},
{
"math_id": 51,
"text": "\\tilde{e}_i \\tilde{f}_i"
},
{
"math_id": 52,
"text": "\\tilde{f}_i \\tilde{e}_i"
},
{
"math_id": 53,
"text": "\\tilde{B}"
},
{
"math_id": 54,
"text": "i"
},
{
"math_id": 55,
"text": "\\mathbb Q"
},
{
"math_id": 56,
"text": "v_1"
},
{
"math_id": 57,
"text": "v_2"
},
{
"math_id": 58,
"text": "b_2 = \\tilde{f}_i b_1"
},
{
"math_id": 59,
"text": "b_1 = \\tilde{e}_i b_2"
},
{
"math_id": 60,
"text": "b_1"
},
{
"math_id": 61,
"text": "b_2"
},
{
"math_id": 62,
"text": "V_1"
},
{
"math_id": 63,
"text": "V_2"
},
{
"math_id": 64,
"text": "M'"
},
{
"math_id": 65,
"text": "(L',B')"
},
{
"math_id": 66,
"text": "\\Delta"
},
{
"math_id": 67,
"text": "\\begin{align}\n\\Delta(k_{\\lambda}) &= k_\\lambda \\otimes k_\\lambda \\\\\n\\Delta(e_i) &= e_i \\otimes k_i^{-1} + 1 \\otimes e_i \\\\ \n\\Delta(f_i) &= f_i \\otimes 1 + k_i \\otimes f_i\n\\end{align}"
},
{
"math_id": 68,
"text": "M \\otimes_{\\Q(q)} M'"
},
{
"math_id": 69,
"text": "(L \\otimes_A L',B \\otimes B')"
},
{
"math_id": 70,
"text": "B \\otimes B' = \\left \\{ b \\otimes_{\\Q} b' : b \\in B,\\ b' \\in B' \\right \\}"
},
{
"math_id": 71,
"text": "b \\in B"
},
{
"math_id": 72,
"text": "\\varepsilon_i(b) = \\max \\left \\{ n \\ge 0 : \\tilde{e}_i^n b \\ne 0 \\right \\}"
},
{
"math_id": 73,
"text": "\\varphi_i(b) = \\max \\left \\{ n \\ge 0 : \\tilde{f}_i^n b \\ne 0 \\right \\}"
},
{
"math_id": 74,
"text": "b \\otimes b'"
},
{
"math_id": 75,
"text": "\\begin{align}\n\\tilde{e}_i (b \\otimes b') &= \\begin{cases} \\tilde{e}_i b \\otimes b' & \\varphi_i(b) \\ge \\varepsilon_i(b') \\\\ b \\otimes \\tilde{e}_i b' & \\varphi_i(b) < \\varepsilon_i(b') \\end{cases} \\\\\n\\tilde{f}_i (b \\otimes b') &= \\begin{cases} \\tilde{f}_i b \\otimes b' & \\varphi_i(b) > \\varepsilon_i(b') \\\\ b \\otimes \\tilde{f}_i b' & \\varphi_i(b) \\le \\varepsilon_i(b') \\end{cases} \n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=10330610 |
1033084 | Gaseous diffusion | Old method of enriching uranium
Gaseous diffusion is a technology that was used to produce enriched uranium by forcing gaseous uranium hexafluoride (UF6) through microporous membranes. This produces a slight separation (enrichment factor 1.0043) between the molecules containing uranium-235 (235U) and uranium-238 (238U). By use of a large cascade of many stages, high separations can be achieved. It was the first process to be developed that was capable of producing enriched uranium in industrially useful quantities, but is nowadays considered obsolete, having been superseded by the more-efficient gas centrifuge process (enrichment factor 1.05 to 1.2).
Gaseous diffusion was devised by Francis Simon and Nicholas Kurti at the Clarendon Laboratory in 1940, tasked by the MAUD Committee with finding a method for separating uranium-235 from uranium-238 in order to produce a bomb for the British Tube Alloys project. The prototype gaseous diffusion equipment itself was manufactured by Metropolitan-Vickers (MetroVick) at Trafford Park, Manchester, at a cost of £150,000 for four units, for the M. S. Factory, Valley. This work was later transferred to the United States when the Tube Alloys project became subsumed by the later Manhattan Project.
Background.
Of the 33 known radioactive primordial nuclides, two (235U and 238U) are isotopes of uranium. These two isotopes are similar in many ways, except that only 235U is fissile (capable of sustaining a nuclear chain reaction of nuclear fission with thermal neutrons). In fact, 235U is the only naturally occurring fissile nucleus. Because natural uranium is only about 0.72% 235U by mass, it must be enriched to a concentration of 2–5% to be able to support a continuous nuclear chain reaction when normal water is used as the moderator. The product of this enrichment process is called enriched uranium.
Technology.
Gaseous diffusion is based on Graham's law, which states that the rate of effusion of a gas is inversely proportional to the square root of its molecular mass. For example, in a box with a microporous membrane containing a mixture of two gases, the lighter molecules will pass out of the container more rapidly than the heavier molecules, if the pore diameter is smaller than the mean free path length (molecular flow). The gas leaving the container is somewhat enriched in the lighter molecules, while the residual gas is somewhat depleted. A single container wherein the enrichment process takes place through gaseous diffusion is called a diffuser.
UF6 is the only compound of uranium sufficiently volatile to be used in the gaseous diffusion process. Fortunately, fluorine consists of only a single isotope 19F, so that the 1% difference in molecular weights between 235UF6 and 238UF6 is due only to the difference in weights of the uranium isotopes. For these reasons, UF6 is the only choice as a feedstock for the gaseous diffusion process. UF6, a solid at room temperature, sublimes at 56.4 °C (133 °F) at 1 atmosphere. The triple point is at 64.05 °C and 1.5 bar. Applying Graham's law gives:
formula_0
where:
"Rate1" is the rate of effusion of 235UF6.
"Rate2" is the rate of effusion of 238UF6.
"M1" is the molar mass of 235UF6 = 235.043930 + 6 × 18.998403 = 349.034348 g·mol−1
"M2" is the molar mass of 238UF6 = 238.050788 + 6 × 18.998403 = 352.041206 g·mol−1
This explains the 0.4% difference in the average velocities of 235UF6 molecules over that of 238UF6 molecules.
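These numbers are easy to reproduce. The Python sketch below recomputes the ideal single-stage factor from the molar masses given above and then adds a rough ideal-cascade stage estimate (the abundance-ratio stage-count formula is an added illustration, not taken from the text; real barriers achieve less than the ideal factor, so actual plants need considerably more stages).

```python
import math

M_U235, M_U238, M_F = 235.043930, 238.050788, 18.998403
m1 = M_U235 + 6 * M_F        # molar mass of 235UF6
m2 = M_U238 + 6 * M_F        # molar mass of 238UF6

alpha = math.sqrt(m2 / m1)   # ideal single-stage separation factor (Graham's law)
print(f"ideal stage factor alpha = {alpha:.6f}")     # ~1.004298

def ideal_stages(x_feed, x_product, a=alpha):
    """Rough ideal-cascade stage count between two 235U abundances
    (added illustration; ignores tails stripping and barrier inefficiency)."""
    R = lambda x: x / (1.0 - x)                      # abundance ratio
    return math.log(R(x_product) / R(x_feed)) / math.log(a)

print(f"0.72% -> 4%  : ~{ideal_stages(0.0072, 0.04):.0f} ideal stages")
print(f"0.72% -> 90% : ~{ideal_stages(0.0072, 0.90):.0f} ideal stages")
```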
UF6 is a highly corrosive substance. It is an oxidant and a Lewis acid which is able to bind to fluoride; for instance, the reaction of copper(II) fluoride with uranium hexafluoride in acetonitrile is reported to form copper(II) heptafluorouranate(VI), Cu(UF7)2. It reacts with water to form a solid compound, and is very difficult to handle on an industrial scale. As a consequence, internal gaseous pathways must be fabricated from austenitic stainless steel and other heat-stabilized metals. Non-reactive fluoropolymers such as Teflon must be applied as a coating to all valves and seals in the system.
Gaseous diffusion plants typically use aggregate barriers (porous membranes) constructed of sintered nickel or aluminum, with a pore size of 10–25 nanometers (this is less than one-tenth the mean free path of the UF6 molecule). They may also use film-type barriers, which are made by boring pores through an initially nonporous medium. One way this can be done is by removing one constituent in an alloy, for instance using hydrogen chloride to remove the zinc from silver-zinc (Ag-Zn) or sodium hydroxide to remove aluminum from Ni-Al alloy.
Because the molecular weights of 235UF6 and 238UF6 are nearly equal, very little separation of the 235U and 238U occurs in a single pass through a barrier, that is, in one diffuser. It is therefore necessary to connect a great many diffusers together in a sequence of stages, using the outputs of the preceding stage as the inputs for the next stage. Such a sequence of stages is called a "cascade". In practice, diffusion cascades require thousands of stages, depending on the desired level of enrichment.
All components of a diffusion plant must be maintained at an appropriate temperature and pressure to assure that the UF6 remains in the gaseous phase. The gas must be compressed at each stage to make up for a loss in pressure across the diffuser. This leads to compression heating of the gas, which then must be cooled before entering the diffuser. The requirements for pumping and cooling make diffusion plants enormous consumers of electric power. Because of this, gaseous diffusion was the most expensive method used until recently for producing enriched uranium.
History.
Researchers working on the Manhattan Project in Oak Ridge, Tennessee, developed several different methods for the separation of isotopes of uranium. Three of these methods were used sequentially at three different plants in Oak Ridge to produce the 235U for "Little Boy" and other early nuclear weapons. In the first step, the S-50 uranium enrichment facility used the thermal diffusion process to enrich the uranium from 0.7% up to nearly 2% 235U. This product was then fed into the gaseous diffusion process at the K-25 plant, the product of which was around 23% 235U. Finally, this material was fed into calutrons at the Y-12 plant. These machines (a type of mass spectrometer) employed electromagnetic isotope separation to boost the final 235U concentration to about 84%.
The preparation of UF6 feedstock for the K-25 gaseous diffusion plant was the first ever application for commercially produced fluorine, and significant obstacles were encountered in the handling of both fluorine and UF6. For example, before the K-25 gaseous diffusion plant could be built, it was first necessary to develop non-reactive chemical compounds that could be used as coatings, lubricants and gaskets for the surfaces that would come into contact with the UF6 gas (a highly reactive and corrosive substance). Scientists of the Manhattan Project recruited William T. Miller, a professor of organic chemistry at Cornell University, to synthesize and develop such materials, because of his expertise in organofluorine chemistry. Miller and his team developed several novel non-reactive chlorofluorocarbon polymers that were used in this application.
Calutrons were inefficient and expensive to build and operate. As soon as the engineering obstacles posed by the gaseous diffusion process had been overcome and the gaseous diffusion cascades began operating at Oak Ridge in 1945, all of the calutrons were shut down. The gaseous diffusion technique then became the preferred technique for producing enriched uranium.
At the time of their construction in the early 1940s, the gaseous diffusion plants were some of the largest buildings ever constructed. Large gaseous diffusion plants were constructed by the United States, the Soviet Union (including a plant that is now in Kazakhstan), the United Kingdom, France, and China. Most of these have now closed or are expected to close, unable to compete economically with newer enrichment techniques. Some of the technology used in pumps and membranes remains top secret. Some of the materials that were used remain subject to export controls, as a part of the continuing effort to control nuclear proliferation.
Current status.
In 2008, gaseous diffusion plants in the United States and France still generated 33% of the world's enriched uranium. However, the French plant (Eurodif's Georges-Besse plant) definitively closed in June 2012, and the Paducah Gaseous Diffusion Plant in Kentucky, operated by the United States Enrichment Corporation (USEC) (the last fully functioning uranium enrichment facility in the United States to employ the gaseous diffusion process), ceased enrichment in 2013. The only other such facility in the United States, the Portsmouth Gaseous Diffusion Plant in Ohio, ceased enrichment activities in 2001. Since 2010, the Ohio site has been used mainly by AREVA, a French conglomerate, for the conversion of depleted UF6 to uranium oxide.
As existing gaseous diffusion plants became obsolete, they were replaced by second generation gas centrifuge technology, which requires far less electric power to produce equivalent amounts of separated uranium. AREVA replaced its Georges Besse gaseous diffusion plant with the Georges Besse II centrifuge plant.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{\\mbox{Rate}_1 \\over \\mbox{Rate}_2}=\\sqrt{M_2 \\over M_1}=\\sqrt{352.041206 \\over 349.034348}=1.004298..."
}
] | https://en.wikipedia.org/wiki?curid=1033084 |
1033299 | Shift-share analysis | A shift-share analysis, used in regional science, political economy, and urban studies, determines what portions of regional economic growth or decline can be attributed to national, economic industry, and regional factors. The analysis helps identify industries where a regional economy has competitive advantages over the larger economy. A shift-share analysis takes the change over time of an economic variable, such as employment, within industries of a regional economy, and divides that change into various components. A traditional shift-share analysis splits regional changes into just three components, but other models have evolved that expand the decomposition into additional components.
Overview.
A shift-share analysis attempts to identify the sources of regional economic changes. The region can be a town, city, country, statistical area, state, or any other region of the country. The analysis examines changes in an economic variable, such as migration, a demographic statistic, firm growth, or firm formations, although employment is most commonly used. The shift-share analysis is performed on a set of economic industries, like those defined by the North American Industry Classification System (NAICS). The analysis separates the regional economic changes within each industry into different categories. Although there are different versions of a shift-share analysis, they all identify national, industry, and regional factors that influence the variable changes.
Traditional model.
The traditional form of the shift-share analysis was developed by Daniel Creamer in the early 1940s, and was later formalized by Edgar S. Dunn in 1960. Also known as the "comparative static model", it examines changes in the economic variable between two years. Changes are calculated for each industry in the analysis, both regionally and nationally. Each regional change is decomposed into three components.
Formula.
The regional change in the variable e within industry i between the two years t and t+n is defined as the sum of the three shift-share effects: national growth effect (NSi), industry mix effect (IMi), and local share effect (RSi).
formula_0
The beginning and ending values of the economic variable within a particular industry are eit and eit+n, respectively. Each of the three effects is defined as a percentage of the beginning value of the economic variable.
formula_1
formula_2
formula_3
The total percent change in the economic variable nationwide for all industries combined is G, while the national and regional industry-specific percent changes are Gi and gi, respectively.
Substituting these three equations into the first equation yields the following expression (the identity from which the decomposition starts), which simply says that the regional economic variable (for industry i) grows at the regional industry-specific rate. Note that usually (in the case of slow growth) 0 < gi < 1, and that gi refers to the whole period from t to t+n.
formula_4
Example.
As an example, a shift-share analysis might be utilized to examine changes in the construction industry of a state's economy over the past decade, using employment as the economic variable studied. Total national employment may have increased 5% over the decade, while national construction employment increased 8%. However, state construction employment decreased 2%, from 100,000 to 98,000 employees, for a net loss of 2,000 employees.
The national growth effect is equal to the beginning 100,000 employees, times the total national growth rate of 5%, for an increase in 5,000 employees. The shift-share analysis implies that state construction would have increased by 5,000 employees, had it followed the same trend as the overall national economy.
The industry mix effect is equal to the original 100,000 employees times the growth in the industry nationwide, which was 8%, minus the total national growth of 5%. This results in an increase in 3,000 employees (100,000 employees times 3%, which is the 8% industry growth minus the 5% total growth). The analysis implies that the state construction would have increased by another 3,000 employees had it followed the industry trends, because the construction industry nationwide performed better than the national economy overall.
The local share effect in this example is equal to the beginning 100,000 employees times the state construction employment growth rate of −2% (it is "negative" because of the loss of employees), minus the national construction growth rate of 8%. This results in 100,000 employees times -10%, for a loss of 10,000 employees. However, the actual employment loss was only 2,000 employees, but that equals the sum of the three effects (5,000 gain + 3,000 gain + 10,000 loss). The analysis implies that local factors lead to a decrease in 10,000 employees in the state construction industry, because the growth in both the national economy and the construction industry should have increased state construction employment by 8,000 employees (the 5,000 national share effect plus the 3,000 industry mix effect).
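The three effects can be computed mechanically from the formulas above. The following Python sketch reproduces the construction example just described (the function name and the figures are those of the illustrative example, not real data).

```python
def shift_share(e_start, G, G_i, g_i):
    """Traditional (comparative static) shift-share decomposition for one industry.
    e_start : regional value of the economic variable at the start year
    G       : total national growth rate over the period
    G_i     : national growth rate of the industry
    g_i     : regional growth rate of the industry
    """
    national_share = e_start * G
    industry_mix = e_start * (G_i - G)
    local_share = e_start * (g_i - G_i)
    return national_share, industry_mix, local_share

# State construction example from the text:
ns, im, rs = shift_share(e_start=100_000, G=0.05, G_i=0.08, g_i=-0.02)
print(f"national share : {ns:+,.0f}")             # +5,000
print(f"industry mix   : {im:+,.0f}")             # +3,000
print(f"local share    : {rs:+,.0f}")             # -10,000
print(f"total change   : {ns + im + rs:+,.0f}")   # -2,000, matching 100,000 -> 98,000
```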
Names and regions.
Shift-share analysts sometimes use different labels for the three effects, although the calculations are the same. National growth effect may be referred to as "national share". Industry mix effect may be referred to as "proportional shift". Local share effect may be referred to as "differential shift", "regional shift", or "competitive share".
In most shift-share analyses, the regional economy is compared to the national economy. However, the techniques may be used to compare any two regions (e.g., comparing a county to its state).
Dynamic model.
In 1988, Richard Barff and Prentice Knight, III, published the dynamic model shift-share analysis. In contrast to the comparative static model, which only considers two years in its analysis (the beginning and ending years), the dynamic model utilizes every year in the study period. Although it requires much more data to perform the calculations, the dynamic model takes into account continuous changes in the three shift-share effects, so the results are less affected by the choice of starting and ending years. The dynamic model is most useful when there are large differences between regional and national growth rates, or large changes in the regional industrial mix.
The dynamic model uses the same techniques as the comparative static model, including the same three shift-share effects. However, in the dynamic model, a time-series of traditional shift-share calculations are performed, comparing each year to the previous year. The annual shift-share effects are then totaled together for the entire study period, resulting in the dynamic model's shift-share effects.
Formula.
The regional change in the variable e within industry i between the two years t and t+n is defined as the sum of the three shift-share effects: national growth effect (NSi), industry mix effect (IMi), and local share effect (RSi).
formula_0
If the study period ranges from year t to year t+n, then traditional shift-share effects are calculated for every year k, where k spans from t+1 to t+n. The dynamic model shift-share effects are then calculated as the sum of the annual effects.
formula_5
formula_6
formula_7
The growth rates used in the calculations are annual rates, not growth from the beginning year in the study period, so the percent change from year k-1 to k in the economic variable nationwide for all industries combined is Gk, while the national and regional industry-specific percent changes are Gik and gik, respectively.
Esteban-Marquillas Model.
In 1972, J.M. Esteban-Marquillas extended the traditional model to address criticism that the regional share effect is correlated to the regional industrial mix. In the Esteban-Marquillas model, the regional share effect itself is decomposed into two components, isolating a regional shift component that is not correlated to the industrial mix. The model introduced a then-new concept to shift-share analyses, a homothetic level of the economic variable within an industry. This is the theoretical value of the variable within an industry assuming the region has the same industrial mix as the nation.
In the Esteban-Marquillas model, the calculations of the national share and industrial mix effects are unchanged. However, the regional share effect in the traditional model is separated into two effects: a new regional share effect that is not dependent on the industrial mix, and an allocation effect that is. The allocation effect indicates the extent to which the region is specialized in those industries where it enjoys a competitive advantage.
Formula.
The regional change in the variable e within industry i between the two years t and t+n is defined as the sum of the four shift-share effects: national growth effect (NSi), industry mix effect (IMi), regional share effect (RSi), and allocation effect (ALi).
formula_8
The beginning and ending values of the economic variable within a particular industry are eit and eit+n, respectively. The beginning value of the regional homothetic variable within a particular industry is hit. It is based on the regional and national values of the economic variable across all industries, et and Et respectively, and the industry-specific national value Eit.
formula_9
Each of the four shift-share effects is defined as a percentage of either the beginning value of the economic variable, the homothetic variable, or the difference of the two.
formula_10
formula_11
formula_12
formula_13
The total percent change in the economic variable nationwide for all industries combined is G, while the national and regional industry-specific percent changes are Gi and gi, respectively.
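A minimal Python sketch of the Esteban-Marquillas decomposition follows (the regional and national totals are made-up illustrative numbers, chosen so that the growth rates of the earlier construction example can be reused); note that the regional share and allocation effects together equal the traditional local share effect.

```python
def esteban_marquillas(e_i, e_total, E_i, E_total, G, G_i, g_i):
    """Esteban-Marquillas shift-share effects for one industry.
    e_i, e_total : regional industry / all-industry values at the start year
    E_i, E_total : national industry / all-industry values at the start year
    G, G_i, g_i  : total national, national industry, regional industry growth rates
    """
    h_i = e_total * E_i / E_total          # homothetic value of the variable
    national_share = e_i * G
    industry_mix = e_i * (G_i - G)
    regional_share = h_i * (g_i - G_i)
    allocation = (e_i - h_i) * (g_i - G_i)
    return national_share, industry_mix, regional_share, allocation

# Made-up numbers: a region with 100,000 construction jobs out of 1,000,000 total,
# in a nation with 5,000,000 construction jobs out of 100,000,000 total.
effects = esteban_marquillas(100_000, 1_000_000, 5_000_000, 100_000_000,
                             G=0.05, G_i=0.08, g_i=-0.02)
print("NS, IM, RS, AL =", [f"{x:+,.0f}" for x in effects])
# regional_share + allocation here equals the traditional local share (-10,000),
# so the four effects still sum to the same -2,000 total change.
print("sum =", f"{sum(effects):+,.0f}")
```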
Arcelus Model.
In 1984, Francisco Arcelus built upon Esteban-Marquillas' use of the homothetic variables and extended the traditional model even further. He used this method to decompose the national share and industrial mix effects into "expected" and "differential" components. The expected component is based on the homothetic level of the variable, and is the effect not attributed to the regional specializations. The differential component is the remaining effect, which is attributable to the regional industrial mix.
Arcelus claimed that, even with the Esteban-Marquillas extension, the regional share effect is still related to the regional industry mix, and that the static model assumes all regional industries operate on a national market basis, focusing too heavily on the export markets and ignoring the local markets. In order to address these issues, Arcelus used a different method for separating the regional share effect, resulting in a "regional growth effect" and a "regional industry mix effect". Both of these are decomposed into expected and differential components using the homothetic variable.
Formula.
The regional change in the variable e within industry i between the two years t and t+n is defined as the sum of the eight shift-share effects: expected national growth effect (NSEi), differential national growth effect (NSDi), expected industry mix effect (IMEi), differential industry mix effect (IMDi), expected regional growth effect (RGEi), differential regional growth effect (RGDi), expected regional industry mix effect (RIEi), and differential regional industry mix effect (RIDi).
formula_14
The eight effects are related to the three traditional shift-share effects from the comparative static model.
formula_15
formula_16
formula_17
The homothetic variable is calculated the same as in the Esteban-Marquillas model. The beginning value of the regional homothetic variable within a particular industry is hit. It is based on the regional and national values of the economic variable across all industries, et and Et respectively, and the industry-specific national value Eit.
formula_9
Each of the eight shift-share effects is defined as a percentage of either the beginning value of the economic variable, the homothetic variable, or the difference of the two.
formula_18
formula_19
formula_20
formula_21
formula_22
formula_23
formula_24
formula_25
The total percent changes in the economic variable nationally and regionally for all industries combined are G and g respectively, while the national and regional industry-specific percent changes are Gi and gi, respectively.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\ne_i^{t+n} - e_i^t = NS_i + IM_i + RS_i\n"
},
{
"math_id": 1,
"text": "\nNS_i = e_i^t \\times G\n"
},
{
"math_id": 2,
"text": "\nIM_i = e_i^t \\times (G_i-G)\n"
},
{
"math_id": 3,
"text": "\nRS_i = e_i^t \\times (g_i-G_i)\n"
},
{
"math_id": 4,
"text": "\ne_i^{t+n} = e_i^t \\times (1+g_i)\n"
},
{
"math_id": 5,
"text": "\nNS_i = \\sum_{k=t+1}^{t+n} \\left[ e_i^{k-1} \\left( G^k \\right) \\right]\n"
},
{
"math_id": 6,
"text": "\nIM_i = \\sum_{k=t+1}^{t+n} \\left[ e_i^{k-1} \\left( G_i^k - G^k \\right) \\right]\n"
},
{
"math_id": 7,
"text": "\nRS_i = \\sum_{k=t+1}^{t+n} \\left[ e_i^{k-1} \\left( g_i^k - G_i^k \\right) \\right]\n"
},
{
"math_id": 8,
"text": "\ne_i^{t+n} - e_i^t = NS_i + IM_i + RS_i + AL_i\n"
},
{
"math_id": 9,
"text": "\nh_i^t = e^t \\times { E_i^t \\over E^t } \n"
},
{
"math_id": 10,
"text": "\nNS_i = e_i^t \\left( G \\right)\n"
},
{
"math_id": 11,
"text": "\nIM_i = e_i^t \\left( G_i-G \\right)\n"
},
{
"math_id": 12,
"text": "\nRS_i = h_i^t \\left( g_i-G_i \\right)\n"
},
{
"math_id": 13,
"text": "\nAL_i = \\left( e_i^t-h_i^t \\right) \\left( g_i-G_i \\right)\n"
},
{
"math_id": 14,
"text": "\ne_i^{t+n} - e_i^t = NSE_i + NSD_i + IME_i + IMD_i + RGE_i + RGD_i + RIE_i + RID_i\n"
},
{
"math_id": 15,
"text": "\nNS_i = NSE_i + NSD_i\n"
},
{
"math_id": 16,
"text": "\nIM_i = IME_i + IMD_i\n"
},
{
"math_id": 17,
"text": "\nRS_i = RGE_i + RGD_i + RIE_i + RID_i\n"
},
{
"math_id": 18,
"text": "\nNSE_i = h_i^t \\times G \n"
},
{
"math_id": 19,
"text": "\nNSD_i = \\left( e_i^t - h_i^t \\right) \\times G\n"
},
{
"math_id": 20,
"text": "\nIME_i = h_i^t \\times \\left( G_i - G \\right) \n"
},
{
"math_id": 21,
"text": "\nIMD_i = \\left( e_i^t - h_i^t \\right) \\times \\left( G_i - G \\right)\n"
},
{
"math_id": 22,
"text": "\nRGE_i = h_i^t \\times \\left( g - G \\right)\n"
},
{
"math_id": 23,
"text": "\nRGD_i = \\left( e_i^t - h_i^t \\right) \\times \\left( g - G \\right)\n"
},
{
"math_id": 24,
"text": "\nRIE_i = h_i^t \\times \\left( g_i - g - G_i + G \\right)\n"
},
{
"math_id": 25,
"text": "\nRID_i = \\left( e_i^t - h_i^t \\right) \\times \\left( g_i - g - G_i + G \\right)\n"
}
] | https://en.wikipedia.org/wiki?curid=1033299 |
10335099 | Jackson integral | In q-analog theory, the Jackson integral is a series in the theory of special functions that expresses the operation inverse to q-differentiation.
The Jackson integral was introduced by Frank Hilton Jackson. For methods of numerical evaluation, see and .
Definition.
Let "f"("x") be a function of a real variable "x". For "a" a real variable, the Jackson integral of "f" is defined by the following series expansion:
formula_0
Consistent with this is the definition for formula_1
formula_2
More generally, if "g"("x") is another function and "D""q""g" denotes its "q"-derivative, we can formally write
formula_3 or
formula_4
giving a "q"-analogue of the Riemann–Stieltjes integral.
Jackson integral as q-antiderivative.
Just as the ordinary antiderivative of a continuous function can be represented by its Riemann integral, it is possible to show that the Jackson integral gives a unique "q"-antiderivative
within a certain class of functions (see ).
Theorem.
Suppose that formula_5 If formula_6 is bounded on the interval formula_7 for some formula_8 then the Jackson integral converges to a function formula_9 on formula_7 which is a "q"-antiderivative of formula_10 Moreover, formula_9 is continuous at formula_11 with formula_12 and is a unique antiderivative of formula_13 in this class of functions. | [
{
"math_id": 0,
"text": " \\int_0^a f(x)\\,{\\rm d}_q x = (1-q)\\,a\\sum_{k=0}^{\\infty}q^k f(q^k a). "
},
{
"math_id": 1,
"text": " a \\to \\infty "
},
{
"math_id": 2,
"text": " \\int_0^\\infty f(x)\\,{\\rm d}_q x = (1-q)\\sum_{k=-\\infty}^{\\infty}q^k f(q^k ). "
},
{
"math_id": 3,
"text": " \\int f(x)\\,D_q g\\,{\\rm d}_q x = (1-q)\\,x\\sum_{k=0}^{\\infty}q^k f(q^k x)\\,D_q g(q^k x) = (1-q)\\,x\\sum_{k=0}^{\\infty}q^k f(q^k x)\\tfrac{g(q^{k}x)-g(q^{k+1}x)}{(1-q)q^k x}, "
},
{
"math_id": 4,
"text": " \\int f(x)\\,{\\rm d}_q g(x) = \\sum_{k=0}^{\\infty} f(q^k x)\\cdot(g(q^{k}x)-g(q^{k+1}x)), "
},
{
"math_id": 5,
"text": "0<q<1."
},
{
"math_id": 6,
"text": "|f(x)x^\\alpha|"
},
{
"math_id": 7,
"text": "[0,A)"
},
{
"math_id": 8,
"text": "0\\leq\\alpha<1, "
},
{
"math_id": 9,
"text": "F(x)"
},
{
"math_id": 10,
"text": "f(x)."
},
{
"math_id": 11,
"text": "x=0"
},
{
"math_id": 12,
"text": "F(0)=0"
},
{
"math_id": 13,
"text": "f(x)"
}
] | https://en.wikipedia.org/wiki?curid=10335099 |
103356 | Automata theory | Study of abstract machines and automata
Figure: Classes of automata.
Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science with close connections to mathematical logic. The word "automata" comes from the Greek word αὐτόματος, which means "self-acting, self-willed, self-moving". An automaton (automata in plural) is an abstract self-propelled computing device which follows a predetermined sequence of operations automatically. An automaton with a finite number of states is called a finite automaton (FA) or finite-state machine (FSM). The figure on the right illustrates a finite-state machine, which is a well-known type of automaton. This automaton consists of states (represented in the figure by circles) and transitions (represented by arrows). As the automaton sees a symbol of input, it makes a transition (or jump) to another state, according to its transition function, which takes the previous state and current input symbol as its arguments.
Automata theory is closely related to formal language theory. In this context, automata are used as finite representations of formal languages that may be infinite. Automata are often classified by the class of formal languages they can recognize, as in the Chomsky hierarchy, which describes a nesting relationship between major classes of automata. Automata play a major role in the theory of computation, compiler construction, artificial intelligence, parsing and formal verification.
History.
The theory of abstract automata was developed in the mid-20th century in connection with finite automata. Automata theory was initially considered a branch of mathematical systems theory, studying the behavior of discrete-parameter systems. Early work in automata theory differed from previous work on systems by using abstract algebra to describe information systems rather than differential calculus to describe material systems. The theory of the finite-state transducer was developed under different names by different research communities. The earlier concept of Turing machine was also included in the discipline along with new forms of infinite-state automata, such as pushdown automata.
1956 saw the publication of "Automata Studies", which collected work by scientists including Claude Shannon, W. Ross Ashby, John von Neumann, Marvin Minsky, Edward F. Moore, and Stephen Cole Kleene. With the publication of this volume, "automata theory emerged as a relatively autonomous discipline". The book included Kleene's description of the set of regular events, or regular languages, and a relatively stable measure of complexity in Turing machine programs by Shannon.
In the same year, Noam Chomsky described the Chomsky hierarchy, a correspondence between automata and formal grammars, and Ross Ashby published "An Introduction to Cybernetics", an accessible textbook explaining automata and information using basic set theory.
The study of linear bounded automata led to the Myhill–Nerode theorem, which gives a necessary and sufficient condition for a formal language to be regular, and an exact count of the number of states in a minimal machine for the language. The pumping lemma for regular languages, also useful in regularity proofs, was proven in this period by Michael O. Rabin and Dana Scott, along with the computational equivalence of deterministic and nondeterministic finite automata.
In the 1960s, a body of algebraic results known as "structure theory" or "algebraic decomposition theory" emerged, which dealt with the realization of sequential machines from smaller machines by interconnection. While any finite automaton can be simulated using a universal gate set, this requires that the simulating circuit contain loops of arbitrary complexity. Structure theory deals with the "loop-free" realizability of machines.
The theory of computational complexity also took shape in the 1960s. By the end of the decade, automata theory came to be seen as "the pure mathematics of computer science".
Automata.
What follows is a general definition of an automaton, which restricts a broader definition of a system to one viewed as acting in discrete time-steps, with its state behavior and outputs defined at each step by unchanging functions of only its state and input.
Informal description.
An automaton "runs" when it is given some sequence of "inputs" in discrete (individual) "time steps" (or just "steps"). An automaton processes one input picked from a set of "symbols" or "letters", which is called an "input alphabet". The symbols received by the automaton as input at any step are a sequence of symbols called "words". An automaton has a set of "states". At each moment during a run of the automaton, the automaton is "in" one of its states. When the automaton receives new input, it moves to another state (or "transitions") based on a "transition function" that takes the previous state and current input symbol as parameters. At the same time, another function called the "output function" produces symbols from the "output alphabet", also according to the previous state and current input symbol. The automaton reads the symbols of the input word and transitions between states until the word is read completely, if it is finite in length, at which point the automaton "halts". A state at which the automaton halts is called the "final state".
To investigate the possible state/input/output sequences in an automaton using formal language theory, a machine can be assigned a "starting state" and a set of "accepting states". Then, depending on whether a run starting from the starting state ends in an accepting state, the automaton can be said to "accept" or "reject" an input sequence. The set of all the words accepted by an automaton is called the "language recognized by the automaton". A familiar example of a machine recognizing a language is an electronic lock, which accepts or rejects attempts to enter the correct code.
An automaton can be represented formally by a quintuple formula_0, where:
* formula_1 is a finite set of "symbols", called the "input alphabet" of the automaton,
* formula_2 is another finite set of symbols, called the "output alphabet" of the automaton,
* formula_3 is a set of "states",
* formula_4 is the "next-state function" or "transition function" formula_5 mapping state-input pairs to successor states,
* formula_6 is the "next-output function" formula_7 mapping state-input pairs to outputs.
If formula_3 is finite, then formula_8 is a finite automaton.
An automaton reads a finite string of symbols formula_9, where formula_10, which is called an "input word". The set of all words is denoted by formula_11.
A sequence of states formula_12, where formula_13 such that formula_14 for formula_15, is a "run" of the automaton on an input formula_16 starting from state formula_17. In other words, at first the automaton is at the start state formula_17, and receives input formula_18. For formula_18 and every following formula_19 in the input string, the automaton picks the next state formula_20 according to the transition function formula_21, until the last symbol formula_22 has been read, leaving the machine in the "final state" of the run, formula_23. Similarly, at each step, the automaton emits an output symbol according to the output function formula_24.
The transition function formula_4 is extended inductively into formula_25 to describe the machine's behavior when fed whole input words. For the empty string formula_26, formula_27 for all states formula_28, and for strings formula_29 where formula_30 is the last symbol and formula_31 is the (possibly empty) rest of the string, formula_32. The output function formula_6 may be extended similarly into formula_33, which gives the complete output of the machine when run on word formula_31 from state formula_28.
In order to study an automaton with the theory of formal languages, an automaton may be considered as an "acceptor", replacing the output alphabet formula_2 and output function formula_6 with
* formula_34, a designated "start state", and
* formula_35, a set of states of formula_3 (i.e. formula_36) called "accept states".
This allows the following to be defined:
A word formula_37 is an "accepting word" for the automaton if formula_38, that is, if after consuming the whole string formula_31 the machine is in an accept state.
The language formula_39 "recognized" by an automaton is the set of all the words that are accepted by the automaton, formula_40.
The recognizable languages are the set of languages that are recognized by some automaton. For "finite automata" the recognizable languages are regular languages. For different types of automata, the recognizable languages are different.
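As an illustration of the acceptor formalism above, the following Python sketch (the function and state names are illustrative, not taken from the text) encodes a transition function formula_4 as a dictionary and applies it symbol by symbol, which is exactly the extended transition function formula_25; the example machine accepts the regular language of binary words containing an even number of 1s.

def accepts(delta, q0, accept_states, word):
    """Run a deterministic finite acceptor on `word` and report acceptance."""
    q = q0
    for symbol in word:
        q = delta[(q, symbol)]  # one application of the transition function per input symbol
    return q in accept_states   # accept iff the final state of the run is an accept state

# Example: alphabet {'0', '1'}, states {'even', 'odd'}, accepting words with an even number of 1s.
delta = {('even', '0'): 'even', ('even', '1'): 'odd',
         ('odd', '0'): 'odd', ('odd', '1'): 'even'}
print(accepts(delta, 'even', {'even'}, '1100'))  # True  (two 1s)
print(accepts(delta, 'even', {'even'}, '1101'))  # False (three 1s)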
Variant definitions of automata.
Automata are defined to study useful machines under mathematical formalism, so the definition of an automaton is open to variation according to the "real world machine" that we want to model using the automaton. Many variations of automata have been studied. The following are some popular variations in the definition of different components of automata.
Different combinations of the above variations produce many classes of automata.
Automata theory studies the properties of various types of automata. For example, the following questions are studied about a given type of automaton.
Automata theory also studies the existence or nonexistence of any effective algorithms to solve problems similar to the following list:
Types of automata.
The following is an incomplete list of types of automata.
Discrete, continuous, and hybrid automata.
Normally automata theory describes the states of abstract machines, but there are discrete automata, analog or continuous automata, and hybrid discrete-continuous automata, which use digital data, analog data or continuous time, and digital "and" analog data, respectively.
Hierarchy in terms of powers.
The following is an incomplete hierarchy in terms of powers of different types of virtual machines. The hierarchy reflects the nested categories of languages the machines are able to accept.
Applications.
Each model in automata theory plays an important role in several applied areas. Finite automata are used in text processing, compilers, and hardware design. Context-free grammars (CFGs) are used in programming languages and artificial intelligence. Originally, CFGs were used in the study of human languages. Cellular automata are used in the field of artificial life, the most famous example being John Conway's Game of Life. Some other examples which could be explained using automata theory in biology include mollusk and pine cone growth and pigmentation patterns. Going further, some scientists advocate the theory that the whole universe is computed by some sort of discrete automaton. The idea originated in the work of Konrad Zuse, and was popularized in America by Edward Fredkin. Automata also appear in the theory of finite fields: the set of irreducible polynomials that can be written as compositions of degree-two polynomials is in fact a regular language.
Another problem for which automata can be used is the induction of regular languages.
Automata simulators.
Automata simulators are pedagogical tools used to teach, learn and research automata theory. An automata simulator takes as input the description of an automaton and then simulates its working for an arbitrary input string. The description of the automaton can be entered in several ways: it can be defined in a symbolic language, its specification may be entered in a predesigned form, or its transition diagram may be drawn by clicking and dragging the mouse. Well-known automata simulators include Turing's World, JFLAP, VAS, TAGS and SimStudio.
Category-theoretic models.
One can define several distinct categories of automata following the automata classification into different types described in the previous section. The mathematical category of deterministic automata, sequential machines or "sequential automata", and Turing machines, with "automata homomorphisms" defining the arrows between automata, is a Cartesian closed category; it has both categorical limits and colimits. An automata homomorphism maps a quintuple of an automaton "A""i" onto the quintuple of another automaton "A""j". Automata homomorphisms can also be considered as "automata transformations" or as semigroup homomorphisms, when the state space, "S", of the automaton is defined as a semigroup "Sg". Monoids are also considered as a suitable setting for automata in monoidal categories.
One could also define a "variable automaton", in the sense of Norbert Wiener in his book on "The Human Use of Human Beings" "via" the endomorphisms formula_41. Then one can show that such variable automata homomorphisms form a mathematical group. In the case of non-deterministic, or other complex kinds of automata, the latter set of endomorphisms may become, however, a "variable automaton groupoid". Therefore, in the most general case, categories of variable automata of any kind are categories of groupoids or groupoid categories. Moreover, the category of reversible automata is then a
2-category, and also a subcategory of the 2-category of groupoids, or the groupoid category.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M = \\langle \\Sigma, \\Gamma, Q, \\delta, \\lambda \\rangle"
},
{
"math_id": 1,
"text": "\\Sigma"
},
{
"math_id": 2,
"text": "\\Gamma"
},
{
"math_id": 3,
"text": "Q"
},
{
"math_id": 4,
"text": "\\delta"
},
{
"math_id": 5,
"text": "\\delta : Q \\times \\Sigma \\to Q"
},
{
"math_id": 6,
"text": "\\lambda"
},
{
"math_id": 7,
"text": "\\lambda : Q \\times \\Sigma \\to \\Gamma"
},
{
"math_id": 8,
"text": "M"
},
{
"math_id": 9,
"text": "a_1a_2...a_n"
},
{
"math_id": 10,
"text": "a_i \\in \\Sigma"
},
{
"math_id": 11,
"text": "\\Sigma^*"
},
{
"math_id": 12,
"text": "q_0,q_1,...,q_n"
},
{
"math_id": 13,
"text": "q_i \\in Q"
},
{
"math_id": 14,
"text": "q_i = \\delta(q_{i-1}, a_i)"
},
{
"math_id": 15,
"text": "0 < i \\le n"
},
{
"math_id": 16,
"text": "a_1a_2...a_n \\in \\Sigma^*"
},
{
"math_id": 17,
"text": "q_0"
},
{
"math_id": 18,
"text": "a_1"
},
{
"math_id": 19,
"text": "a_i"
},
{
"math_id": 20,
"text": "q_i"
},
{
"math_id": 21,
"text": "\\delta(q_{i-1},a_i)"
},
{
"math_id": 22,
"text": "a_n"
},
{
"math_id": 23,
"text": "q_n"
},
{
"math_id": 24,
"text": "\\lambda(q_{i-1},a_i)"
},
{
"math_id": 25,
"text": "\\overline\\delta: Q \\times \\Sigma^* \\to Q"
},
{
"math_id": 26,
"text": "\\varepsilon"
},
{
"math_id": 27,
"text": "\\overline\\delta(q, \\varepsilon) = q"
},
{
"math_id": 28,
"text": "q"
},
{
"math_id": 29,
"text": "wa"
},
{
"math_id": 30,
"text": "a"
},
{
"math_id": 31,
"text": "w"
},
{
"math_id": 32,
"text": "\\overline\\delta(q, wa) = \\delta(\\overline\\delta(q,w),a)"
},
{
"math_id": 33,
"text": "\\overline\\lambda(q,w)"
},
{
"math_id": 34,
"text": "q_0 \\in Q"
},
{
"math_id": 35,
"text": "F"
},
{
"math_id": 36,
"text": "F \\subseteq Q"
},
{
"math_id": 37,
"text": "w = a_1a_2...a_n \\in \\Sigma^*"
},
{
"math_id": 38,
"text": "\\overline\\delta(q_0,w) \\in F"
},
{
"math_id": 39,
"text": "L \\subseteq \\Sigma^*"
},
{
"math_id": 40,
"text": "L = \\{w \\in \\Sigma^* \\ |\\ \\overline\\delta(q_0,w) \\in F\\}"
},
{
"math_id": 41,
"text": "A_{i}\\to A_{i}"
}
] | https://en.wikipedia.org/wiki?curid=103356 |
1033664 | Morava K-theory | Cohomology theory
In stable homotopy theory, a branch of mathematics, Morava K-theory is one of a collection of cohomology theories introduced in algebraic topology by Jack Morava in unpublished preprints in the early 1970s. For every prime number "p" (which is suppressed in the notation), it consists of theories "K"("n") for each nonnegative integer "n", each a ring spectrum in the sense of homotopy theory. published the first account of the theories.
Details.
The theory "K"(0) agrees with singular homology with rational coefficients, whereas "K"(1) is a summand of mod-"p" complex K-theory. The theory "K"("n") has coefficient ring
F"p"["v""n", "v""n"^(−1)]
where "v""n" has degree 2("p""n" − 1). In particular, Morava K-theory is periodic with this period, in much the same way that complex K-theory has period 2.
These theories have several remarkable properties. For example, they satisfy a Künneth isomorphism for arbitrary spaces "X" and "Y":
formula_0 | [
{
"math_id": 0,
"text": "K(n)_*(X \\times Y) \\cong K(n)_*(X) \\otimes_{K(n)_*} K(n)_*(Y)."
}
] | https://en.wikipedia.org/wiki?curid=1033664 |
1033666 | Complex cobordism | In mathematics, complex cobordism is a generalized cohomology theory related to cobordism of manifolds. Its spectrum is denoted by MU. It is an exceptionally powerful cohomology theory, but can be quite hard to compute, so often instead of using it directly one uses some slightly weaker theories derived from it, such as Brown–Peterson cohomology or Morava K-theory, that are easier to compute.
The generalized homology and cohomology complex cobordism theories were introduced by Michael Atiyah (1961) using the Thom spectrum.
Spectrum of complex cobordism.
The complex bordism formula_0 of a space formula_1 is roughly the group of bordism classes of manifolds over formula_1 with a complex linear structure on the stable normal bundle. Complex bordism is a generalized homology theory, corresponding to a spectrum MU that can be described explicitly in terms of Thom spaces as follows.
The space formula_2 is the Thom space of the universal formula_3-plane bundle over the classifying space formula_4 of the unitary group formula_5. The natural inclusion from formula_5 into formula_6 induces a map from the double suspension formula_7 to formula_8. Together these maps give the spectrum formula_9; namely, it is the homotopy colimit of formula_2.
Examples: formula_10 is the sphere spectrum. formula_11 is the desuspension formula_12 of formula_13.
The nilpotence theorem states that, for any ring spectrum formula_14, the kernel of formula_15 consists of nilpotent elements. The theorem implies in particular that, if formula_16 is the sphere spectrum, then for any formula_17, every element of formula_18 is nilpotent (a theorem of Goro Nishida). (Proof: if formula_19 is in formula_20, then formula_19 is a torsion element, but its image in formula_21, the Lazard ring, cannot be a nonzero torsion element since formula_22 is a polynomial ring over the integers. Thus, formula_19 must be in the kernel and is therefore nilpotent.)
Formal group laws.
John Milnor (1960) and Sergei Novikov (1960, 1962) showed that the coefficient ring formula_23 (equal to the complex cobordism of a point, or equivalently the ring of cobordism classes of stably complex manifolds) is a polynomial ring formula_24 on infinitely many generators formula_25 of positive even degrees.
Write formula_26 for infinite dimensional complex projective space, which is the classifying space for complex line bundles, so that tensor product of line bundles induces a map formula_27 A complex orientation on an associative commutative ring spectrum "E" is an element "x" in formula_28 whose restriction to formula_29
is 1, if the latter ring is identified with the coefficient ring of "E". A spectrum "E" with such an element "x" is called a complex oriented ring spectrum.
If "E" is a complex oriented ring spectrum, then
formula_30
formula_31
and formula_32 is a formal group law over the ring formula_33.
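For example (standard facts recalled here only as an illustration, and not drawn from the text above): ordinary cohomology, with its usual complex orientation, gives the additive formal group law, while complex K-theory, with a suitable normalization of its orientation class, gives a multiplicative one,

F_{H\mathbb{Z}}(x,y) = x + y, \qquad F_{KU}(x,y) = x + y + \beta\,xy,

where β denotes the Bott class; the precise form of the multiplicative law depends on the chosen normalization.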
Complex cobordism has a natural complex orientation. Daniel Quillen (1969) showed that there is a natural isomorphism from its coefficient ring to Lazard's universal ring, making the formal group law of complex cobordism into the universal formal group law. In other words, for any formal group law "F" over any commutative ring "R", there is a unique ring homomorphism from MU*(point) to "R" such that "F" is the pullback of the formal group law of complex cobordism.
Brown–Peterson cohomology.
Complex cobordism over the rationals can be reduced to ordinary cohomology over the rationals, so the main interest is in the torsion of complex cobordism. It is often easier to study the torsion one prime at a time by localizing MU at a prime "p"; roughly speaking this means one kills off torsion prime to "p". The localization MU"p" of MU at a prime "p" splits as a sum of suspensions of a simpler cohomology theory called Brown–Peterson cohomology, first described by . In practice one often does calculations with Brown–Peterson cohomology rather than with complex cobordism. Knowledge of the Brown–Peterson cohomologies of a space for all primes "p" is roughly equivalent to knowledge of its complex cobordism.
Conner–Floyd classes.
The ring formula_34 is isomorphic to the formal power series ring formula_35 where the elements "cf""i" are called Conner–Floyd classes. They are the analogues of Chern classes for complex cobordism. They were introduced by .
Similarly formula_36 is isomorphic to the polynomial ring formula_37
Cohomology operations.
The Hopf algebra MU*(MU) is isomorphic to the polynomial algebra R[b1, b2, ...], where R is the reduced bordism ring of a 0-sphere.
The coproduct is given by
formula_38
where the notation ()2"i" means take the piece of degree 2"i". This can be interpreted as follows. The map
formula_39
is a continuous automorphism of the ring of formal power series in "x", and the coproduct of MU*(MU) gives the composition of two such automorphisms.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "MU^*(X)"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "MU(n)"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "BU(n)"
},
{
"math_id": 5,
"text": "U(n)"
},
{
"math_id": 6,
"text": "U(n+1)"
},
{
"math_id": 7,
"text": "\\Sigma^2MU(n)"
},
{
"math_id": 8,
"text": "MU(n+1)"
},
{
"math_id": 9,
"text": "MU"
},
{
"math_id": 10,
"text": "MU(0)"
},
{
"math_id": 11,
"text": "MU(1)"
},
{
"math_id": 12,
"text": "\\Sigma^{\\infty -2} \\mathbb{CP}^\\infty"
},
{
"math_id": 13,
"text": "\\mathbb{CP}^\\infty"
},
{
"math_id": 14,
"text": "R"
},
{
"math_id": 15,
"text": "\\pi_* R \\to \\operatorname{MU}_*(R)"
},
{
"math_id": 16,
"text": "\\mathbb{S}"
},
{
"math_id": 17,
"text": "n>0"
},
{
"math_id": 18,
"text": "\\pi_n \\mathbb{S}"
},
{
"math_id": 19,
"text": "x"
},
{
"math_id": 20,
"text": "\\pi_n S"
},
{
"math_id": 21,
"text": "\\operatorname{MU}_*(\\mathbb{S}) \\simeq L"
},
{
"math_id": 22,
"text": "L"
},
{
"math_id": 23,
"text": "\\pi_*(\\operatorname{MU})"
},
{
"math_id": 24,
"text": "\\Z[x_1,x_2,\\ldots]"
},
{
"math_id": 25,
"text": "x_i \\in \\pi_{2i}(\\operatorname{MU})"
},
{
"math_id": 26,
"text": "\\mathbb{CP}^{\\infty}"
},
{
"math_id": 27,
"text": "\\mu : \\mathbb{CP}^{\\infty} \\times \\mathbb{CP}^{\\infty}\\to \\mathbb{CP}^{\\infty}."
},
{
"math_id": 28,
"text": "E^2(\\mathbb{CP}^{\\infty})"
},
{
"math_id": 29,
"text": "E^2(\\mathbb{CP}^{1})"
},
{
"math_id": 30,
"text": "E^*(\\mathbb{CP}^\\infty) = E^*(\\text{point})[[x]]"
},
{
"math_id": 31,
"text": "E^*(\\mathbb{CP}^\\infty)\\times E^*(\\mathbb{CP}^\\infty) = E^*(\\text{point})[[x\\otimes1, 1\\otimes x]]"
},
{
"math_id": 32,
"text": "\\mu^*(x) \\in E^*(\\text{point})[[x\\otimes 1, 1\\otimes x]]"
},
{
"math_id": 33,
"text": "E^*(\\text{point}) = \\pi^*(E)"
},
{
"math_id": 34,
"text": "\\operatorname{MU}^*(BU)"
},
{
"math_id": 35,
"text": "\\operatorname{MU}^*(\\text{point})[[cf_1, cf_2, \\ldots]]"
},
{
"math_id": 36,
"text": "\\operatorname{MU}_*(BU)"
},
{
"math_id": 37,
"text": "\\operatorname{MU}_*(\\text{point})[[\\beta_1, \\beta_2, \\ldots]]"
},
{
"math_id": 38,
"text": "\\psi(b_k) = \\sum_{i+j=k}(b)_{2i}^{j+1}\\otimes b_j"
},
{
"math_id": 39,
"text": " x\\to x+b_1x^2+b_2x^3+\\cdots"
}
] | https://en.wikipedia.org/wiki?curid=1033666 |
1033697 | Spectrum (topology) | In algebraic topology, a branch of mathematics, a spectrum is an object representing a generalized cohomology theory. Every such cohomology theory is representable, as follows from Brown's representability theorem. This means that, given a cohomology theoryformula_0,there exist spaces formula_1 such that evaluating the cohomology theory in degree formula_2 on a space formula_3 is equivalent to computing the homotopy classes of maps to the space formula_1, that isformula_4.Note there are several different categories of spectra leading to many technical difficulties, but they all determine the same homotopy category, known as the stable homotopy category. This is one of the key points for introducing spectra because they form a natural home for stable homotopy theory.
The definition of a spectrum.
There are many variations of the definition: in general, a "spectrum" is any sequence formula_5 of pointed topological spaces or pointed simplicial sets together with the structure maps formula_6, where formula_7 is the smash product. The smash product of a pointed space formula_3 with a circle is homeomorphic to the reduced suspension of formula_3, denoted formula_8.
The following is due to Frank Adams (1974): a spectrum (or CW-spectrum) is a sequence formula_9 of CW complexes together with inclusions formula_10 of the suspension formula_11 as a subcomplex of formula_12.
For other definitions, see symmetric spectrum and simplicial spectrum.
Homotopy groups of a spectrum.
Among the most important invariants of a spectrum are its homotopy groups. These groups mirror the definition of the stable homotopy groups of spaces, since the structure of the suspension maps is integral in their definition. Given a spectrum formula_13, define the homotopy group formula_14 as the colimit formula_15 where the maps are induced from the composition of the map formula_16 (that is, formula_17 given by functoriality of formula_18) and the structure map formula_19. A spectrum is said to be connective if its formula_20 are zero for negative "k".
Examples.
Eilenberg–MacLane spectrum.
Consider singular cohomology formula_21 with coefficients in an abelian group formula_22. For a CW complex formula_3, the group formula_21 can be identified with the set of homotopy classes of maps from formula_3 to formula_23, the Eilenberg–MacLane space with homotopy concentrated in degree formula_24. We write this as formula_25. Then the corresponding spectrum formula_26 has formula_24-th space formula_23; it is called the Eilenberg–MacLane spectrum of formula_22. Note this construction can be used to embed any ring formula_27 into the category of spectra. This embedding forms the basis of spectral geometry, a model for derived algebraic geometry. One of the important properties of this embedding is the family of isomorphisms formula_28 showing the category of spectra keeps track of the derived information of commutative rings, where the smash product acts as the derived tensor product. Moreover, Eilenberg–MacLane spectra can be used to define theories such as topological Hochschild homology for commutative rings, a more refined theory than classical Hochschild homology.
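For example, the homotopy groups of formula_26 recover the coefficient group: each space in the spectrum has homotopy concentrated in a single degree, so the colimit defining the homotopy groups stabilizes immediately, giving (a standard computation, included here only as an illustration)

\pi_n(HA) \;\cong\; \operatorname{colim}_k \pi_{n+k}\bigl(K(A,k)\bigr) \;\cong\; \begin{cases} A, & n = 0,\\ 0, & n \neq 0. \end{cases}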
Topological complex K-theory.
As a second important example, consider topological K-theory. At least for "X" compact, formula_29 is defined to be the Grothendieck group of the monoid of complex vector bundles on "X". Also, formula_30 is the group corresponding to vector bundles on the suspension of X. Topological K-theory is a generalized cohomology theory, so it gives a spectrum. The zeroth space is formula_31 while the first space is formula_32. Here formula_32 is the infinite unitary group and formula_33 is its classifying space. By Bott periodicity we get formula_34 and formula_35 for all "n", so all the spaces in the topological K-theory spectrum are given by either formula_31 or formula_32. There is a corresponding construction using real vector bundles instead of complex vector bundles, which gives an 8-periodic spectrum.
Sphere spectrum.
One of the quintessential examples of a spectrum is the sphere spectrum formula_36. This is a spectrum whose homotopy groups are given by the stable homotopy groups of spheres, so formula_37. We can write down this spectrum explicitly as formula_38 where formula_39. Note that the smash product gives a product structure on this spectrum: formula_40 induces a ring structure on formula_36. Moreover, if considering the category of symmetric spectra, this forms the initial object, analogous to formula_41 in the category of commutative rings.
Thom spectra.
Another canonical example of spectra comes from the Thom spectra representing various cobordism theories. This includes real (unoriented) cobordism formula_42, complex cobordism formula_43, framed cobordism, spin cobordism formula_44, string cobordism formula_45, and so on. In fact, for any topological group formula_46 there is a Thom spectrum formula_47.
Suspension spectrum.
A spectrum may be constructed out of a space. The suspension spectrum of a space formula_3, denoted formula_48, is the spectrum formula_49 (the structure maps are the identity). For example, the suspension spectrum of the 0-sphere is the sphere spectrum discussed above. The homotopy groups of this spectrum are then the stable homotopy groups of formula_3, so formula_50. The construction of the suspension spectrum implies every space can be considered as a cohomology theory. In fact, it defines a functor formula_51 from the homotopy category of CW complexes to the homotopy category of spectra. The morphisms are given by formula_52 which by the Freudenthal suspension theorem eventually stabilizes. By this we mean formula_53 and formula_54 for some finite integer formula_55. There is an inverse construction formula_56 which takes a spectrum formula_13 and forms a space formula_57, called the infinite loop space of the spectrum. For a CW complex formula_3, formula_58, and this construction comes with an inclusion formula_59 for every formula_24, hence gives a map formula_60 which is injective. Unfortunately, these two structures, with the addition of the smash product, lead to significant complexity in the theory of spectra because there cannot exist a single category of spectra which satisfies a list of five axioms relating these structures. The above adjunction is valid only in the homotopy categories of spaces and spectra, but not always with a specific category of spectra (not the homotopy category).
Ω-spectrum.
An Ω-spectrum is a spectrum such that the adjoint of the structure map (i.e., the map formula_61) is a weak equivalence. The K-theory spectrum of a ring is an example of an Ω-spectrum.
Ring spectrum.
A ring spectrum is a spectrum "X" such that the diagrams that describe ring axioms in terms of smash products commute "up to homotopy" (formula_62 corresponds to the identity.) For example, the spectrum of topological "K"-theory is a ring spectrum. A module spectrum may be defined analogously.
For many more examples, see the list of cohomology theories.
Functions, maps, and homotopies of spectra.
There are three natural categories whose objects are spectra, whose morphisms are the functions, or maps, or homotopy classes defined below.
A function between two spectra "E" and "F" is a sequence of maps from "E""n" to "F""n" that commute with the
maps Σ"E""n" → "E""n"+1 and Σ"F""n" → "F""n"+1.
Given a spectrum formula_63, a subspectrum formula_64 is a sequence of subcomplexes that is also a spectrum. As each "i"-cell in formula_65 suspends to an ("i" + 1)-cell in formula_66, a cofinal subspectrum is a subspectrum for which each cell of the parent spectrum is eventually contained in the subspectrum after a finite number of suspensions. Spectra can then be turned into a category by defining a map of spectra formula_67 to be a function from a cofinal subspectrum formula_46 of formula_13 to formula_68, where two such functions represent the same map if they coincide on some cofinal subspectrum. Intuitively such a map of spectra does not need to be everywhere defined, just "eventually" become defined, and two maps that coincide on a cofinal subspectrum are said to be equivalent.
This gives the category of spectra (and maps), which is a major tool. There is a natural embedding of the category of pointed CW complexes into this category: it takes formula_69 to the "suspension spectrum" in which the "n"th complex is formula_70.
The smash product of a spectrum formula_13 and a pointed complex formula_3 is a spectrum given by formula_71 (associativity of the smash product yields immediately that this is indeed a spectrum). A homotopy of maps between spectra corresponds to a map formula_72, where formula_73 is the disjoint union formula_74 with formula_75 taken to be the basepoint.
The stable homotopy category, or homotopy category of (CW) spectra is defined to be the category whose objects are spectra and whose morphisms are homotopy classes of maps between spectra. Many other definitions of spectrum, some appearing very different, lead to equivalent stable homotopy categories.
Finally, we can define the suspension of a spectrum by formula_76. This translation suspension is invertible, as we can desuspend too, by setting formula_77.
The triangulated homotopy category of spectra.
The stable homotopy category is additive: maps can be added by using a variant of the track addition used to define homotopy groups. Thus homotopy classes from one spectrum to another form an abelian group. Furthermore the stable homotopy category is triangulated (Vogt (1970)), the shift being given by suspension and the distinguished triangles by the mapping cone sequences of spectra
formula_78.
Smash products of spectra.
The smash product of spectra extends the smash product of CW complexes. It makes the stable homotopy category into a monoidal category; in other words it behaves like the (derived) tensor product of abelian groups. A major problem with the smash product is that obvious ways of defining it make it associative and commutative only up to homotopy. Some more recent definitions of spectra, such as symmetric spectra, eliminate this problem, and give a symmetric monoidal structure at the level of maps, before passing to homotopy classes.
The smash product is compatible with the triangulated category structure. In particular the smash product of a distinguished triangle with a spectrum is a distinguished triangle.
Generalized homology and cohomology of spectra.
We can define the (stable) homotopy groups of a spectrum to be those given by
formula_79,
where formula_36 is the sphere spectrum and formula_80 is the set of homotopy classes of maps from formula_3 to formula_81.
We define the generalized homology theory of a spectrum "E" by
formula_82
and define its generalized cohomology theory by
formula_83
Here formula_3 can be a spectrum or (by using its suspension spectrum) a space.
Technical complexities with spectra.
One of the canonical complexities in working with spectra and defining a category of spectra comes from the fact that each of these categories cannot satisfy five seemingly obvious axioms concerning the infinite loop space of a spectrum formula_84, the functor formula_85 sending formula_86, a pair of adjoint functors formula_87, the sphere spectrum, and the smash product formula_7 in both the category of spaces and the category of spectra. If we let formula_88 denote the category of based, compactly generated, weak Hausdorff spaces, and formula_89 denote a category of spectra, the following five axioms can never be satisfied simultaneously by a specific model of spectra:
Because of this, the study of spectra is fractured based upon the model being used. For an overview, see the article cited above.
History.
A version of the concept of a spectrum was introduced in the 1958 doctoral dissertation of Elon Lages Lima. His advisor Edwin Spanier wrote further on the subject in 1959. Spectra were adopted by Michael Atiyah and George W. Whitehead in their work on generalized homology theories in the early 1960s. The 1964 doctoral thesis of J. Michael Boardman gave a workable definition of a category of spectra and of maps (not just homotopy classes) between them, as useful in stable homotopy theory as the category of CW complexes is in the unstable case. (This is essentially the category described above, and it is still used for many purposes: for other accounts, see Adams (1974) or Rainer Vogt (1970).) Important further theoretical advances have, however, been made since 1990, vastly improving the formal properties of spectra. Consequently, much recent literature uses modified definitions of spectrum: see Michael Mandell "et al." (2001) for a unified treatment of these new approaches.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{E}^*:\\text{CW}^{op} \\to \\text{Ab}"
},
{
"math_id": 1,
"text": "E^k"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "\\mathcal{E}^k(X) \\cong \\left[X, E^k\\right]"
},
{
"math_id": 5,
"text": "X_n"
},
{
"math_id": 6,
"text": "S^1 \\wedge X_n \\to X_{n+1}"
},
{
"math_id": 7,
"text": "\\wedge"
},
{
"math_id": 8,
"text": "\\Sigma X"
},
{
"math_id": 9,
"text": "E:= \\{E_n\\}_{n\\in \\mathbb{N}} "
},
{
"math_id": 10,
"text": " \\Sigma E_n \\to E_{n+1} "
},
{
"math_id": 11,
"text": " \\Sigma E_n "
},
{
"math_id": 12,
"text": " E_{n+1} "
},
{
"math_id": 13,
"text": "E"
},
{
"math_id": 14,
"text": "\\pi_n(E)"
},
{
"math_id": 15,
"text": "\\begin{align}\n \\pi_n(E) &= \\lim_{\\to k} \\pi_{n+k}(E_k) \\\\\n &= \\lim_\\to \\left(\\cdots \\to \\pi_{n+k}(E_k) \\to \\pi_{n+k+1}(E_{k+1}) \\to \\cdots\\right)\n\\end{align}"
},
{
"math_id": 16,
"text": "\\Sigma: \\pi_{n+k}(E_n) \\to \\pi_{n+k+1}(\\Sigma E_n)"
},
{
"math_id": 17,
"text": " [S^{n+k}, E_n] \\to [S^{n+k+1}, \\Sigma E_n]"
},
{
"math_id": 18,
"text": "\\Sigma"
},
{
"math_id": 19,
"text": "\\Sigma E_n \\to E_{n+1}"
},
{
"math_id": 20,
"text": "\\pi_k"
},
{
"math_id": 21,
"text": " H^n(X;A) "
},
{
"math_id": 22,
"text": "A"
},
{
"math_id": 23,
"text": "K(A,n)"
},
{
"math_id": 24,
"text": "n"
},
{
"math_id": 25,
"text": "[X,K(A,n)] = H^n(X;A)"
},
{
"math_id": 26,
"text": "HA"
},
{
"math_id": 27,
"text": "R"
},
{
"math_id": 28,
"text": "\\begin{align}\n \\pi_i( H(R/I) \\wedge_R H(R/J) ) &\\cong H_i\\left(R/I\\otimes^{\\mathbf{L}}R/J\\right)\\\\\n &\\cong \\operatorname{Tor}_i^R(R/I,R/J)\n\\end{align}"
},
{
"math_id": 29,
"text": " K^0(X) "
},
{
"math_id": 30,
"text": " K^1(X) "
},
{
"math_id": 31,
"text": " \\mathbb{Z} \\times BU "
},
{
"math_id": 32,
"text": "U"
},
{
"math_id": 33,
"text": "BU"
},
{
"math_id": 34,
"text": " K^{2n}(X) \\cong K^0(X) "
},
{
"math_id": 35,
"text": " K^{2n+1}(X) \\cong K^1(X) "
},
{
"math_id": 36,
"text": "\\mathbb{S}"
},
{
"math_id": 37,
"text": "\\pi_n(\\mathbb{S}) = \\pi_n^{\\mathbb{S}}"
},
{
"math_id": 38,
"text": "\\mathbb{S}_i = S^i"
},
{
"math_id": 39,
"text": "\\mathbb{S}_0 = \\{0, 1\\}"
},
{
"math_id": 40,
"text": "S^n \\wedge S^m \\simeq S^{n+m}"
},
{
"math_id": 41,
"text": "\\mathbb{Z}"
},
{
"math_id": 42,
"text": "MO"
},
{
"math_id": 43,
"text": "MU"
},
{
"math_id": 44,
"text": "MSpin"
},
{
"math_id": 45,
"text": "MString"
},
{
"math_id": 46,
"text": "G"
},
{
"math_id": 47,
"text": "MG"
},
{
"math_id": 48,
"text": "\\Sigma^\\infty X"
},
{
"math_id": 49,
"text": "X_n = S^n \\wedge X"
},
{
"math_id": 50,
"text": "\\pi_n(\\Sigma^\\infty X) = \\pi_n^\\mathbb{S}(X)"
},
{
"math_id": 51,
"text": "\\Sigma^\\infty:h\\text{CW} \\to h\\text{Spectra}"
},
{
"math_id": 52,
"text": "[\\Sigma^\\infty X, \\Sigma^\\infty Y] = \\underset{\\to n}{\\operatorname{colim}{}}[\\Sigma^nX,\\Sigma^nY]"
},
{
"math_id": 53,
"text": "\\left[\\Sigma^N X, \\Sigma^N Y\\right] \\simeq \\left[\\Sigma^{N+1} X, \\Sigma^{N+1} Y\\right] \\simeq \\cdots"
},
{
"math_id": 54,
"text": "\\left[\\Sigma^\\infty X, \\Sigma^\\infty Y\\right] \\simeq \\left[\\Sigma^N X, \\Sigma^N Y\\right]"
},
{
"math_id": 55,
"text": "N"
},
{
"math_id": 56,
"text": "\\Omega^\\infty"
},
{
"math_id": 57,
"text": "\\Omega^\\infty E = \\underset{\\to n}{\\operatorname{colim}{}}\\Omega^n E_n"
},
{
"math_id": 58,
"text": "\\Omega^\\infty\\Sigma^\\infty X = \\underset{\\to}{\\operatorname{colim}{}} \\Omega^n\\Sigma^nX"
},
{
"math_id": 59,
"text": "X \\to \\Omega^n\\Sigma^n X"
},
{
"math_id": 60,
"text": "X \\to \\Omega^\\infty\\Sigma^\\infty X"
},
{
"math_id": 61,
"text": "X_n \\to \\Omega X_{n+1}"
},
{
"math_id": 62,
"text": "S^0 \\to X"
},
{
"math_id": 63,
"text": "E_n"
},
{
"math_id": 64,
"text": "F_n"
},
{
"math_id": 65,
"text": "E_j"
},
{
"math_id": 66,
"text": "E_{j+1}"
},
{
"math_id": 67,
"text": "f: E \\to F"
},
{
"math_id": 68,
"text": "F"
},
{
"math_id": 69,
"text": " Y "
},
{
"math_id": 70,
"text": " \\Sigma^n Y "
},
{
"math_id": 71,
"text": "(E \\wedge X)_n = E_n \\wedge X"
},
{
"math_id": 72,
"text": "(E \\wedge I^+) \\to F"
},
{
"math_id": 73,
"text": "I^+"
},
{
"math_id": 74,
"text": "[0, 1] \\sqcup \\{*\\}"
},
{
"math_id": 75,
"text": "*"
},
{
"math_id": 76,
"text": "(\\Sigma E)_n = E_{n+1}"
},
{
"math_id": 77,
"text": "(\\Sigma^{-1}E)_n = E_{n-1}"
},
{
"math_id": 78,
"text": "X\\rightarrow Y\\rightarrow Y\\cup CX \\rightarrow (Y\\cup CX)\\cup CY \\cong \\Sigma X"
},
{
"math_id": 79,
"text": "\\displaystyle \\pi_n E = [\\Sigma^n \\mathbb{S}, E]"
},
{
"math_id": 80,
"text": "[X, Y]"
},
{
"math_id": 81,
"text": "Y"
},
{
"math_id": 82,
"text": "E_n X = \\pi_n (E \\wedge X) = [\\Sigma^n \\mathbb{S}, E \\wedge X]"
},
{
"math_id": 83,
"text": "\\displaystyle E^n X = [\\Sigma^{-n} X, E]."
},
{
"math_id": 84,
"text": "Q"
},
{
"math_id": 85,
"text": "Q: \\text{Top}_* \\to \\text{Top}_*"
},
{
"math_id": 86,
"text": "QX = \\mathop{\\text{colim}}_{\\to n}\\Omega^n\\Sigma^n X"
},
{
"math_id": 87,
"text": "\\Sigma^\\infty: \\text{Top}_* \\leftrightarrows \\text{Spectra}_* : \\Omega^\\infty"
},
{
"math_id": 88,
"text": "\\text{Top}_*"
},
{
"math_id": 89,
"text": "\\text{Spectra}_*"
},
{
"math_id": 90,
"text": "\\Sigma^\\infty"
},
{
"math_id": 91,
"text": "\\Sigma^\\infty S^0 = \\mathbb{S}"
},
{
"math_id": 92,
"text": "\\phi: \\left(\\Omega^\\infty E\\right) \\wedge \\left(\\Omega^\\infty E'\\right) \\to \\Omega^\\infty\\left(E \\wedge E'\\right)"
},
{
"math_id": 93,
"text": "\\gamma: \\left(\\Sigma^\\infty E\\right) \\wedge \\left(\\Sigma^\\infty E'\\right) \\to \\Sigma^\\infty\\left(E \\wedge E'\\right)"
},
{
"math_id": 94,
"text": "\\theta: \\Omega^\\infty\\Sigma^\\infty X \\to QX"
},
{
"math_id": 95,
"text": "X \\in \\operatorname{Ob}(\\text{Top}_*)"
},
{
"math_id": 96,
"text": "\\begin{matrix}\n X & \\xrightarrow{\\eta} & \\Omega^\\infty\\Sigma^\\infty X \\\\\n \\mathord{=} \\downarrow & & \\downarrow \\theta \\\\\n X & \\xrightarrow{i} & QX\n\\end{matrix}"
},
{
"math_id": 97,
"text": "\\eta"
}
] | https://en.wikipedia.org/wiki?curid=1033697 |
1033865 | Reduction (mathematics) | In mathematics, reduction refers to the rewriting of an expression into a simpler form. For example, the process of rewriting a fraction into one with the smallest whole-number denominator possible (while keeping the numerator a whole number) is called "reducing a fraction". Rewriting a radical (or "root") expression with the smallest possible whole number under the radical symbol is called "reducing a radical". Minimizing the number of radicals that appear underneath other radicals in an expression is called denesting radicals.
Algebra.
In linear algebra, "reduction" refers to applying simple rules to a series of equations or matrices to change them into a simpler form. In the case of matrices, the process involves manipulating either the rows or the columns of the matrix and so is usually referred to as "row-reduction" or "column-reduction", respectively. Often the aim of reduction is to transform a matrix into its "row-reduced echelon form" or "row-echelon form"; this is the goal of Gaussian elimination.
Calculus.
In calculus, "reduction" refers to using the technique of integration by parts to evaluate integrals by reducing them to simpler forms.
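A classical illustration (a standard identity, included here only as an example) is the reduction formula obtained by a single integration by parts:

\int \sin^n x \, dx = -\frac{1}{n}\sin^{n-1}x\,\cos x + \frac{n-1}{n}\int \sin^{n-2}x \, dx, \qquad n \ge 2.

Repeated application lowers the exponent by two each time, eventually leaving an elementary integral.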
Static (Guyan) reduction.
In dynamic analysis, static reduction refers to reducing the number of degrees of freedom. Static reduction can also be used in finite element analysis to refer to simplification of a linear algebraic problem. Since a static reduction requires several inversion steps it is an expensive matrix operation and is prone to some error in the solution. Consider the following system of linear equations in an FEA problem:
formula_0
where "K" and "F" are known and "K", "x" and "F" are divided into submatrices as shown above. If "F"2 contains only zeros, and only "x"1 is desired, "K" can be reduced to yield the following system of equations
formula_1
formula_2 is obtained by writing out the set of equations as follows:
"K"11"x"1 + "K"12"x"2 = "F"1     (1)
"K"21"x"1 + "K"22"x"2 = "F"2 = 0     (2)
Equation (2) can be solved for formula_3 (assuming invertibility of formula_4):
formula_5
And substituting into (1) gives
formula_6
Thus
formula_7
In a similar fashion, any row or column "i" of "F" with a zero value may be eliminated if the corresponding value of "x""i" is not desired. A reduced "K" may be reduced again. As a note, since each reduction requires an inversion, and each inversion is an operation with computational cost "O"("n"3), most large matrices are pre-processed to reduce calculation time.
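The elimination above is straightforward to carry out numerically. The following Python sketch (using NumPy; the block values are invented purely for illustration) forms formula_2 and checks that the reduced system reproduces the "x"1 part of the full solution when "F"2 = 0. A linear solve is used in place of an explicit inverse of formula_4, which is the usual practice.

import numpy as np

# Invented 2x2 blocks of a symmetric stiffness matrix, for illustration only.
K11 = np.array([[10.0, -2.0], [-2.0,  8.0]])
K12 = np.array([[-1.0,  0.0], [ 0.0, -1.0]])
K21 = K12.T
K22 = np.array([[ 6.0, -1.0], [-1.0,  5.0]])
F1  = np.array([1.0, 2.0])          # F2 is taken to be zero

# Static (Guyan) reduction: K11_reduced = K11 - K12 * inv(K22) * K21
K11_reduced = K11 - K12 @ np.linalg.solve(K22, K21)
x1 = np.linalg.solve(K11_reduced, F1)

# Check against the unreduced system with F2 = 0
K = np.block([[K11, K12], [K21, K22]])
F = np.concatenate([F1, np.zeros(2)])
x_full = np.linalg.solve(K, F)
print(np.allclose(x1, x_full[:2]))  # True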
History.
In the 9th century, Persian mathematician Al-Khwarizmi's "Al-Jabr" introduced the fundamental concepts of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation and the cancellation of like terms on opposite sides of the equation. This is the operation which Al-Khwarizmi originally described as "al-jabr". The name "algebra" comes from the "al-jabr" in the title of his book.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{bmatrix}\nK_{11} & K_{12} \\\\\nK_{21} & K_{22}\n\\end{bmatrix}\n\\begin{bmatrix} x_1 \\\\ x_2 \\end{bmatrix}\n=\n\\begin{bmatrix} F_1 \\\\ F_2 \\end{bmatrix}"
},
{
"math_id": 1,
"text": "\\begin{bmatrix}\nK_{11,\\text{reduced}}\n\\end{bmatrix}\\begin{bmatrix}\nx_1\n\\end{bmatrix} = \\begin{bmatrix}\nF_1\n\\end{bmatrix}"
},
{
"math_id": 2,
"text": "K_{11,\\text{reduced}}"
},
{
"math_id": 3,
"text": "x_2"
},
{
"math_id": 4,
"text": "K_{22}"
},
{
"math_id": 5,
"text": "-K_{22}^{-1} K_{21} x_1 = x_2. "
},
{
"math_id": 6,
"text": "K_{11}x_1 - K_{12} K_{22}^{-1} K_{21} x_1 = F_1."
},
{
"math_id": 7,
"text": "K_{11,\\text{reduced}} = K_{11} - K_{12} K_{22}^{-1} K_{21}."
}
] | https://en.wikipedia.org/wiki?curid=1033865 |
10338711 | Shapiro polynomials | In mathematics, the Shapiro polynomials are a sequence of polynomials which were first studied by Harold S. Shapiro in 1951 when considering the magnitude of specific trigonometric sums. In signal processing, the Shapiro polynomials have good autocorrelation properties and their values on the unit circle are small. The first few members of the sequence are:
formula_0
where the second sequence, indicated by "Q", is said to be "complementary" to the first sequence, indicated by "P".
Construction.
The Shapiro polynomials "P""n"("z") may be constructed from the Golay–Rudin–Shapiro sequence "a""n", which equals 1 if the number of pairs of consecutive ones in the binary expansion of "n" is even, and −1 otherwise. Thus "a"0 = 1, "a"1 = 1, "a"2 = 1, "a"3 = −1, etc.
The first Shapiro polynomial "P""n"("z") is the partial sum of order 2^"n" − 1 (where "n" = 0, 1, 2, ...) of the power series
"f"("z") := "a"0 + "a"1 "z" + a2 "z"2 + ...
The Golay–Rudin–Shapiro sequence {"a""n"} has a fractal-like structure – for example, "a""n" = "a"2"n" – which implies that the subsequence ("a"0, "a"2, "a"4, ...) replicates the original sequence {"a""n"}. This in turn leads to remarkable
functional equations satisfied by "f"("z").
The second or complementary Shapiro polynomials "Q""n"("z") may be defined in terms of this sequence, or by the relation "Q""n"("z") = (−1)^"n" "z"^(2^"n"−1) "P""n"(−1/"z"), or by the recursions
formula_1
formula_2
formula_3
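The recursions above translate directly into a short program. The following Python sketch (names are illustrative) builds the coefficient lists of "P""n" and "Q""n", lowest degree first, and reproduces the polynomials listed at the start of the article.

def shapiro(n):
    """Return the coefficient lists of P_n and Q_n, lowest degree first."""
    P, Q = [1], [1]  # P_0 = Q_0 = 1
    for k in range(n):
        # P_{k+1}(z) = P_k(z) + z^(2^k) Q_k(z);  Q_{k+1}(z) = P_k(z) - z^(2^k) Q_k(z).
        # Appending Q's coefficients after P's 2^k coefficients realizes the shift by z^(2^k).
        P, Q = P + Q, P + [-c for c in Q]
    return P, Q

P3, Q3 = shapiro(3)
print(P3)  # [1, 1, 1, -1, 1, 1, -1, 1]    i.e. 1 + z + z^2 - z^3 + z^4 + z^5 - z^6 + z^7
print(Q3)  # [1, 1, 1, -1, -1, -1, 1, -1]  i.e. 1 + z + z^2 - z^3 - z^4 - z^5 + z^6 - z^7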
Properties.
The sequence of complementary polynomials "Q""n" corresponding to the "P""n" is uniquely characterized by the following properties:
The most interesting property of the {"P""n"} is that the absolute value of "P""n"("z") is bounded on the unit circle by the square root of 2^("n" + 1), which is on the order
of the L2 norm of "P""n". Polynomials with coefficients from the set {−1, 1} whose maximum modulus on the unit circle is close to their mean modulus are useful for various applications in communication theory (e.g., antenna design and data compression). Property (iii) shows that ("P", "Q") form a Golay pair.
These polynomials have further properties:
formula_4
formula_5
formula_6
formula_7
formula_8
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\nP_1(x) & {} =1 + x \\\\\nP_2(x) & {} =1 + x + x^2 - x^3 \\\\\nP_3(x) & {} =1 + x + x^2 - x^3 + x^4 + x^5 - x^6 + x^7 \\\\\n... \\\\\nQ_1(x) & {} =1 - x \\\\\nQ_2(x) & {} =1 + x - x^2 + x^3 \\\\\nQ_3(x) & {} =1 + x + x^2 - x^3 - x^4 - x^5 + x^6 - x^7 \\\\\n... \\\\\n\\end{align}\n"
},
{
"math_id": 1,
"text": "P_0(z)=1; ~~ Q_0(z) = 1 ; "
},
{
"math_id": 2,
"text": "P_{n+1}(z) = P_n(z) + z^{2^n} Q_n(z) ; "
},
{
"math_id": 3,
"text": "Q_{n+1}(z) = P_n(z) - z^{2^n} Q_n(z) . "
},
{
"math_id": 4,
"text": " P_{n+1}(z) = P_n(z^2) + z P_n(-z^2) ; \\, "
},
{
"math_id": 5,
"text": " Q_{n+1}(z) = Q_n(z^2) + z Q_n(-z^2) ; \\, "
},
{
"math_id": 6,
"text": "P_n(z) P_n(1/z) + Q_n(z) Q_n(1/z) = 2^{n+1} ; \\, "
},
{
"math_id": 7,
"text": "P_{n+k+1}(z) = P_n(z)P_k(z^{2^{n+1}}) + z^{2^n}Q_n(z)P_k(-z^{2^{n+1}}) ; \\, "
},
{
"math_id": 8,
"text": "P_n(1) = 2^{\\lfloor (n+1)/2 \\rfloor}; {~}{~} P_n(-1) = (1+(-1)^n)2^{\\lfloor n/2 \\rfloor - 1} . \\, "
}
] | https://en.wikipedia.org/wiki?curid=10338711 |
1033877 | Dixon's factorization method | In number theory, Dixon's factorization method (also Dixon's random squares method or Dixon's algorithm) is a general-purpose integer factorization algorithm; it is the prototypical factor base method. Unlike for other factor base methods, its run-time bound comes with a rigorous proof that does not rely on conjectures about the smoothness properties of the values taken by a polynomial.
The algorithm was designed by John D. Dixon, a mathematician at Carleton University, and was published in 1981.
Basic idea.
Dixon's method is based on finding a congruence of squares modulo the integer "N" that is intended to be factored. Fermat's factorization method finds such a congruence by selecting random or pseudo-random "x" values and hoping that the integer "x"2 mod "N" is a perfect square (in the integers):
formula_0
For example, if "N" = 84923, then (by starting at 292, the first number greater than √"N", and counting up) 505^2 mod 84923 is 256, the square of 16. So (505 − 16)(505 + 16) ≡ 0 (mod 84923). Computing the greatest common divisor of 505 − 16 and "N" using Euclid's algorithm gives 163, which is a factor of "N".
In practice, selecting random "x" values will take an impractically long time to find a congruence of squares, since there are only √"N" squares less than "N".
Dixon's method replaces the condition "is the square of an integer" with the much weaker one "has only small prime factors"; for example, there are 292 squares smaller than 84923; 662 numbers smaller than 84923 whose prime factors are only 2,3,5 or 7; and 4767 whose prime factors are all less than 30. (Such numbers are called "B-smooth" with respect to some bound "B".)
If there are many numbers formula_1 whose squares can be factorized as formula_2 for a fixed set formula_3 of small primes, linear algebra modulo 2 on the matrix formula_4 will give a subset of the formula_5 whose squares combine to a product of small primes to an even power — that is, a subset of the formula_5 whose squares multiply to the square of a (hopefully different) number mod N.
Method.
Suppose the composite number "N" is being factored. Bound "B" is chosen, and the "factor base" is identified (which is called "P"), the set of all primes less than or equal to "B". Next, positive integers "z" are sought such that "z"2 mod "N" is "B"-smooth. Therefore we can write, for suitable exponents "ai",
formula_6
When enough of these relations have been generated (it is generally sufficient that the number of relations be a few more than the size of "P"), the methods of linear algebra, such as Gaussian elimination, can be used to multiply together these various relations in such a way that the exponents of the primes on the right-hand side are all even:
formula_7
This yields a congruence of squares of the form "a"2 ≡ "b"2 (mod "N"), which can be turned into a factorization of "N": "N" = gcd("a" + "b", "N") × ("N"/gcd("a" + "b", "N")). This factorization might turn out to be trivial (i.e. "N" = "N" × 1), which can only happen if "a" ≡ ±"b" (mod "N"), in which case another try must be made with a different combination of relations; but if a nontrivial pair of factors of "N" is reached, the algorithm terminates.
Pseudocode.
input: positive integer formula_8
output: non-trivial factor of formula_8
Choose bound formula_9
Let formula_10 be all primes formula_11
repeat
for formula_12 to formula_13 do
Choose formula_14 such that formula_15 is formula_9-smooth
Let formula_16 such that formula_17
end for
Find non-empty formula_18 such that formula_19
Let formula_20
formula_21
while formula_22
return formula_23
Example.
This example will try to factor "N" = 84923 using bound "B" = 7. The factor base is then "P" = {2, 3, 5, 7}. A search can be made for integers between formula_24 and "N" whose squares mod "N" are "B"-smooth. Suppose that two of the numbers found are 513 and 537:
formula_25
formula_26
So
formula_27
Then
formula_28
That is, formula_29
The resulting factorization is 84923 = gcd(20712 − 16800, 84923) × gcd(20712 + 16800, 84923) = 163 × 521.
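The computation above can be reproduced with a short program. The following Python sketch (a simplified illustration of the method, not an efficient implementation; all names are made up) collects "B"-smooth values of "z"2 mod "N" by trial division, finds a subset whose exponent vectors are all even by brute force instead of Gaussian elimination, and recovers the factors of 84923.

from itertools import combinations
from math import gcd, isqrt, prod

def exponents(m, primes):
    """Exponent vector of m over the factor base, or None if m is not smooth."""
    exps = []
    for p in primes:
        e = 0
        while m % p == 0:
            m //= p
            e += 1
        exps.append(e)
    return exps if m == 1 else None

def dixon(N, primes=(2, 3, 5, 7)):
    # Collect a few more smooth relations z^2 mod N than there are primes in the base.
    relations, z = [], isqrt(N) + 1
    while len(relations) < len(primes) + 5:
        vec = exponents(z * z % N, primes)
        if vec:
            relations.append((z, vec))
        z += 1
    # Brute-force search for a subset whose exponent vectors sum to an even vector.
    for r in range(1, len(relations) + 1):
        for subset in combinations(relations, r):
            total = [sum(col) for col in zip(*(vec for _, vec in subset))]
            if all(t % 2 == 0 for t in total):
                x = prod(zi for zi, _ in subset) % N
                y = prod(p ** (t // 2) for p, t in zip(primes, total)) % N
                f = gcd(x + y, N)
                if 1 < f < N:
                    return f, N // f
    return None

print(dixon(84923))  # expected to print a nontrivial factorization, e.g. (163, 521) or (521, 163)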
Optimizations.
The quadratic sieve is an optimization of Dixon's method. It selects values of "x" close to the square root of "N" such that "x"2 modulo "N" is small, thereby greatly increasing the chance of obtaining a smooth number.
Other ways to optimize Dixon's method include using a better algorithm to solve the matrix equation, taking advantage of the sparsity of the matrix: a number "z" cannot have more than formula_30 prime factors, so each row of the matrix is almost all zeros. In practice, the block Lanczos algorithm is often used. Also, the size of the factor base must be chosen carefully: if it is too small, it will be difficult to find numbers that factorize completely over it, and if it is too large, more relations will have to be collected.
A more sophisticated analysis, using the approximation that a number has all its prime factors less than formula_31 with probability about formula_32 (an approximation to the Dickman–de Bruijn function), indicates that choosing too small a factor base is much worse than too large, and that the ideal factor base size is some power of formula_33.
The optimal complexity of Dixon's method is
formula_34
in big-O notation, or
formula_35
in L-notation. | [
{
"math_id": 0,
"text": "x^2\\equiv y^2\\quad(\\hbox{mod }N),\\qquad x\\not\\equiv\\pm y\\quad(\\hbox{mod }N)."
},
{
"math_id": 1,
"text": "a_1 \\ldots a_n"
},
{
"math_id": 2,
"text": "a_i^2 \\mod N = \\prod_{j=1}^m b_j^{e_{ij}}"
},
{
"math_id": 3,
"text": "b_1 \\ldots b_m"
},
{
"math_id": 4,
"text": "e_{ij}"
},
{
"math_id": 5,
"text": "a_i"
},
{
"math_id": 6,
"text": "z^2 \\text{ mod } N = \\prod_{p_i\\in P} p_i^{a_i}"
},
{
"math_id": 7,
"text": "{z_1^2 z_2^2 \\cdots z_k^2 \\equiv \\prod_{p_i\\in P} p_i^{a_{i,1}+a_{i,2}+\\cdots+a_{i,k}}\\ \\pmod{N}\\quad (\\text{where } a_{i,1}+a_{i,2}+\\cdots+a_{i,k} \\equiv 0\\pmod{2}) }"
},
{
"math_id": 8,
"text": "N"
},
{
"math_id": 9,
"text": "B"
},
{
"math_id": 10,
"text": "P := \\{p_1, p_2, \\ldots, p_k\\}"
},
{
"math_id": 11,
"text": "\\leq B"
},
{
"math_id": 12,
"text": "i=1"
},
{
"math_id": 13,
"text": "k+1"
},
{
"math_id": 14,
"text": "0<z_i<N"
},
{
"math_id": 15,
"text": "z_i^2 \\text{ mod } N"
},
{
"math_id": 16,
"text": "a_i := \\{a_{i1}, a_{i2}, \\ldots, a_{ik}\\}"
},
{
"math_id": 17,
"text": "z_i^2 \\text{ mod } N = \\prod_{p_j \\in P} p_j^{a_{ij}}"
},
{
"math_id": 18,
"text": "T \\subseteq \\{1, 2, \\ldots, k+1\\}"
},
{
"math_id": 19,
"text": "\\sum_{i \\in T} a_i \\equiv \\vec{0} \\pmod{2}"
},
{
"math_id": 20,
"text": "x := \\left(\\prod_{i \\in T} z_i\\right) \\text{ mod } N"
},
{
"math_id": 21,
"text": "y := \\left(\\prod_{p_j \\in P} p_j^{\\left(\\sum_{i \\in T} a_{ij}\\right)/2} \\right) \\text{ mod } N"
},
{
"math_id": 22,
"text": "x \\equiv \\pm y \\pmod{N}"
},
{
"math_id": 23,
"text": "\\gcd(x+y, N)"
},
{
"math_id": 24,
"text": "\\left\\lceil\\sqrt{84923} \\right\\rceil = 292"
},
{
"math_id": 25,
"text": "513^2 \\mod 84923 = 8400 = 2^4 \\cdot 3 \\cdot 5^2 \\cdot 7"
},
{
"math_id": 26,
"text": "537^2 \\mod 84923 = 33600 = 2^6 \\cdot 3 \\cdot 5^2 \\cdot 7"
},
{
"math_id": 27,
"text": "(513 \\cdot 537)^2 \\mod 84923 = 2^{10} \\cdot 3^2 \\cdot 5^4 \\cdot 7^2 \\mod 84923"
},
{
"math_id": 28,
"text": "\n\\begin{align}\n& {} (513 \\cdot 537)^2 \\mod 84923 \\\\\n& = (275481)^2 \\mod 84923 \\\\\n& = (84923 \\cdot 3 + 20712)^2 \\mod 84923 \\\\\n& =(84923 \\cdot 3)^2 + 2\\cdot(84923\\cdot 3 \\cdot 20712) + 20712^2 \\mod 84923 \\\\\n& = 0 + 0 + 20712^2 \\mod 84923\n\\end{align}\n"
},
{
"math_id": 29,
"text": "20712^2 \\mod 84923 = (2^5 \\cdot 3 \\cdot 5^2 \\cdot 7)^2 \\mod 84923 = 16800^2 \\mod 84923."
},
{
"math_id": 30,
"text": "\\log_2 z"
},
{
"math_id": 31,
"text": "N^{1/a}"
},
{
"math_id": 32,
"text": "a^{-a}"
},
{
"math_id": 33,
"text": "\\exp\\left(\\sqrt{\\log N \\log \\log N}\\right)"
},
{
"math_id": 34,
"text": "O\\left(\\exp\\left(2 \\sqrt 2 \\sqrt{\\log n \\log \\log n}\\right)\\right)"
},
{
"math_id": 35,
"text": "L_n [1/2, 2 \\sqrt 2]"
}
] | https://en.wikipedia.org/wiki?curid=1033877 |
1033912 | Economic base analysis | Economic base analysis is a theory that posits that activities in an area divide into two categories: basic and nonbasic. Basic industries are those exporting from the region and bringing wealth from outside, while nonbasic (or service) industries support basic industries. Because export-import flows are usually not tracked at sub-national (regional) levels, it is not practical to study industry output and trade flows to and from a region. As an alternative, the concepts of basic and nonbasic are operationalized using employment data. The theory was developed by Robert Murray Haig in his work on the Regional Plan of New York in 1928.
Application of the analysis.
The basic industries of a region are identified by comparing employment in the region to national norms. If the national norm for employment in, for example, Egyptian woodwind manufacturing is 5 percent and the region's employment is 8 percent, then woodwind employment amounting to 3 percent of the region's total employment is counted as basic. Once basic employment is identified, the outlook for basic employment is investigated sector by sector and projections made sector by sector. In turn, this permits the projection of total employment in the region. Typically the basic/nonbasic employment ratio is about 1:1. Extending by manipulation of data and comparisons, conjectures may be made about population and income. This is a rough, serviceable procedure, and it remains in use today. It has the advantage of being readily operationalized, fiddled with, and understandable.
Formula.
The formula for computing location quotients can be written as:
formula_0
Where:
formula_1 Local employment in industry i
formula_2 Total local employment
formula_3 Reference area employment in industry i
formula_4 Total reference area employment
It is assumed that the base year is identical in all of the above variables.
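As a small numerical illustration of the formula (the employment figures below are invented for the example), the location quotient, and the excess employment commonly attributed to the basic sector, can be computed as follows.

def location_quotient(e_i, e, E_i, E):
    """LQ = (e_i / e) / (E_i / E)."""
    return (e_i / e) / (E_i / E)

# Invented figures: 4,000 of the region's 50,000 jobs are in industry i (8 percent),
# versus 500,000 of 10,000,000 jobs nationally (5 percent) -- mirroring the example above.
e_i, e, E_i, E = 4_000, 50_000, 500_000, 10_000_000
lq = location_quotient(e_i, e, E_i, E)
print(round(lq, 2))              # 1.6 -> the region is relatively specialized in industry i

# Employment above the national norm is commonly treated as basic (export) employment.
basic = max(0.0, e_i - e * (E_i / E))
print(basic)                     # 1500.0 jobs, i.e. 3 percent of total regional employment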
Example and methodology.
Economic base ideas can be easy to understand, as are measures made of employment. For instance, it is well known that the economy of Seattle, Washington is tied to aircraft manufacturing, that of Detroit, Michigan, to automobiles, and that of Silicon Valley to high-tech manufacturing. When newspapers discuss the closing of military bases, they may say something like: "5,000 jobs at the base will be lost. That's going to hit the economy hard because it means a loss of 10,000 jobs in the community."
To forecast, the main procedure is to compare the region with the nation and national trends. If the economic base of a region is in industries that are declining nationwide, then the region faces a problem. If its economic base is concentrated in sectors that are growing, then it is in good shape.
Methodologically, economic base analysis views the region as if it were a small nation and uses notions of relative and comparative advantage from international trade theory (Charles Tiebout 1963). In a sense, the activity is macroeconomics "written small", and it has not been of much interest to urban economists in recent years because it does not get at within-city relationships. The analysis usually takes US growth patterns as a given. The fates of regions are determined by trends in the national economy.
Assumptions.
As H. Craig Davis points out, there are a number of assumptions on which economic base analysis is conducted. These include (1) that exports are the sole source of economic growth (investment, government spending, and household consumption are ignored); (2) that the export industry is homogeneous (i.e., that an increase or decrease of one export does not affect another); (3) the constancy of the export/service ratio; (4) that there is no inter-regional feedback; and (5) that there is a pool of underutilized resources. | [
{
"math_id": 0,
"text": "LQ = \\frac{e_i/e}{E_i/E}"
},
{
"math_id": 1,
"text": "e_i ="
},
{
"math_id": 2,
"text": "e ="
},
{
"math_id": 3,
"text": "E_i ="
},
{
"math_id": 4,
"text": "E ="
}
] | https://en.wikipedia.org/wiki?curid=1033912 |
10340791 | Streamline (swimming) | Body form used by swimmers
Streamline form is a swimming technique that is used underwater in every stroke. At the start of a race or on a turn, streamline form is used, usually along with a dolphin kick or flutter kick, to create the least amount of resistance to help the swimmer propel as far as they can. Many factors contribute to the perfect streamline form and mastering this method increases a swimmer's speed. Streamline is one of the key fundamentals to mastering any stroke.
Technique.
The streamline position consists of a person placing hand over hand, fingers over fingers, and raising their arms above their head so the biceps are tucked close to the ears. The belly is sucked back to decrease curvature of the spine in the lower back, and the swimmer's head is brought back to ensure that the neck is in line with the spine. Pinching the shoulder blades together is helpful in aligning the spine to straighten out the back. Legs are straight and feet are pointed. In theory, a perfect, straight line will be made down the backside of a swimmer from their head to their feet. The body should be on a horizontal plane under the water, with the legs kicking straight from the thighs and hips, not the knees. A great deal of flexibility is usually needed to reach the goal of a perfect streamline, particularly flexibility of the shoulders. Kicking in the streamline position underwater can be substantially faster than swimming any of the other aquatic strokes, competitive or otherwise. For this reason, competitive swimmers often try to kick in a streamline position off a wall or the starting block for as long as they can be underwater before coming up for their first stroke. This is why many swimmers spend a lot of time perfecting the form and technique of streamline.
Hydrodynamics and Speed.
There are three main forms of resistance (drag) on a swimmer: friction, form, and wave-making forces. The most detrimental to streamline is the resistance caused by form. Bad form will cause more drag on a body in water (resistance), resulting in more work needing to be done to cover the same amount of distance. The amount of resistance on an object can be determined by the formula,
formula_0
where D is a constant characterizing the fluid's viscosity, p is the density of the water, A is the surface area of the body traveling through the water, and v is the velocity of the body.
Because the velocity is squared, resistance grows rapidly with speed (doubling the velocity quadruples the resistance), which is why it is important to minimize the surface area as much as possible. How small that surface area can be made depends directly on technique. Timing in the transition from the glide to the kick is crucial for a swimmer to keep up their momentum. Switching to the kick too early will cause an increase in resistance. A transition that is performed too late will result in loss of speed due to the decrease in momentum, wasting energy while returning to race speed. With all of these aspects brought together, streamline is the most hydrodynamic position one can assume in the water.
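To make this scaling concrete, the following Python sketch (an editorial illustration, not part of the original article; the drag constant and frontal areas are assumed example values) evaluates the resistance formula above for two hypothetical body positions and two speeds.

```python
# Illustrative sketch of R = 1/2 * D * p * A * v^2; D (drag constant) and the
# frontal areas below are assumptions chosen only to show the quadratic growth
# with velocity and the benefit of a smaller exposed area.

def resistance(drag_constant, density, area, velocity):
    """Resistive force on a body moving through water, R = 1/2*D*p*A*v^2."""
    return 0.5 * drag_constant * density * area * velocity ** 2

RHO_WATER = 1000.0  # kg/m^3, fresh water

for label, area in [("streamlined", 0.07), ("poor form", 0.10)]:  # m^2, assumed
    for v in (1.5, 2.0):  # m/s
        R = resistance(drag_constant=0.3, density=RHO_WATER, area=area, velocity=v)
        print(f"{label:11s}  v={v} m/s  R ≈ {R:.0f} N")
```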
Competition and Rules.
Streamline position is mostly used at the start of the race once a swimmer dives into the water off of the blocks. It is most common for the swimmer to dive into the water head first with their arms above their head and assume the streamline form at entry. The other common occurrence of streamline in a competitive race is after a swimmer completes a flip turn and pushes off of the wall. Once they have completely turned over to the opposing direction, the swimmer will then get into streamline position and push off of the wall to maximize the distance and speed out of the turn. Streamline position is the basis of the spinal axis strokes, backstroke and freestyle, as well. A swimmer will try to maintain a straight back and legs to minimize 'drag' during the stroke.
The Fédération Internationale de Natation (FINA), otherwise known as the International Swimming Federation, has strict rules on how and when streamline may be performed in competition. According to FINA, no swimmer may travel more than 15 meters (16.4 yards) underwater off a start or turn in backstroke, butterfly, and freestyle. In breaststroke, only one complete arm stroke followed by a butterfly kick and a breaststroke kick is allowed. After fifteen meters, the swimmer must break the surface of the water. This rule applies to all races done in compliance with FINA rules, whether short course or long course. Swimmers in a race will usually maintain the streamline form and perform a butterfly kick for the full fifteen meters because there is less resistance underwater than at the surface, where waves create additional drag.
There is no specified distance limit for breaststroke, but since the number of underwater strokes and kicks is regulated, this is a moot point in competitive swimming. It is not hydrodynamic to maintain the position past a certain distance, which is invariably less than the length of a short-course pool.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R=1/2 DpAv^2"
}
] | https://en.wikipedia.org/wiki?curid=10340791 |
10343119 | Alex Wilkie | Alex James Wilkie FRS (born 1948 in Northampton) is a British mathematician known for his contributions to model theory and logic. Previously Reader in Mathematical Logic at the University of Oxford, he was appointed to the Fielden Chair of Pure Mathematics at the University of Manchester in 2007.
Education.
Alex Wilkie attended Aylesbury Grammar School and went on to gain his BSc in mathematics with first-class honours from University College London in 1969, his MSc (in mathematical logic) from the University of London in 1970, and his PhD from Bedford College, University of London, in 1973 under the supervision of Wilfrid Hodges with a dissertation titled "Models of Number Theory".
Career and research.
After his PhD he went on to an appointment as a lecturer in mathematics at Leicester University from 1972 to 1973, then a research fellow at the Open University from 1973 until 1978. He spent two periods as a junior lecturer in mathematics at Oxford University (1978–80 and 1981–82), with a year in between (1980–81) as a visiting assistant professor at Yale University. In 1980 Wilkie solved Tarski's high school algebra problem.
In October 1982 Wilkie was appointed as a research fellow in the department of mathematics at the University of Paris VII, then returned to England the following year to take up a three-year SERC (now EPSRC) advanced research fellowship at the University of Manchester. After two years he was appointed lecturer in the Department of Mathematics. In 1986 he went on to Oxford where he was appointed to the readership in mathematical logic there which had become vacant upon the retirement of Robin Gandy. He remained in this post until appointment to the Fielden Chair at Manchester.
Awards and honours.
Wilkie was elected a Fellow of the Royal Society in 2001. To quote the citation
"Wilkie has combined logical techniques and differential-geometric techniques to establish fundamental Finiteness Theorems for sets definable using the exponential function, and more general Pfaffian functions. The results, going far beyond those obtained by conventional methods, have already had striking applications to Lie groups."
Wilkie received the Carol Karp Prize (the highest award made by the Association for Symbolic Logic, every five years) jointly with Ehud Hrushovski in 1993. He was elected to the Council of the London Mathematical Society in 2007, vice-president of the Association for Symbolic Logic (2006) and president of the Association for Symbolic Logic in 2009. In 2012 he became a fellow of the American Mathematical Society. He received the Karp Prize again in 2013, jointly with Moti Gitik, Ya'acov Peterzil, Jonathan Pila, and Sergei Starchenko. In 2017, Wilkie was awarded the Pólya Prize.
He was an Invited Speaker of the International Congress of Mathematicians in Berkeley in 1986 and in Berlin in 1998.
In 2015, Wilkie held the Gödel Lecture titled "Complex continuations of functions definable in formula_0 with a diophantine application."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{R}_{an, exp}"
}
] | https://en.wikipedia.org/wiki?curid=10343119 |
1034470 | Scattering amplitude | Probability amplitude in quantum scattering theory
In quantum physics, the scattering amplitude is the probability amplitude of the outgoing spherical wave relative to the incoming plane wave in a stationary-state scattering process.
At large distances from the centrally symmetric scattering center, the plane wave is described by the wavefunction
formula_0
where formula_1 is the position vector; formula_2; formula_3 is the incoming plane wave with the wavenumber k along the z axis; formula_4 is the outgoing spherical wave; θ is the scattering angle (angle between the incident and scattered direction); and formula_5 is the scattering amplitude. The dimension of the scattering amplitude is length. The scattering amplitude is a probability amplitude; the differential cross-section as a function of scattering angle is given as its modulus squared,
formula_6
The asymptotic form of the wave function in arbitrary external field takes the form
formula_7
where formula_8 is the direction of incident particles and formula_9 is the direction of scattered particles.
Unitary condition.
When the number of particles is conserved during scattering, this leads to a unitary condition for the scattering amplitude. In the general case, we have
formula_10
The optical theorem follows from here by setting formula_11
In the centrally symmetric field, the unitary condition becomes
formula_12
where formula_13 and formula_14 are the angles between formula_15 and formula_16 and some direction formula_17. This condition puts a constraint on the allowed form of formula_5, i.e., the real and imaginary parts of the scattering amplitude are not independent in this case. For example, if formula_18 in formula_19 is known (say, from a measurement of the cross section), then formula_20 can be determined, so that formula_5 is fixed uniquely up to the alternative formula_21.
Partial wave expansion.
In the partial wave expansion the scattering amplitude is represented as a sum over the partial waves,
formula_22,
where "fℓ" is the partial scattering amplitude and "Pℓ" are the Legendre polynomials. The partial amplitude can be expressed via the partial wave S-matrix element "Sℓ" (formula_23) and the scattering phase shift "δℓ" as
formula_24
Then the total cross section
formula_25,
can be expanded as
formula_26
where "σl" is the partial cross section. The total cross section is also equal to formula_27 due to the optical theorem.
For formula_28, we can write
formula_29
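As a numerical illustration of these relations (a minimal sketch, not from the article; the phase shifts below are arbitrary example values rather than results for any real potential), the partial amplitudes, partial cross sections and the optical theorem can be checked together:

```python
# Partial-wave sketch: f_l = e^{i δ_l} sin(δ_l)/k and σ_l = (4π/k²)(2l+1) sin²δ_l.
# The total cross section is compared with the optical-theorem value (4π/k) Im f(0),
# using P_l(1) = 1 for the forward direction.
import numpy as np

k = 1.0                          # wavenumber (arbitrary units)
deltas = [0.8, 0.4, 0.1, 0.02]   # assumed phase shifts for l = 0, 1, 2, 3

f_l = [np.exp(1j * d) * np.sin(d) / k for d in deltas]
sigma_l = [4 * np.pi * (2 * l + 1) * abs(f) ** 2 for l, f in enumerate(f_l)]
f_forward = sum((2 * l + 1) * f for l, f in enumerate(f_l))  # f(θ=0)

print("total cross section         :", sum(sigma_l))
print("optical theorem (4π/k)Imf(0):", 4 * np.pi / k * f_forward.imag)
```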
X-rays.
The scattering length for X-rays is the Thomson scattering length or classical electron radius, r0.
Neutrons.
The nuclear neutron scattering process involves the coherent neutron scattering length, often described by b.
Quantum mechanical formalism.
A quantum mechanical approach is given by the S matrix formalism.
Measurement.
The scattering amplitude can be determined by the scattering length in the low-energy regime.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\psi(\\mathbf{r}) = e^{ikz} + f(\\theta)\\frac{e^{ikr}}{r} \\;,\n"
},
{
"math_id": 1,
"text": "\\mathbf{r}\\equiv(x,y,z)"
},
{
"math_id": 2,
"text": "r\\equiv|\\mathbf{r}|"
},
{
"math_id": 3,
"text": "e^{ikz}"
},
{
"math_id": 4,
"text": "e^{ikr}/r"
},
{
"math_id": 5,
"text": "f(\\theta)"
},
{
"math_id": 6,
"text": "\nd\\sigma = |f(\\theta)|^2 \\;d\\Omega.\n"
},
{
"math_id": 7,
"text": "\\psi = e^{ikr\\mathbf n\\cdot\\mathbf n'} + f(\\mathbf n,\\mathbf n') \\frac{e^{ikr}}{r}"
},
{
"math_id": 8,
"text": "\\mathbf n"
},
{
"math_id": 9,
"text": "\\mathbf n'"
},
{
"math_id": 10,
"text": "f(\\mathbf{n},\\mathbf{n}') -f^*(\\mathbf{n}',\\mathbf{n})= \\frac{ik}{2\\pi} \\int f(\\mathbf{n},\\mathbf{n}'')f^*(\\mathbf{n},\\mathbf{n}'')\\,d\\Omega''"
},
{
"math_id": 11,
"text": "\\mathbf n=\\mathbf n'."
},
{
"math_id": 12,
"text": "\\mathrm{Im} f(\\theta)=\\frac{k}{4\\pi}\\int f(\\gamma)f(\\gamma')\\,d\\Omega''"
},
{
"math_id": 13,
"text": "\\gamma"
},
{
"math_id": 14,
"text": "\\gamma'"
},
{
"math_id": 15,
"text": "\\mathbf{n}"
},
{
"math_id": 16,
"text": "\\mathbf{n}'"
},
{
"math_id": 17,
"text": "\\mathbf{n}''"
},
{
"math_id": 18,
"text": "|f(\\theta)|"
},
{
"math_id": 19,
"text": "f=|f|e^{2i\\alpha}"
},
{
"math_id": 20,
"text": "\\alpha(\\theta)"
},
{
"math_id": 21,
"text": "f(\\theta)\\rightarrow -f^*(\\theta)"
},
{
"math_id": 22,
"text": "f=\\sum_{\\ell=0}^\\infty (2\\ell+1) f_\\ell P_\\ell(\\cos \\theta)"
},
{
"math_id": 23,
"text": "=e^{2i\\delta_\\ell}"
},
{
"math_id": 24,
"text": "f_\\ell = \\frac{S_\\ell-1}{2ik} = \\frac{e^{2i\\delta_\\ell}-1}{2ik} = \\frac{e^{i\\delta_\\ell} \\sin\\delta_\\ell}{k} = \\frac{1}{k\\cot\\delta_\\ell-ik} \\;."
},
{
"math_id": 25,
"text": "\\sigma = \\int |f(\\theta)|^2d\\Omega "
},
{
"math_id": 26,
"text": "\\sigma = \\sum_{l=0}^\\infty \\sigma_l, \\quad \\text{where} \\quad \\sigma_l = 4\\pi(2l+1)|f_l|^2=\\frac{4\\pi}{k^2}(2l+1)\\sin^2\\delta_l"
},
{
"math_id": 27,
"text": "\\sigma=(4\\pi/k)\\,\\mathrm{Im} f(0)"
},
{
"math_id": 28,
"text": "\\theta\\neq 0"
},
{
"math_id": 29,
"text": "f=\\frac{1}{2ik}\\sum_{\\ell=0}^\\infty (2\\ell+1) e^{2i\\delta_l} P_\\ell(\\cos \\theta)."
}
] | https://en.wikipedia.org/wiki?curid=1034470 |
10346 | Gravitational redshift | Shift of wavelength of a photon to longer wavelength
In physics and general relativity, gravitational redshift (known as Einstein shift in older literature) is the phenomenon that electromagnetic waves or photons travelling out of a gravitational well lose energy. This loss of energy corresponds to a decrease in the wave frequency and increase in the wavelength, known more generally as a "redshift". The opposite effect, in which photons gain energy when travelling into a gravitational well, is known as a gravitational blueshift (a type of "blueshift"). The effect was first described by Einstein in 1907, eight years before his publication of the full theory of relativity.
Gravitational redshift can be interpreted as a consequence of the equivalence principle (that gravity and acceleration are equivalent and the redshift is caused by the Doppler effect) or as a consequence of the mass–energy equivalence and conservation of energy ('falling' photons gain energy), though there are numerous subtleties that complicate a rigorous derivation. A gravitational redshift can also equivalently be interpreted as gravitational time dilation at the source of the radiation: if two oscillators (attached to transmitters producing electromagnetic radiation) are operating at different gravitational potentials, the oscillator at the higher gravitational potential (farther from the attracting body) will tick faster; that is, when observed from the same location, it will have a higher measured frequency than the oscillator at the lower gravitational potential (closer to the attracting body).
To first approximation, gravitational redshift is proportional to the difference in gravitational potential divided by the speed of light squared, formula_0, thus resulting in a very small effect. Light escaping from the surface of the Sun was predicted by Einstein in 1911 to be redshifted by roughly 2 ppm or 2 × 10−6. Navigational signals from GPS satellites orbiting at 20,000 km altitude are perceived blueshifted by approximately 0.5 ppb or 5 × 10−10, corresponding to a (negligible) increase of less than 1 Hz in the frequency of a 1.5 GHz GPS radio signal (however, the accompanying gravitational time dilation affecting the atomic clock in the satellite "is" crucially important for accurate navigation). On the surface of the Earth the gravitational potential is proportional to height, formula_1, and the corresponding redshift is roughly 10−16 (0.1 parts per quadrillion) per meter of change in elevation and/or altitude.
In astronomy, the magnitude of a gravitational redshift is often expressed as the velocity that would create an equivalent shift through the relativistic Doppler effect. In such units, the 2 ppm sunlight redshift corresponds to a 633 m/s receding velocity, roughly of the same magnitude as convective motions in the Sun, thus complicating the measurement. The GPS satellite gravitational blueshift velocity equivalent is less than 0.2 m/s, which is negligible compared to the actual Doppler shift resulting from its orbital velocity. In astronomical objects with strong gravitational fields the redshift can be much greater; for example, light from the surface of a white dwarf is gravitationally redshifted on average by around 50 km/s/c (around 170 ppm).
Observing the gravitational redshift in the Solar System is one of the classical tests of general relativity. Measuring the gravitational redshift to high precision with atomic clocks can serve as a test of Lorentz symmetry and guide searches for dark matter.
Prediction by the equivalence principle and general relativity.
Uniform gravitational field or acceleration.
Einstein's theory of general relativity incorporates the equivalence principle, which can be stated in various different ways. One such statement is that gravitational effects are locally undetectable for a free-falling observer. Therefore, in a laboratory experiment at the surface of the Earth, all gravitational effects should be equivalent to the effects that would have been observed if the laboratory had been accelerating through outer space at "g". One consequence is a gravitational Doppler effect. If a light pulse is emitted at the floor of the laboratory, then a free-falling observer says that by the time it reaches the ceiling, the ceiling has accelerated away from it, and therefore when observed by a detector fixed to the ceiling, it will be observed to have been Doppler shifted toward the red end of the spectrum. This shift, which the free-falling observer considers to be a kinematical Doppler shift, is thought of by the laboratory observer as a gravitational redshift. Such an effect was verified in the 1959 Pound–Rebka experiment. In a case such as this, where the gravitational field is uniform, the change in wavelength is given by
formula_2
where formula_3 is the change in height. Since this prediction arises directly from the equivalence principle, it does not require any of the mathematical apparatus of general relativity, and its verification does not specifically support general relativity over any other theory that incorporates the equivalence principle.
On Earth's surface (or in a spaceship accelerating at 1 g), the gravitational redshift is approximately 1.1 × 10−16, the equivalent of a 3.3 × 10−8 m/s Doppler shift, for every meter of height differential.
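The numbers quoted in this paragraph follow directly from the formula above; a minimal Python check (an editorial illustration, not from the article) for a one-metre height difference:

```python
# z ≈ g·Δy/c² for Δy = 1 m at the Earth's surface.
g = 9.81       # m/s²
c = 2.998e8    # m/s
dy = 1.0       # m

z = g * dy / c**2
print(f"z per metre                 ≈ {z:.2e}")          # ≈ 1.1e-16
print(f"Doppler-equivalent velocity ≈ {z * c:.2e} m/s")  # ≈ 3.3e-8 m/s
```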
Spherically symmetric gravitational field.
When the field is not uniform, the simplest and most useful case to consider is that of a spherically symmetric field. By Birkhoff's theorem, such a field is described in general relativity by the Schwarzschild metric, formula_4, where formula_5 is the clock time of an observer at distance "R" from the center, formula_6 is the time measured by an observer at infinity, formula_7 is the Schwarzschild radius formula_8, "..." represents terms that vanish if the observer is at rest, formula_9 is Newton's gravitational constant, formula_10 the mass of the gravitating body, and formula_11 the speed of light. The result is that frequencies and wavelengths are shifted according to the ratio
formula_12
where
This can be related to the redshift parameter conventionally defined as formula_16.
In the case where neither the emitter nor the observer is at infinity, the transitivity of Doppler shifts allows us to generalize the result to formula_17. The redshift formula for the frequency formula_18 is formula_19. When formula_20 is small, these results are consistent with the equation given above based on the equivalence principle.
The redshift ratio may also be expressed in terms of a (Newtonian) escape velocity formula_21 at formula_22, resulting in the corresponding Lorentz factor:
formula_23.
For an object compact enough to have an event horizon, the redshift is not defined for photons emitted inside the Schwarzschild radius, both because signals cannot escape from inside the horizon and because an object such as the emitter cannot be stationary inside the horizon, as was assumed above. Therefore, this formula only applies when formula_15 is larger than formula_7. When the photon is emitted at a distance equal to the Schwarzschild radius, the redshift will be "infinitely" large, and it will not escape to "any" finite distance from the Schwarzschild sphere. When the photon is emitted at an infinitely large distance, there is no redshift.
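As an illustration (an editorial sketch, not from the article; standard solar constants are used), the exact ratio above can be evaluated for light emitted at the Sun's surface and observed at infinity, reproducing the roughly 2 ppm figure mentioned earlier:

```python
# 1 + z = (1 - r_S/R_e)^(-1/2) for the Sun.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg
R_sun = 6.957e8    # m

r_s = 2 * G * M_sun / c**2            # Schwarzschild radius ≈ 2.95 km
z = (1 - r_s / R_sun) ** -0.5 - 1
print(f"r_S ≈ {r_s / 1e3:.2f} km")
print(f"z   ≈ {z:.2e}  (≈ 2 ppm)")
print(f"Doppler-equivalent velocity ≈ {z * c:.0f} m/s")  # ≈ 6.4e2 m/s, close to the 633 m/s quoted above
```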
Newtonian limit.
In the Newtonian limit, i.e. when formula_15 is sufficiently large compared to the Schwarzschild radius formula_7, the redshift can be approximated as
formula_24
where formula_25 is the gravitational acceleration at formula_15. For Earth's surface with respect to infinity, "z" is approximately 7 × 10−10 (the equivalent of a 0.2 m/s radial Doppler shift); for the Moon it is approximately 3 × 10−11 (about 1 cm/s). The value for the surface of the Sun is about 2 × 10−6, corresponding to 0.64 km/s. (For non-relativistic velocities, the radial Doppler equivalent velocity can be approximated by multiplying "z" with the speed of light.)
The z-value can be expressed succinctly in terms of the escape velocity at formula_15, since the gravitational potential is equal to half the square of the escape velocity, thus:
formula_26
where formula_21 is the escape velocity at formula_15.
It can also be related to the circular orbit velocity formula_27 at formula_15, which equals formula_28, thus
formula_29.
For example, the gravitational blueshift of distant starlight due to the Sun's gravity, which the Earth is orbiting at about 30 km/s, would be approximately 1 × 10−8 or the equivalent of a 3 m/s radial Doppler shift.
For an object in a (circular) orbit, the gravitational redshift is of comparable magnitude as the transverse Doppler effect, formula_30 where "β"="v"/"c", while both are much smaller than the radial Doppler effect, for which formula_31.
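A short numerical comparison (an editorial sketch, not from the article; it uses the Earth's ~30 km/s orbital speed around the Sun) of the gravitational redshift at an orbital radius and the transverse Doppler term:

```python
# z_grav ≈ (v_o/c)² at the orbital radius; z_transverse ≈ ½β² for the orbiting body.
c = 2.998e8        # m/s
v_orbit = 2.98e4   # m/s, Earth's orbital speed around the Sun

beta = v_orbit / c
z_grav = beta ** 2
z_transverse = 0.5 * beta ** 2

print(f"z_grav       ≈ {z_grav:.1e}  (~{z_grav * c:.1f} m/s equivalent)")  # ≈ 1e-8, ~3 m/s
print(f"z_transverse ≈ {z_transverse:.1e}  (half the gravitational term)")
```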
Prediction of the Newtonian limit using the properties of photons.
The formula for the gravitational red shift in the Newtonian limit can also be derived using the properties of a photon:
In a gravitational field formula_32, a particle of mass formula_33 and velocity formula_34 changes its energy formula_35 according to:
formula_36.
For a massless photon described by its energy formula_37 and momentum formula_38 this equation becomes after dividing by Planck's constant formula_39:
formula_40
Inserting the gravitational field of a spherical body of mass formula_10 at the position formula_41,
formula_42
and the wave vector of a photon leaving the gravitational field in radial direction
formula_43
the energy equation becomes
formula_44
Using formula_45, an ordinary differential equation that depends only on the radial distance formula_46 is obtained:
formula_47
For a photon starting at the surface of a spherical body of radius formula_48 with a frequency formula_49, the analytical solution is:
formula_50
At a large distance from the body, formula_51, an observer measures the frequency:
formula_52
Therefore, the redshift is:
formula_53
In the linear approximation
formula_54
the Newtonian limit for the gravitational redshift of general relativity is obtained.
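The ordinary differential equation above can also be integrated numerically and compared against the analytic solution; the sketch below (an editorial illustration, not from the article; the emitted frequency is an arbitrary visible-light value) does this for a photon leaving the Sun's surface.

```python
# Numerical check of dω/dr = -(GM/c²) ω/r² against ω(r) = ω0·exp(-(GM/c²)(1/R_e - 1/r)).
import numpy as np
from scipy.integrate import solve_ivp

G, c = 6.674e-11, 2.998e8
M, R_e = 1.989e30, 6.957e8           # Sun's mass and radius
omega0 = 2 * np.pi * 5e14            # assumed emitted angular frequency (visible light)
k = G * M / c**2

def rhs(r, omega):
    return -k * omega / r**2

r_far = 1.496e11                     # 1 au, taken as the "large distance"
sol = solve_ivp(rhs, (R_e, r_far), [omega0], rtol=1e-12)

omega_numeric = sol.y[0, -1]
omega_analytic = omega0 * np.exp(-k * (1 / R_e - 1 / r_far))
print(f"numeric/analytic - 1 ≈ {omega_numeric / omega_analytic - 1:.1e}")
print(f"redshift z ≈ {(omega0 - omega_numeric) / omega_numeric:.2e}")  # ≈ 2e-6 for the Sun
```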
Experimental verification.
Astronomical observations.
A number of experimenters initially claimed to have identified the effect using astronomical measurements, and the effect was considered to have been finally identified in the spectral lines of the star Sirius B by W.S. Adams in 1925. However, measurements by Adams have been criticized as being too low and these observations are now considered to be measurements of spectra that are unusable because of scattered light from the primary, Sirius A. The first accurate measurement of the gravitational redshift of a white dwarf was done by Popper in 1954, measuring a 21 km/s gravitational redshift of 40 Eridani B. The redshift of Sirius B was finally measured by Greenstein "et al." in 1971, obtaining the value for the gravitational redshift of 89±16 km/s, with more accurate measurements by the Hubble Space Telescope, showing 80.4±4.8 km/s.
James W. Brault, a graduate student of Robert Dicke at Princeton University, measured the gravitational redshift of the sun using optical methods in 1962. In 2020, a team of scientists published the most accurate measurement of the solar gravitational redshift so far, made by analyzing Fe spectral lines in sunlight reflected by the Moon; their measurement of a mean global 638 ± 6 m/s lineshift is in agreement with the theoretical value of 633.1 m/s. Measuring the solar redshift is complicated by the Doppler shift caused by the motion of the Sun's surface, which is of similar magnitude as the gravitational effect.
In 2011, the group of Radek Wojtak of the Niels Bohr Institute at the University of Copenhagen collected data from 8000 galaxy clusters and found that the light coming from the cluster centers tended to be red-shifted compared to the cluster edges, confirming the energy loss due to gravity.
In 2018, the star S2 made its closest approach to Sgr A*, the 4-million solar mass supermassive black hole at the centre of the Milky Way, reaching 7650 km/s or about 2.5% of the speed of light while passing the black hole at a distance of just 120 AU, or 1400 Schwarzschild radii. Independent analyses by the GRAVITY collaboration (led by Reinhard Genzel) and the KECK/UCLA Galactic Center Group (led by Andrea Ghez) revealed a combined transverse Doppler and gravitational redshift up to 200 km/s/c, in agreement with general relativity predictions.
In 2021, Mediavilla (IAC, Spain) & Jiménez-Vicente (UGR, Spain) were able to use measurements of the gravitational redshift in quasars up to cosmological redshift of z~3 to confirm the predictions of Einstein's Equivalence Principle and the lack of cosmological evolution within 13%.
In 2024, Padilla et al. have estimated the gravitational redshifts of supermassive black holes (SMBH) in eight thousand quasars and one hundred Seyfert type 1 galaxies from the full width at half maximum (FWHM) of their emission lines, finding log z ≈ -4, compatible with SMBHs of ~ 1 billion solar masses and broadline regions of ~ 1 parsec radius. This same gravitational redshift was directly measured by these authors in the SAMI sample of LINER galaxies, using the redshift differences between lines emitted in central and outer regions.
Terrestrial tests.
The effect is now considered to have been definitively verified by the experiments of Pound, Rebka and Snider between 1959 and 1965. The Pound–Rebka experiment of 1959 measured the gravitational redshift in spectral lines using a terrestrial 57Fe gamma source over a vertical height of 22.5 metres. This paper was the first determination of the gravitational redshift which used measurements of the change in wavelength of gamma-ray photons generated with the Mössbauer effect, which generates radiation with a very narrow line width. The accuracy of the gamma-ray measurements was typically 1%.
An improved experiment was done by Pound and Snider in 1965, with an accuracy better than the 1% level.
A very accurate gravitational redshift experiment was performed in 1976, where a hydrogen maser clock on a rocket was launched to a height of 10,000 km, and its rate compared with an identical clock on the ground. It tested the gravitational redshift to 0.007%.
Later tests can be done with the Global Positioning System (GPS), which must account for the gravitational redshift in its timing system, and physicists have analyzed timing data from the GPS to confirm other tests. When the first satellite was launched, it showed the predicted shift of 38 microseconds per day. This rate of the discrepancy is sufficient to substantially impair the function of GPS within hours if not accounted for. An excellent account of the role played by general relativity in the design of GPS can be found in Ashby 2003.
In 2010, an experiment placed two aluminum-ion quantum clocks close to each other, but with the second elevated 33 cm compared to the first, making the gravitational red shift effect visible in everyday lab scales.
In 2020, a group at the University of Tokyo measured the gravitational redshift of two strontium-87 optical lattice clocks. The measurement took place at Tokyo Skytree where the clocks were separated by approximately 450 m and connected by telecom fibers. The gravitational redshift can be expressed as
formula_55,
where formula_56 is the gravitational redshift, formula_57 is the optical clock transition frequency, formula_58 is the difference in gravitational potential, and formula_59 denotes the deviation from general relativity. By Ramsey spectroscopy of the strontium-87 optical clock transition (429 THz, 698 nm), the group determined the gravitational redshift between the two optical clocks to be 21.18 Hz, corresponding to a "z"-value of approximately 5 × 10−14. Their measured value of formula_59, formula_60, is in agreement with recent measurements made with hydrogen masers in elliptical orbits.
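A back-of-envelope check (an editorial sketch, not from the paper) reproduces the order of magnitude of this result from the ~450 m height difference and the 429 THz clock frequency:

```python
# With α = 0, z = Δν/ν₁ = ΔU/c² ≈ g·Δh/c².
g = 9.80        # m/s²
c = 2.998e8     # m/s
dh = 450.0      # m, approximate separation of the two clocks
nu = 429e12     # Hz, strontium-87 clock transition

z = g * dh / c**2
print(f"z  ≈ {z:.1e}")           # ≈ 5e-14, as stated above
print(f"Δν ≈ {nu * z:.1f} Hz")   # ≈ 21 Hz, consistent with the measured 21.18 Hz
```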
In October 2021, a group at JILA led by physicist Jun Ye reported a measurement of gravitational redshift in the submillimeter scale. The measurement is done on the 87Sr clock transition between the top and the bottom of a millimeter-tall ultracold cloud of 100,000 strontium atoms in an optical lattice.
Early historical development of the theory.
The gravitational weakening of light from high-gravity stars was predicted by John Michell in 1783 and Pierre-Simon Laplace in 1796, using Isaac Newton's concept of light corpuscles (see: emission theory); they predicted that some stars would have a gravity so strong that light would not be able to escape. The effect of gravity on light was then explored by Johann Georg von Soldner (1801), who calculated the amount of deflection of a light ray by the Sun, arriving at the Newtonian answer, which is half the value predicted by general relativity. All of this early work assumed that light could slow down and fall, which is inconsistent with the modern understanding of light waves.
Once it became accepted that light was an electromagnetic wave, it was clear that the frequency of light should not change from place to place, since waves from a source with a fixed frequency keep the same frequency everywhere. One way around this conclusion would be if time itself were altered – if clocks at different points had different rates. This was precisely Einstein's conclusion in 1911. He considered an accelerating box, and noted that according to the special theory of relativity, the clock rate at the "bottom" of the box (the side away from the direction of acceleration) was slower than the clock rate at the "top" (the side toward the direction of acceleration). Indeed, in a frame moving (in formula_61 direction) with velocity formula_62 relative to the rest frame, the clocks at a nearby position formula_63 are ahead by formula_64 (to the first order); so an acceleration formula_25 (that changes speed by formula_65 per time formula_6) makes clocks at the position formula_63 to be ahead by formula_66, that is, tick at a rate
formula_67
The equivalence principle implies that this change in clock rate is the same whether the acceleration formula_25 is that of an accelerated frame without gravitational effects, or caused by a gravitational field in a stationary frame. Since acceleration due to gravitational potential formula_68 is formula_69, we get
formula_70
so – in weak fields – the change formula_71 in the clock rate is equal to formula_72.
Since the light would be slowed down by gravitational time dilation (as seen by an outside observer), the regions with lower gravitational potential would act like a medium with higher refractive index, causing light to deflect. This reasoning allowed Einstein in 1911 to reproduce the incorrect Newtonian value for the deflection of light. At the time he only considered the time-dilating manifestation of gravity, which is the dominating contribution at non-relativistic speeds; however, relativistic objects travel through space a comparable amount as they do through time, so purely spatial curvature becomes just as important. After constructing the full theory of general relativity, Einstein solved in 1915 the full post-Newtonian approximation for the Sun's gravity and calculated the correct amount of light deflection – double the Newtonian value. Einstein's prediction was confirmed by many experiments, starting with Arthur Eddington's 1919 solar eclipse expedition.
The changing rates of clocks allowed Einstein to conclude that light waves change frequency as they move, and the frequency/energy relationship for photons allowed him to see that this was best interpreted as the effect of the gravitational field on the mass–energy of the photon. To calculate the changes in frequency in a nearly static gravitational field, only the time component of the metric tensor is important, and the lowest order approximation is accurate enough for ordinary stars and planets, which are much bigger than their Schwarzschild radius.
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "z = \\Delta U / c^2"
},
{
"math_id": 1,
"text": "\\Delta U = g \\Delta h"
},
{
"math_id": 2,
"text": "z = \\frac{\\Delta\\lambda}{\\lambda}\\approx \\frac{g\\Delta y}{c^2},"
},
{
"math_id": 3,
"text": "<math>\\Delta y</Math>"
},
{
"math_id": 4,
"text": "d\\tau^2 = \\left(1 - r_\\text{S}/R\\right)dt^2 + \\ldots"
},
{
"math_id": 5,
"text": "d\\tau"
},
{
"math_id": 6,
"text": "dt"
},
{
"math_id": 7,
"text": "r_\\text{S}"
},
{
"math_id": 8,
"text": "2GM/c^2"
},
{
"math_id": 9,
"text": "G"
},
{
"math_id": 10,
"text": "M"
},
{
"math_id": 11,
"text": "c"
},
{
"math_id": 12,
"text": "1 + z = \\frac{\\lambda_\\infty}{\\lambda_\\text{e}} = \\left(1 - \\frac{r_\\text{S}}{R_\\text{e}}\\right)^{-\\frac{1}{2}}"
},
{
"math_id": 13,
"text": "\\lambda_\\infty\\,"
},
{
"math_id": 14,
"text": "\\lambda_\\text{e}\\,"
},
{
"math_id": 15,
"text": "R_\\text{e}"
},
{
"math_id": 16,
"text": "z = \\lambda_\\infty/\\lambda_\\text{e} - 1"
},
{
"math_id": 17,
"text": "\\lambda_1/\\lambda_2 = \\left[\\left(1 - r_\\text{S}/R_1\\right)/\\left(1 - r_\\text{S}/R_2\\right)\\right]^{1/2}"
},
{
"math_id": 18,
"text": "\\nu = c/\\lambda"
},
{
"math_id": 19,
"text": "\\nu_o/\\nu_\\text{e} = \\lambda_\\text{e}/\\lambda_o"
},
{
"math_id": 20,
"text": "R_1 - R_2"
},
{
"math_id": 21,
"text": "v_\\text{e}"
},
{
"math_id": 22,
"text": "R_\\text{e} = 2GM/v_\\text{e}^2"
},
{
"math_id": 23,
"text": "1 + z = \\gamma_\\text{e} = \\frac{1}{\\sqrt{1 - (v_\\text{e}/c)^2}}"
},
{
"math_id": 24,
"text": "z = \\frac{\\Delta\\lambda}{\\lambda} \\approx \\frac{1}{2}\\frac{r_\\text{S}}{R_\\text{e}} = \\frac{GM}{R_\\text{e} c^2} = \\frac{g R_\\text{e}}{c^2}"
},
{
"math_id": 25,
"text": "g"
},
{
"math_id": 26,
"text": "z \\approx \\frac{1}{2}\\left( \\frac{v_\\text{e}}{c} \\right)^2"
},
{
"math_id": 27,
"text": "v_\\text{o}"
},
{
"math_id": 28,
"text": "v_\\text{e}/\\sqrt{2}"
},
{
"math_id": 29,
"text": "z \\approx \\left( \\frac{v_\\text{o}}{c} \\right)^2"
},
{
"math_id": 30,
"text": "z \\approx \\tfrac{1}{2} \\beta^2"
},
{
"math_id": 31,
"text": "z \\approx \\beta"
},
{
"math_id": 32,
"text": "\\vec{g}"
},
{
"math_id": 33,
"text": "m"
},
{
"math_id": 34,
"text": "\\vec{v}"
},
{
"math_id": 35,
"text": "E"
},
{
"math_id": 36,
"text": "\\frac{\\mathrm dE}{\\mathrm dt} = m \\vec{g}\\cdot \\vec{v} = \\vec{g}\\cdot\\vec{p}"
},
{
"math_id": 37,
"text": "E = h \\nu = \\hbar \\omega"
},
{
"math_id": 38,
"text": "\\vec{p} = \\hbar\\vec{k}"
},
{
"math_id": 39,
"text": "\\hbar"
},
{
"math_id": 40,
"text": "\\frac{\\mathrm d \\omega}{\\mathrm dt} = \\vec{g}\\cdot \\vec{k}"
},
{
"math_id": 41,
"text": "\\vec{r}"
},
{
"math_id": 42,
"text": "\\vec{g} = -G M \\frac{\\vec{r}}{r^3}"
},
{
"math_id": 43,
"text": "\\vec{k} = \\frac{\\omega}{c} \\frac{\\vec{r}}{r}"
},
{
"math_id": 44,
"text": "\\frac{\\mathrm d \\omega}{\\mathrm dt} = -\\frac{G M}{c} \\frac{\\omega}{r^2}."
},
{
"math_id": 45,
"text": "\\mathrm dr = c \\,\\mathrm dt"
},
{
"math_id": 46,
"text": "r"
},
{
"math_id": 47,
"text": "\\frac{\\mathrm d \\omega}{\\mathrm dr} = -\\frac{G M}{c^2} \\frac{\\omega}{r^2} "
},
{
"math_id": 48,
"text": "R_e"
},
{
"math_id": 49,
"text": "\\omega_0 = 2 \\pi \\nu_0"
},
{
"math_id": 50,
"text": "\\frac{\\mathrm d \\omega}{\\mathrm dr} = -\\frac{G M}{c^2} \\frac{\\omega}{r^2} \\quad \\Rightarrow \\quad \\omega(r) = \\omega_0 \\exp \\left ( -\\frac{G M}{c^2} \\left( \\frac{1}{R_e} - \\frac{1}{r} \\right) \\right) "
},
{
"math_id": 51,
"text": "r \\rightarrow \\infty"
},
{
"math_id": 52,
"text": "\\omega_\\text{obs} = \\omega_0 \\exp \\left ( -\\frac{G M}{c^2} \\left( \\frac{1}{R_e} \\right) \\right) \\simeq \\omega_0 \\left( 1 - \\frac{G M}{R_e c^2} + \\frac{1}{2} \\frac{G^2 M^2}{R_e^2 c^4} - \\ldots \\right). "
},
{
"math_id": 53,
"text": " z = \\frac{\\omega_0 - \\omega_\\text{obs}}{\\omega_\\text{obs}}\n= \\frac{1 - \\exp \\left( -\\frac{G M}{R_e c^2} \\right)}{\\exp \\left( -\\frac{G M}{R_e c^2} \\right)} = \\frac{1 - \\exp \\left( -\\frac{r_S}{2 R_e} \\right)}{\\exp \\left( -\\frac{r_S}{2 R_e} \\right)} "
},
{
"math_id": 54,
"text": "z = \\frac{ \\frac{G M}{R_e c^2} - \\frac{1}{2} \\frac{G^2 M^2}{R_e^2 c^4} + \\dots}{ 1 - \\frac{G M}{R_e c^2} + \\frac{1}{2} \\frac{G^2 M^2}{R_e^2 c^4} - \\ldots } \\simeq \\frac{ \\frac{G M}{R_e c^2} }{ 1 - \\frac{G M}{R_e c^2} + \\frac{1}{2} \\frac{G^2 M^2}{R_e^2 c^4} - \\dots} \\simeq \\frac{G M}{c^2 R_e} "
},
{
"math_id": 55,
"text": " z = \\frac{\\Delta\\nu}{\\nu_{1}} = (1+\\alpha)\\frac{\\Delta U}{c^2} "
},
{
"math_id": 56,
"text": "\\Delta\\nu=\\nu_{2}-\\nu_{1}"
},
{
"math_id": 57,
"text": "\\nu_{1}"
},
{
"math_id": 58,
"text": "\\Delta U= U_{2}- U_{1}"
},
{
"math_id": 59,
"text": "\\alpha"
},
{
"math_id": 60,
"text": "(1.4 \\pm 9.1)\\times 10^{-5} "
},
{
"math_id": 61,
"text": "x"
},
{
"math_id": 62,
"text": "v"
},
{
"math_id": 63,
"text": "dx"
},
{
"math_id": 64,
"text": "(dx/c)(v/c)"
},
{
"math_id": 65,
"text": "g/dt"
},
{
"math_id": 66,
"text": "(dx/c)(g/c)dt"
},
{
"math_id": 67,
"text": "\nR=1+(g/c^2)dx\n"
},
{
"math_id": 68,
"text": "V"
},
{
"math_id": 69,
"text": "-dV/dx"
},
{
"math_id": 70,
"text": "\n{dR \\over dx} = g/c^2 = - {dV/c^2 \\over dx}\n\\,"
},
{
"math_id": 71,
"text": "\\Delta R"
},
{
"math_id": 72,
"text": "-\\Delta V/c^2"
}
] | https://en.wikipedia.org/wiki?curid=10346 |
10346175 | MEMO model (wind-flow simulation) | The MEMO model (version 6.2) is a Eulerian non-hydrostatic prognostic mesoscale model for wind-flow simulation. It was developed by the Aristotle University of Thessaloniki in collaboration with the Universität Karlsruhe. The MEMO Model together with the photochemical dispersion model MARS are the two core models of the European zooming model (EZM). This model belongs to the family of models designed for describing atmospheric transport phenomena in the local-to-regional scale, frequently referred to as mesoscale air pollution models.
History.
Initially, EZM was developed for modelling the transport and chemical transformation of pollutants in selected European regions in the frame of the EUROTRAC sub-project EUMAC and therefore it was formerly called the EUMAC Zooming Model (EUROTRAC, 1992). EZM has evolved to be one of the most frequently applied mesoscale air pollution model systems in Europe. It has already been successfully applied for various European airsheds including the Upper Rhine valley and the areas of Basel, Graz, Barcelona, Lisbon, Madrid, Milano, London, Cologne, Lyon, The Hague, Athens (Moussiopoulos, 1994; Moussiopoulos, 1995) and Thessaloniki. More details are to be found elsewhere (Moussiopoulos 1989), (Flassak 1990), (Moussiopoulos et al. 1993).
Model equations.
The prognostic mesoscale model MEMO describes the dynamics of the atmospheric boundary layer. In the present model version, air is assumed to be unsaturated. The model solves the continuity equation, the momentum equations and several transport equations for scalars (including the thermal energy equation and, as options, transport equations for water vapour, the turbulent kinetic energy and pollutant concentrations).
Transformation to terrain-following coordinates.
The lower boundary of the model domain coincides with the ground. Because of the inhomogeneity of the terrain, it is not possible to impose boundary conditions at that boundary with respect to Cartesian coordinates. Therefore, a transformation of the vertical coordinate to a terrain-following one is performed. Hence, the originally irregularly bounded physical domain is mapped onto one consisting of unit cubes.
Numerical solution of the equation system.
The discretized equations are solved numerically on a staggered grid, i.e. the scalar quantities formula_0, formula_1 and formula_2 are defined at the cell centre while the velocity components formula_3, formula_4 and formula_5 are defined at the centre of the appropriate interfaces.
Temporal discretization of the prognostic equations is based on the explicit second order Adams-Bashforth scheme. There are two deviations from the Adams-Bashforth scheme: The first refers to the implicit treatment of the nonhydrostatic part of the mesoscale pressure perturbation formula_6. To ensure non-divergence of the flow field, an elliptic equation is solved. The elliptic equation is derived from the continuity equation wherein velocity components are expressed in terms of formula_6. Since the elliptic equation is derived from the discrete form of the continuity equation and the discrete form of the pressure gradient, conservativity is guaranteed (Flassak and Moussiopoulos, 1988). The discrete pressure equation is solved numerically with a fast elliptic solver in conjunction with a generalized conjugate gradient method. The fast elliptic solver is based on fast Fourier analysis in both horizontal directions and Gaussian elimination in the vertical direction (Moussiopoulos and Flassak, 1989).
The second deviation from the explicit treatment is related to the turbulent diffusion in the vertical direction. If this term were treated explicitly, the stability requirement could force an unacceptably small time increment. To avoid this, vertical turbulent diffusion is treated using the second-order Crank–Nicolson method.
In principle, advective terms can be computed using any suitable advection scheme. In the present version of MEMO, a 3D second-order total-variation-diminishing (TVD) scheme is implemented which is based on the 1D scheme proposed by Harten (1986). It achieves a fair (though not complete) reduction of numerical diffusion, the solution being independent of the magnitude of the scalar (preserving transportivity).
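To illustrate only the time stepping (the snippet below is not MEMO source code; it applies the scheme to a trivial scalar decay equation), the explicit second-order Adams–Bashforth method advances a quantity using the current and the previous right-hand-side evaluations:

```python
# y_{n+1} = y_n + Δt (3/2 f_n − 1/2 f_{n−1}), demonstrated on dy/dt = -λy.
import numpy as np

lam, dt, nsteps = 1.0, 0.01, 500
y = np.empty(nsteps + 1)
y[0] = 1.0

f_prev = -lam * y[0]            # f at the previous step, bootstrapped with forward Euler
y[1] = y[0] + dt * f_prev

for n in range(1, nsteps):
    f_curr = -lam * y[n]
    y[n + 1] = y[n] + dt * (1.5 * f_curr - 0.5 * f_prev)
    f_prev = f_curr

t_end = dt * nsteps
print(f"numerical: {y[-1]:.6f}   exact: {np.exp(-lam * t_end):.6f}")
```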
Parameterizations.
Turbulence and radiative transfer are the most important physical processes that have to be parameterized in a prognostic mesoscale model. In the MEMO model, radiative transfer is calculated with an efficient scheme based on the emissivity method for longwave radiation and an implicit multilayer method for shortwave radiation (Moussiopoulos 1987).
The diffusion terms may be represented as the divergence of the corresponding fluxes. For turbulence parameterizations, K-theory is applied. In case of MEMO turbulence can be treated either with a zero-, one- or two-equation turbulence model. For most applications a one-equation model is used, where a conservation equation for the turbulent kinetic energy is solved.
Initial and boundary conditions.
In MEMO, initialization is performed with suitable diagnostic methods: a mass-consistent initial wind field is formulated using an objective analysis model and scalar fields are initialized using appropriate interpolating techniques (Kunz, R., 1991). Data needed to apply the diagnostic methods may be derived either from observations or from larger scale simulations.
Suitable boundary conditions have to be imposed for the wind velocity components formula_3, formula_4 and formula_5, the potential temperature formula_2 and pressure formula_1 at all boundaries. At open boundaries, wave reflection and deformation may be minimized by the use of so-called "radiation conditions" (Orlanski 1976).
According to the experience gained so far with the model MEMO, neglecting large scale environmental information might result in instabilities in case of simulations over longer time periods.
For the nonhydrostatic part of the mesoscale pressure perturbation, homogeneous Neumann boundary conditions are used at lateral boundaries. With these conditions, the wind velocity component perpendicular to the boundary remains unaffected by the pressure change.
At the upper boundary, Neumann boundary conditions are imposed for the horizontal velocity components and the potential temperature. To ensure non-reflectivity, a radiative condition is used for the hydrostatic part of the mesoscale pressure perturbation
formula_7 at that boundary. Hence, vertically propagating internal gravity waves are allowed to leave the computational domain (Klemp and Durran 1983). For the nonhydrostatic part of the mesoscale pressure perturbation, homogeneous staggered Dirichlet conditions are imposed. Being justified by the fact that nonhydrostatic effects are negligible at large heights, this condition is necessary, if singularity of the elliptic pressure equation is to be avoided in view of the Neumann boundary conditions at all other boundaries.
The lower boundary coincides with the ground (or, more precisely, a height above ground corresponding to its aerodynamic roughness). For the non-hydrostatic part of the mesoscale pressure perturbation, inhomogeneous Neumann conditions are imposed at that boundary. All other conditions at the lower boundary follow from the assumption that the Monin–Obukhov similarity theory is valid.
The one way interactive nesting facility is possible within MEMO. Thus, successive simulations on grids of increasing resolution are possible. During these simulations, the results of the application to a coarse grid are used as boundary conditions for the application to the finer grid (Kunz and Moussiopoulos, 1995).
Grid definition.
The governing equations are solved numerically on a staggered grid. Scalar quantities as the temperature, pressure, density and also the cell volume are defined at the centre of a grid cell and the velocity components formula_3, formula_4 and formula_5 at the centre of the appropriate interface. Turbulent fluxes are defined at different locations: Shear fluxes are defined at the centre of the appropriate edges of a grid cell and normal stress fluxes at scalar points. With this definition, the outgoing fluxes of momentum, mass, heat and also turbulent fluxes of a grid cell are identical to incoming flux of the adjacent grid cell. So the numerical method is conservative.
Topography and surface type.
For calculations with MEMO, a file must be provided which contains the orography height and surface type for each grid location. The following surface types are distinguished and must be stored as percentages:
Only surface types 1–6 have to be stored. Type 7 is the difference between 100% and the sum of types 1–6. If the percentage of a surface type is 100%, then write the number 10 and for all other surface types the number 99.
The orography height is the mean height of each grid location above sea level, in meters.
Meteorological data.
The prognostic model MEMO is a set of partial differential equations in three spatial directions and in time. To solve these equations, information about the initial state in the whole domain and about the development of all relevant quantities at the lateral boundaries is required.
Initial state.
To generate an initial state for the prognostic model, a diagnostic model (Kunz, R., 1991) is applied using measured temperature and wind data. Both data can be provided as:
Time-dependent boundary conditions.
Information about quantities at the lateral boundaries can be taken into account as surface measurements and upper air soundings. Therefore, a key word and the time when boundary data is given must occur in front of a set of boundary information.
Nesting facility.
In MEMO, a one-way interactive nesting scheme is implemented. With this nesting scheme a coarse grid and a fine grid simulation can be nested. During the coarse grid simulation, data is interpolated and written to a file. A consecutive fine grid simulation uses this data as lateral boundary values. | [
{
"math_id": 0,
"text": "\\rho"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "\\theta"
},
{
"math_id": 3,
"text": "u"
},
{
"math_id": 4,
"text": "v"
},
{
"math_id": 5,
"text": "w"
},
{
"math_id": 6,
"text": "p_{{nh}}"
},
{
"math_id": 7,
"text": "p_h"
}
] | https://en.wikipedia.org/wiki?curid=10346175 |
10346518 | Dimethyl malonate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Dimethyl malonate is a diester derivative of malonic acid. It is a common reagent for organic synthesis used, for example, as a precursor for barbituric acid. It is also used in the malonic ester synthesis. It can be synthesized from dimethoxymethane and carbon monoxide.
formula_0
Dimethyl malonate is used extensively in the fragrance industry as a raw material in the synthesis of jasmonates. For example, methyl dihydrojasmonate is synthesized from cyclopentanone, pentanal and dimethyl malonate. Hedione is used in almost all fine fragrances and is found in Christian Dior's "Eau Sauvage" and "Diorella", Hermes' "Voyage d'Hermes Parfum", Calvin Klein's "CKOne", Chanel's "Chanel No. 19", and Mark Jacob's "Blush", among others. As of 2009, Hedione was Firmenich's top selling compound by volume.
Hebei Chengxin is the world's largest producer of dimethyl malonate by volume and uses a chloroacetic acid/sodium cyanide process developed in the 1940s.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{H_2C(OCH_3)_2 + 2 \\ CO \\longrightarrow CH_2(CO_2CH_3)_2}"
}
] | https://en.wikipedia.org/wiki?curid=10346518 |
1034699 | Constitutive equation | Relation between two physical quantities which is specific to a substance
In physics and engineering, a constitutive equation or constitutive relation is a relation between two or more physical quantities (especially kinetic quantities as related to kinematic quantities) that is specific to a material or substance or field, and approximates its response to external stimuli, usually as applied fields or forces. They are combined with other equations governing physical laws to solve physical problems; for example in fluid mechanics the flow of a fluid in a pipe, in solid state physics the response of a crystal to an electric field, or in structural analysis, the connection between applied stresses or loads to strains or deformations.
Some constitutive equations are simply phenomenological; others are derived from first principles. A common approximate constitutive equation frequently is expressed as a simple proportionality using a parameter taken to be a property of the material, such as electrical conductivity or a spring constant. However, it is often necessary to account for the directional dependence of the material, and the scalar parameter is generalized to a tensor. Constitutive relations are also modified to account for the rate of response of materials and their non-linear behavior. See the article Linear response function.
Mechanical properties of matter.
The first constitutive equation (constitutive law) was developed by Robert Hooke and is known as Hooke's law. It deals with the case of linear elastic materials. Following this discovery, this type of equation, often called a "stress-strain relation" in this example, but also called a "constitutive assumption" or an "equation of state" was commonly used. Walter Noll advanced the use of constitutive equations, clarifying their classification and the role of invariance requirements, constraints, and definitions of terms
like "material", "isotropic", "aeolotropic", etc. The class of "constitutive relations" of the form "stress rate = f (velocity gradient, stress, density)" was the subject of Walter Noll's dissertation in 1954 under Clifford Truesdell.
In modern condensed matter physics, the constitutive equation plays a major role. See Linear constitutive equations and Nonlinear correlation functions.
Deformation of solids.
Friction.
Friction is a complicated phenomenon. Macroscopically, the friction force "F" between the interface of two materials can be modelled as proportional to the reaction force "R" at a point of contact between two interfaces through a dimensionless coefficient of friction "μ"f, which depends on the pair of materials:
formula_0
This can be applied to static friction (friction preventing two stationary objects from slipping on their own), kinetic friction (friction between two objects scraping/sliding past each other), or rolling (frictional force which prevents slipping but causes a torque to exert on a round object).
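A minimal sketch (an editorial illustration; the mass and coefficient of friction are assumed example values) of the friction law for a block on a horizontal surface:

```python
g = 9.81        # m/s²
mass = 12.0     # kg, assumed
mu_f = 0.4      # coefficient of friction, assumed for the material pair

R = mass * g            # normal reaction force on a horizontal surface
F_max = mu_f * R        # maximum friction force before slipping
print(f"R = {R:.1f} N, maximum friction force ≈ {F_max:.1f} N")
```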
Stress and strain.
The stress-strain constitutive relation for linear materials is commonly known as Hooke's law. In its simplest form, the law defines the spring constant (or elasticity constant) "k" in a scalar equation, stating the tensile/compressive force is proportional to the extended (or contracted) displacement "x":
formula_1
meaning the material responds linearly. Equivalently, in terms of the stress "σ", Young's modulus "E", and strain "ε" (dimensionless):
formula_2
In general, forces which deform solids can be normal to a surface of the material (normal forces), or tangential (shear forces), this can be described mathematically using the stress tensor:
formula_3
where "C" is the elasticity tensor and "S" is the compliance tensor.
Solid-state deformations.
Several classes of deformations in elastic materials are the following:
; Viscoelastic: If the time-dependent resistive contributions are large, and cannot be neglected. Rubbers and plastics have this property, and certainly do not satisfy Hooke's law. In fact, elastic hysteresis occurs.
; Anelastic: If the material is close to elastic, but the applied force induces additional time-dependent resistive forces (i.e. depend on rate of change of extension/compression, in addition to the extension/compression). Metals and ceramics have this characteristic, but it is usually negligible, although not so much when heating due to friction occurs (such as vibrations or shear stresses in machines).
; Hyperelastic: The applied force induces displacements in the material following a strain energy density function.
Collisions.
The relative speed of separation "v"separation of an object A after a collision with another object B is related to the relative speed of approach "v"approach by the coefficient of restitution, defined by Newton's experimental impact law:
formula_4
which depends on the materials A and B are made from, since the collision involves interactions at the surfaces of A and B. Usually 0 ≤ "e" ≤ 1, where "e" = 1 for completely elastic collisions and "e" = 0 for completely inelastic collisions. It is possible for "e" > 1 to occur – for superelastic (or explosive) collisions.
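Combined with conservation of momentum, the impact law fixes the post-collision velocities in one dimension; the sketch below (an editorial illustration with arbitrary example masses and speeds) shows the effect of different values of "e".

```python
def collide_1d(m1, u1, m2, u2, e):
    """Head-on collision: momentum conservation plus e = v_separation / v_approach."""
    p = m1 * u1 + m2 * u2
    v1 = (p - m2 * e * (u1 - u2)) / (m1 + m2)
    v2 = (p + m1 * e * (u1 - u2)) / (m1 + m2)
    return v1, v2

for e in (1.0, 0.5, 0.0):   # elastic, partially inelastic, perfectly inelastic
    v1, v2 = collide_1d(m1=2.0, u1=3.0, m2=1.0, u2=0.0, e=e)
    print(f"e={e}:  v1={v1:.2f} m/s  v2={v2:.2f} m/s  separation={v2 - v1:.2f} m/s")
```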
Deformation of fluids.
The drag equation gives the drag force "D" on an object of cross-section area "A" moving through a fluid of density "ρ" at velocity "v" (relative to the fluid)
formula_5
where the drag coefficient (dimensionless) "cd" depends on the geometry of the object and the drag forces at the interface between the fluid and object.
For a Newtonian fluid of viscosity "μ", the shear stress "τ" is linearly related to the strain rate (transverse flow velocity gradient) ∂"u"/∂"y" (units "s"−1). In a uniform shear flow:
formula_6
with "u"("y") the variation of the flow velocity "u" in the cross-flow (transverse) direction "y". In general, for a Newtonian fluid, the relationship between the elements "τ""ij" of the shear stress tensor and the deformation of the fluid is given by
formula_7 with formula_8 and formula_9
where "v""i" are the components of the flow velocity vector in the corresponding "x""i" coordinate directions, "e""ij" are the components of the strain rate tensor, Δ is the volumetric strain rate (or dilatation rate) and "δ""ij" is the Kronecker delta.
The "ideal gas law" is a constitutive relation in the sense the pressure "p" and volume "V" are related to the temperature "T", via the number of moles "n" of gas:
formula_10
where "R" is the gas constant (J⋅K−1⋅mol−1).
Electromagnetism.
Constitutive equations in electromagnetism and related areas.
In both classical and quantum physics, the precise dynamics of a system form a set of coupled differential equations, which are almost always too complicated to be solved exactly, even at the level of statistical mechanics. In the context of electromagnetism, this remark applies to not only the dynamics of free charges and currents (which enter Maxwell's equations directly), but also the dynamics of bound charges and currents (which enter Maxwell's equations through the constitutive relations). As a result, various approximation schemes are typically used.
For example, in real materials, complex transport equations must be solved to determine the time and spatial response of charges, for example, the Boltzmann equation or the Fokker–Planck equation or the Navier–Stokes equations. For example, see magnetohydrodynamics, fluid dynamics, electrohydrodynamics, superconductivity, plasma modeling. An entire physical apparatus for dealing with these matters has developed. See for example, linear response theory, Green–Kubo relations and Green's function (many-body theory).
These complex theories provide detailed formulas for the constitutive relations describing the electrical response of various materials, such as permittivities, permeabilities, conductivities and so forth.
It is necessary to specify the relations between displacement field D and E, and the magnetic H-field H and B, before doing calculations in electromagnetism (i.e. applying Maxwell's macroscopic equations). These equations specify the response of bound charge and current to the applied fields and are called constitutive relations.
Determining the constitutive relationship between the auxiliary fields D and H and the E and B fields starts with the definition of the auxiliary fields themselves:
formula_11
where P is the polarization field and M is the magnetization field which are defined in terms of microscopic bound charges and bound current respectively. Before getting to how to calculate M and P it is useful to examine the following special cases.
Without magnetic or dielectric materials.
In the absence of magnetic or dielectric materials, the constitutive relations are simple:
formula_12
where "ε"0 and "μ"0 are two universal constants, called the permittivity of free space and permeability of free space, respectively.
Isotropic linear materials.
In an (isotropic) linear material, where P is proportional to E, and M is proportional to B, the constitutive relations are also straightforward. In terms of the polarization P and the magnetization M they are:
formula_13
where "χ"e and "χ"m are the electric and magnetic susceptibilities of a given material respectively. In terms of D and H the constitutive relations are:
formula_14
where "ε" and "μ" are constants (which depend on the material), called the permittivity and permeability, respectively, of the material. These are related to the susceptibilities by:
formula_15
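As a check on these relations, the short sketch below computes P and D for an assumed dielectric with electric susceptibility "χ"e = 2 (and negligible magnetic response) and verifies that D = "ε"E and D = "ε"0E + P agree; all values are illustrative.
```python
# Sketch of the isotropic linear relations for an assumed dielectric.
eps0 = 8.854e-12   # F/m

chi_e = 2.0                  # assumed electric susceptibility
eps_r = chi_e + 1.0          # relative permittivity
eps = eps_r * eps0           # permittivity of the material

E = 1.0e3                    # assumed applied field, V/m
P = eps0 * chi_e * E         # polarization, C/m^2
D = eps * E                  # displacement field, C/m^2

print(P, D, D - (eps0 * E + P))  # last value ~0: the two routes to D agree
```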
General case.
For real-world materials, the constitutive relations are not linear, except approximately. Calculating the constitutive relations from first principles involves determining how P and M are created from a given E and B. These relations may be empirical (based directly upon measurements) or theoretical (based upon statistical mechanics, transport theory or other tools of condensed matter physics). The detail employed may be macroscopic or microscopic, depending upon the level of detail necessary for the problem under scrutiny.
In general, the constitutive relations can usually still be written:
formula_16
but "ε" and "μ" are not, in general, simple constants, but rather functions of E, B, position and time, and tensorial in nature. Examples are:
As a variation of these examples, in general materials are bianisotropic where D and B depend on both E and H, through the additional "coupling constants" "ξ" and "ζ":
formula_17
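To illustrate what "tensorial in nature" means in practice, the sketch below applies an assumed anisotropic permittivity tensor to a field E; the resulting D is no longer parallel to E. The numerical values are illustrative, not material data.
```python
import numpy as np

# Sketch of an anisotropic (tensorial) permittivity: D_i = eps_ij * E_j.
# The diagonal values mimic a uniaxial crystal with a different response
# along z; the numbers are assumptions.
eps0 = 8.854e-12
eps = eps0 * np.diag([2.3, 2.3, 2.8])   # permittivity tensor, F/m

E = np.array([1.0e3, 0.0, 1.0e3])       # assumed applied field, V/m
D = eps @ E                             # D is no longer parallel to E

print(D)
```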
In practice, some material properties have a negligible impact in particular circumstances, permitting the neglect of small effects. For example: optical nonlinearities can be neglected for low field strengths; material dispersion is unimportant when frequency is limited to a narrow bandwidth; material absorption can be neglected for wavelengths at which a material is transparent; and metals with finite conductivity are often approximated at microwave or longer wavelengths as perfect metals with infinite conductivity (forming hard barriers with zero skin depth of field penetration).
Some man-made materials such as metamaterials and photonic crystals are designed to have customized permittivity and permeability.
Calculation of constitutive relations.
The theoretical calculation of a material's constitutive equations is a common, important, and sometimes difficult task in theoretical condensed-matter physics and materials science. In general, the constitutive equations are theoretically determined by calculating how a molecule responds to the local fields through the Lorentz force. Other forces may need to be modeled as well such as lattice vibrations in crystals or bond forces. Including all of the forces leads to changes in the molecule which are used to calculate P and M as a function of the local fields.
The local fields differ from the applied fields due to the fields produced by the polarization and magnetization of nearby material; an effect which also needs to be modeled. Further, real materials are not continuous media; the local fields of real materials vary wildly on the atomic scale. The fields need to be averaged over a suitable volume to form a continuum approximation.
These continuum approximations often require some type of quantum mechanical analysis such as quantum field theory as applied to condensed matter physics. See, for example, density functional theory, Green–Kubo relations and Green's function.
A different set of "homogenization methods" (evolving from a tradition in treating materials such as conglomerates and laminates) are based upon approximation of an inhomogeneous material by a homogeneous "effective medium" (valid for excitations with wavelengths much larger than the scale of the inhomogeneity).
The theoretical modeling of the continuum-approximation properties of many real materials often relies upon experimental measurement as well. For example, "ε" of an insulator at low frequencies can be measured by making it into a parallel-plate capacitor, and "ε" at optical-light frequencies is often measured by ellipsometry.
Thermoelectric and electromagnetic properties of matter.
These constitutive equations are often used in crystallography, a field of solid-state physics.
Photonics.
Refractive index.
The (absolute) refractive index of a medium "n" (dimensionless) is an important property of geometric and physical optics, defined as the ratio of the speed of light in vacuum "c"0 to the speed of light in the medium "c":
formula_18
where "ε" is the permittivity and "ε"r the relative permittivity of the medium, likewise "μ" is the permeability and "μ"r are the relative permeability of the medium. The vacuum permittivity is "ε"0 and vacuum permeability is "μ"0. In general, "n" (also "ε"r) are complex numbers.
The relative refractive index is defined as the ratio of two refractive indices: an absolute index characterizes a single material, while a relative index applies to each possible pair of interfacing media;
formula_19
Speed of light in matter.
As a consequence of the definition, the speed of light in matter is
formula_20
and for the special case of vacuum, where "ε" = "ε"0 and "μ" = "μ"0,
formula_21
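A minimal numerical sketch of these two relations follows, for an assumed non-magnetic glass with "ε"r = 2.25 (so "n" = 1.5) and "μ"r = 1; the material values are illustrative.
```python
import math

# Sketch evaluating n = sqrt(eps_r * mu_r) and c = c0 / n for an assumed
# non-magnetic dielectric.
c0 = 299_792_458.0     # speed of light in vacuum, m/s

def refractive_index(eps_r, mu_r=1.0):
    return math.sqrt(eps_r * mu_r)

n = refractive_index(eps_r=2.25)   # assumed relative permittivity
print(n)          # 1.5
print(c0 / n)     # ~2.0e8 m/s, the speed of light in the medium
```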
Piezooptic effect.
The piezooptic effect relates the stresses in solids "σ" to the dielectric impermeability "a", which are coupled by a fourth-rank tensor called the piezooptic coefficient Π (units Pa−1):
formula_22
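The fourth-rank contraction in formula_22 can be written compactly with an Einstein-summation routine; the sketch below uses a randomly generated, purely illustrative piezooptic tensor and an assumed uniaxial stress.
```python
import numpy as np

# Sketch of the fourth-rank contraction a_ij = Pi_ijpq * sigma_pq.
rng = np.random.default_rng(0)
Pi = rng.normal(size=(3, 3, 3, 3)) * 1e-12   # illustrative coefficients, Pa^-1
sigma = np.zeros((3, 3))
sigma[0, 0] = 1.0e6                          # assumed 1 MPa uniaxial stress, Pa

a = np.einsum('ijpq,pq->ij', Pi, sigma)      # change in dielectric impermeability
print(a)
```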
Transport phenomena.
Definitive laws.
There are several laws which describe the transport of matter, or properties of it, in an almost identical way. In every case, in words they read:
"Flux (density) is proportional to a gradient, the constant of proportionality is the characteristic of the material."
In general the constant must be replaced by a second-rank tensor, to account for the directional dependence of the material.
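As one illustrative instance of this generic form, the sketch below evaluates a flux from a gradient using an assumed second-rank conductivity tensor (Fourier-type heat conduction, with the usual minus sign so that the flux runs down the gradient); all numbers are assumptions.
```python
import numpy as np

# Generic flux law: flux = -K @ grad(field), with K a material constant
# (here a second-rank tensor). Fourier heat conduction in an anisotropic
# material is used as one illustrative instance.
K = np.array([[400.0,   0.0,   0.0],   # thermal conductivity tensor, W/(m*K)
              [  0.0, 400.0,   0.0],   # (assumed values)
              [  0.0,   0.0, 100.0]])

grad_T = np.array([10.0, 0.0, 10.0])   # assumed temperature gradient, K/m

q = -K @ grad_T                        # heat flux density, W/m^2
print(q)                               # about [-4000, 0, -1000]
```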
| [
{
"math_id": 0,
"text": "F = \\mu_\\text{f} R. "
},
{
"math_id": 1,
"text": "F_i=-k x_i "
},
{
"math_id": 2,
"text": "\\sigma = E \\, \\varepsilon "
},
{
"math_id": 3,
"text": "\\sigma_{ij} = C_{ijkl} \\, \\varepsilon_{kl} \\, \\rightleftharpoons \\, \\varepsilon_{ij} = S_{ijkl} \\, \\sigma_{kl} "
},
{
"math_id": 4,
"text": " e = \\frac{|\\mathbf{v}|_\\text{separation}}{| \\mathbf{v}|_\\text{approach}} "
},
{
"math_id": 5,
"text": "D=\\frac{1}{2}c_d \\rho A v^2 "
},
{
"math_id": 6,
"text": "\\tau = \\mu \\frac{\\partial u}{\\partial y},"
},
{
"math_id": 7,
"text": "\\tau_{ij} = 2 \\mu \\left( e_{ij} - \\frac13 \\Delta \\delta_{ij} \\right)"
},
{
"math_id": 8,
"text": "e_{ij}=\\frac12 \\left( \\frac {\\partial v_i}{\\partial x_j} + \\frac {\\partial v_j}{\\partial x_i} \\right)"
},
{
"math_id": 9,
"text": "\\Delta = \\sum_k e_{kk} = \\text{div}\\; \\mathbf{v},"
},
{
"math_id": 10,
"text": "pV = nRT"
},
{
"math_id": 11,
"text": "\\begin{align}\n \\mathbf{D}(\\mathbf{r}, t) &= \\varepsilon_0 \\mathbf{E}(\\mathbf{r}, t) + \\mathbf{P}(\\mathbf{r}, t) \\\\\n \\mathbf{H}(\\mathbf{r}, t) &= \\frac{1}{\\mu_0} \\mathbf{B}(\\mathbf{r}, t) - \\mathbf{M}(\\mathbf{r}, t),\n\\end{align}"
},
{
"math_id": 12,
"text": "\\mathbf{D} = \\varepsilon_0\\mathbf{E} ,\\quad \\mathbf{H} = \\mathbf{B}/\\mu_0"
},
{
"math_id": 13,
"text": "\\mathbf{P} = \\varepsilon_0\\chi_e\\mathbf{E} ,\\quad \\mathbf{M} = \\chi_m\\mathbf{H},"
},
{
"math_id": 14,
"text": "\\mathbf{D} = \\varepsilon\\mathbf{E} ,\\quad \\mathbf{H} = \\mathbf{B}/\\mu,"
},
{
"math_id": 15,
"text": "\\varepsilon/\\varepsilon_0 = \\varepsilon_r = \\chi_e + 1 ,\\quad \\mu / \\mu_0 = \\mu_r = \\chi_m + 1"
},
{
"math_id": 16,
"text": "\\mathbf{D} = \\varepsilon\\mathbf{E} ,\\quad \\mathbf{H} = \\mu^{-1}\\mathbf{B}"
},
{
"math_id": 17,
"text": "\\mathbf{D}=\\varepsilon \\mathbf{E} + \\xi \\mathbf{H} \\,,\\quad \\mathbf{B} = \\mu \\mathbf{H} + \\zeta \\mathbf{E}."
},
{
"math_id": 18,
"text": " n = \\frac{c_0}{c} = \\sqrt{\\frac{\\varepsilon \\mu}{\\varepsilon_0 \\mu_0}} = \\sqrt{\\varepsilon_r \\mu_r} "
},
{
"math_id": 19,
"text": " n_{AB} = \\frac{n_A}{n_B} "
},
{
"math_id": 20,
"text": "c = \\frac{1}{\\sqrt{\\varepsilon \\mu}}"
},
{
"math_id": 21,
"text": "c_0 = \\frac{1}{\\sqrt{\\varepsilon_0\\mu_0}}"
},
{
"math_id": 22,
"text": "a_{ij} = \\Pi_{ijpq}\\sigma_{pq} "
}
] | https://en.wikipedia.org/wiki?curid=1034699 |
1034887 | Content theory | Subset of motivational theories
Content theory is a subset of motivational theories that try to define what motivates people. Content theories of motivation often describe a system of needs that motivate people's actions. While process theories of motivation attempt to explain how and why our motivations affect our behaviors, content theories of motivation attempt to define what those motives or needs are. Content theory includes the work of David McClelland, Abraham Maslow and other psychologists.
McGregor's Theory X and Theory Y.
Douglas McGregor proposed two different motivational theories. Managers tend to believe one or the other and treat their employees accordingly. Theory X states that employees dislike and try to avoid work, so they must be coerced into doing it. Most workers do not want responsibilities, lack ambition, and value job security more than anything else.
McGregor personally held that the more optimistic theory, Y, was more valid. This theory holds that employees can view work as natural, are creative, can be self-motivated, and appreciate responsibility. This type of thinking is popular now, with people becoming more aware of the productivity of self-empowered work teams.
ERG theory.
ERG theory was introduced by Clayton Alderfer as an extension of Maslow's famous hierarchy of needs. In this theory, the Existence (or physiological) needs are at the base. These include the needs for things such as food, drink, shelter, and safety. Next come the Relatedness Needs, the need to feel connected to other individuals or a group. These needs are fulfilled by establishing and maintaining relationships.
At the top of the hierarchy are Growth Needs, the needs for personal achievement and self-actualization. If a person is continuously frustrated in trying to satisfy growth needs, relatedness needs will re-emerge. This phenomenon is known as the frustration-regression process.
Herzberg's Motivation-Hygiene theory (Two-factor theory).
Frederick Herzberg held that job satisfaction and dissatisfaction do not exist on the same continuum, but on dual scales. In other words, certain things, which Herzberg called hygiene factors, could cause a person to become unhappy with their job. These things, including pay, job security, and the physical work environment, could never bring about job satisfaction. Motivating factors, on the other hand, can increase job satisfaction: giving employees things such as a sense of recognition, responsibility, or achievement can bring about satisfaction.
Need theory.
David McClelland proposed a context for understanding the needs in people, which holds significance in understanding their motivations and behaviors. It is subdivided into three categories: the Need for Achievement, the Need for Affiliation, and the Need for Power.
The Need for Achievement refers to the notion of getting ahead and succeeding. The Need for Affiliation is the desire to be around people and be well received socially. It also includes the desire for membership in a group and conformity. The Need for Power is the desire for control over others and over oneself; it includes the need to be able to exercise direction in the surrounding world and cause things to happen. Individuals who have a high need for achievement will tend to engage in competitive activities in order to fulfill this desire. Individuals who need to feel affiliated will tend to join clubs, groups and teams to satiate that want. Individuals who have the need for power will seek activities which likewise satisfy this need, such as running for high positions in organizations and seeking opportunities to exercise that dominance.
This is not to say that one person cannot have needs spanning all three categories. A person may have the need for affiliation at the same time they have the need for power. While this may initially seem contradictory, there are instances where both needs can be fulfilled. Also, timing may connote different strengths of needs at different moments. So, while a person may strongly feel the need to affiliate during times of loneliness, they may at another time feel the strong need for power when instructed to organize an event. Needs may arise and change along with a change of context.
Maslow's hierarchy of needs.
Content theory of human motivation includes both Abraham Maslow's hierarchy of needs and Herzberg's two-factor theory. Maslow's theory is one of the most widely discussed theories of motivation. Abraham Maslow believed that man is inherently good and argued that individuals possess a constantly growing inner drive that has great potential. The needs hierarchy system is a commonly used scheme for classifying human motives.
The American motivation psychologist Abraham H. Maslow (1954) developed the hierarchy of needs consisting of five hierarchic classes. According to Maslow, people are motivated by unsatisfied needs. The needs, listed from basic (lowest-earliest) to most complex (highest-latest) are as follows:
The basic requirements build upon the first step in the pyramid: physiology. If there are deficits on this level, all behavior will be oriented to satisfy this deficit. Essentially, a person who has not slept or eaten adequately will not be interested in self-esteem desires. Subsequently comes the second level, which awakens a need for security. After securing those two levels, the motives shift to the social sphere, the third level. Esteem needs comprise the fourth level, while the top of the hierarchy consists of self-realization and self-actualization.
Maslow's hierarchy of needs theory can be summarized as follows:
Sex, Hedonism, and Evolution.
One of the first influential figures to discuss the topic of hedonism was Socrates (c. 470–399 BCE) in ancient Greece. Hedonism, as Socrates described it, is the motivation wherein a person will behave in a manner that maximizes pleasure and minimizes pain. The only instance in which a person will behave in a manner that results in more pain than pleasure is when knowledge of the effects of the behavior is lacking. Sex is one of the pleasures people pursue.
Sex is on the first level of Maslow's hierarchy of needs. It is a necessary physiological need like air, warmth, or sleep, and if the body lacks it, it will not function optimally. Without the orgasm that comes with sex, a person will experience "pain," and as hedonism would predict, a person will minimize this pain by pursuing sex. That said, sex as a basic need is different from the need for sexual intimacy, which is located on the third level of Maslow's hierarchy.
There are multiple theories for why sex is a strong motivation, and many fall under the theory of evolution. On an evolutionary level, the motivation for sex likely has to do with a species' ability to reproduce. Species that reproduce more, survive and pass on their genes. Therefore, species have a sexual desire that leads to sexual intercourse as a means to create more offspring. Without this innate motivation, a species may determine that attaining intercourse is too costly in terms of effort, energy, and danger.
In addition to sexual desire, the motivation for romantic love runs parallel in having an evolutionary function for the survival of a species. On an emotional level, romantic love satiates a psychological need for belonging. Therefore, this is another hedonistic pursuit of pleasure. From the evolutionary perspective, romantic love creates bonds with the parents of offspring. This bond will make it so that the parents will stay together and take care of and protect the offspring until it is independent. By rearing the child together, it increases the chances that the offspring will survive and pass on its genes themselves, therefore continuing the survival of the species. Without the romantic love bond, the male will pursue satiation of his sexual desire with as many mates as possible, leaving behind the female to rear the offspring by herself. Child-rearing with one parent is more difficult and provides less assurance of the offspring's survival than with two parents. Romantic love therefore solves the commitment problem of parents needing to be together; individuals that are loyal and faithful to one another will have mutual survival benefits.
Additionally, under the umbrella of evolution is Darwin's term sexual selection. This refers to how the female selects the male for reproduction. The male is motivated to attain sex because of all the aforementioned reasons, but how he attains it can vary based on his qualities. Some females are motivated mostly by the will to survive and will prefer a mate that can physically defend them or financially provide for them (among humans). Other females are more attracted to charm, as it is an indicator of being a good, loyal lover who will in turn make a dependable child-rearing partner. Altogether, sex is a hedonistic pleasure-seeking behavior that satiates physical and psychological needs and is instinctively guided by principles of evolution.
Self-determination theory.
Since the early 1970s Deci and Ryan have developed and tested their self-determination theory (SDT). SDT identifies three innate needs that, if satisfied, allow optimal function and growth: competence, relatedness, and autonomy. These three psychological needs are suggested to be essential for psychological health & well-being along with behavioral motivation. There are three essential elements to the theory:
Within Self-Determination Theory, Deci & Ryan distinguish between four different types of extrinsic motivation, differing in their levels of perceived autonomy:
"16 basic desires" theory.
Starting from studies involving more than 6,000 people, Reiss proposed that 16 basic desires guide nearly all human behavior. In this model the basic desires that motivate our actions and define our personalities are:
Natural theories.
The natural system assumes that people have higher-order needs, which contrasts with the rational theory that suggests that people dislike work and only respond to rewards and punishment.<ref name="Lecture 10/1">Dobbin, Frank. "From Incentives to Teamwork: Rational and Natural Management Systems." Lecture. Harvard University. Cambridge, Massachusetts. 1 October 2012.</ref> According to McGregor's Theory Y, human behavior is based on satisfying a hierarchy of needs: physiological, safety, social, ego, and self-fulfillment.
Physiological needs are the lowest and the most important level. These fundamental requirements include food, rest, shelter, and exercise. After the physiological needs are satisfied, employees can focus on safety needs, which include "protection against danger, threat and deprivation." However, if management makes arbitrary or biased employment decisions, then an employee's safety needs are unfulfilled.
The next set of needs is social, which refers to the desire for acceptance, affiliation, reciprocal friendships, and love. As such, the natural system of management assumes that close-knit work teams are productive. Accordingly, if an employee's social needs are unmet, then he will act disobediently.
There are two types of egoistic needs, the second-highest order of needs. The first type refers to one's self-esteem, which encompasses self-confidence, independence, achievement, competence, and knowledge. The second type of needs deals with reputation, status, recognition, and respect from colleagues. Egoistic needs are much more difficult to satisfy.
The highest order of needs is for self-fulfillment, including recognition of one's full potential, areas for self-improvement, and the opportunity for creativity. This differs from the rational system, which assumes that people prefer routine and security to creativity. Unlike the rational management system, which assumes that humans don't care about these higher-order needs, the natural system is based on these needs as a means for motivation.
The author of the reductionist motivation model is Sigmund Freud. According to the model, physiological needs raise tension, thereby forcing an individual to seek an outlet by satisfying those needs.
Self-management through teamwork.
To successfully manage and motivate employees, the natural system posits that being a part of a group is necessary. Because of structural changes in the social order, the workplace is more fluid and adaptive according to Mayo. As a result, individual employees have lost their sense of stability and security, which can be provided by being a member of a group. However, if teams continuously change within jobs, then employees feel anxious, empty, and irrational and become harder to work with. The innate desire for lasting human association and management "is not related to single workers, but always to working groups." In groups, employees will self-manage and form relevant customs, duties, and traditions.
Wage incentives.
Humans are motivated by additional factors besides wage incentives. Unlike the rational theory of motivation, people are not driven toward economic interests per the natural system. For instance, the straight piecework system pays employees based on each unit of their output. Based on studies such as the Bank Wiring Observation Room, using a piece rate incentive system does not lead to higher production. Employees actually set upper limits on each person's daily output. These actions stand "in direct opposition to the ideas underlying their system of financial incentive, which countenanced no upper limit to performance other than the physical capacity of the individual." Therefore, as opposed to the rational system that depends on economic rewards and punishments, the natural system of management assumes that humans are also motivated by non-economic factors.
Autonomy: increased motivation for autonomous tasks.
Employees seek autonomy and responsibility in their work, contrary to assumptions of the rational theory of management. Because supervisors have direct authority over employees, they must ensure that the employee's actions are in line with the standards of efficient conduct. This creates a sense of restriction on the employee and these constraints are viewed as "annoying and seemingly functioned only as subordinating or differentiating mechanisms." Accordingly, the natural management system assumes that employees prefer autonomy and responsibility on the job and dislike arbitrary rules and overwhelming supervision. An individual's motivation to complete a task is increased when the task is autonomous. When the motivation to complete a task comes from an "external pressure" that pressure then "undermines" the person's motivation, and as a result decreases the person's desire to complete the task.
Rational motivations.
The idea that human beings are rational and that human behavior is guided by reason is an old one. However, recent research (on satisficing, for example) has significantly undermined the idea of homo economicus or of perfect rationality in favor of a more bounded rationality. The field of behavioral economics is particularly concerned with the limits of rationality in economic agents.
Incentive theories: intrinsic and extrinsic motivation.
Motivation can be divided into two different theories known as "intrinsic" (internal or inherent in the activity itself) motivation and "extrinsic" (contingent on external rewards or punishment) motivation.
Intrinsic motivation.
Intrinsic motivation has been studied since the early 1970s. Intrinsic motivation is behavior that is driven by satisfying internal rewards. For example, an athlete may enjoy playing football for the experience, rather than for an award. It is an interest or enjoyment in the task itself, and exists within the individual rather than relying on external pressures or a desire for consideration. Deci (1971) explained that some activities provide their own inherent reward, meaning certain activities are not dependent on external rewards. The phenomenon of intrinsic motivation was first acknowledged within experimental studies of animal behavior. In these studies, it was evident that the organisms would engage in playful and curiosity-driven behaviors in the absence of reward. Intrinsic motivation is a natural motivational tendency and is a critical element in cognitive, social, and physical development. The two necessary elements for intrinsic motivation are self-determination and an increase in perceived competence. In short, the cause of the behavior must be internal, known as an internal locus of causality, and the individual who engages in the behavior must perceive that the task increases their competence. According to research reported in Deci's published findings of 1971 and 1972, tangible rewards could actually undermine the intrinsic motivation of college students. Kruglanski, Friedman, and Zeevi (1971) reported similar results, finding that symbolic and material rewards can undermine the intrinsic motivation not just of high school students, but of preschool students as well.
Students who are intrinsically motivated are more likely to engage in the task willingly as well as work to improve their skills, which will increase their capabilities. Students are likely to be intrinsically motivated if they...
An example of intrinsic motivation is when an employee becomes an IT professional because he or she wants to learn about how computer users interact with computer networks. The employee has the intrinsic motivation to gain more knowledge, and will continue to want to learn even in the face of failure. Art for art's sake is an example of intrinsic motivation in the domain of art.
Traditionally, researchers thought that motivation to use computer systems was driven primarily by extrinsic purposes; however, many modern systems have their use driven primarily by intrinsic motivations. Examples of such systems, used primarily to fulfill users' intrinsic motivations, include on-line gaming, virtual worlds, online shopping, learning/education, online dating, digital music repositories, social networking, online pornography, gamified systems, and general gamification. Even traditional management information systems (e.g., ERP, CRM) are being 'gamified' such that both extrinsic and intrinsic motivations must increasingly be considered. Deci's findings did not come without controversy: articles stretching over a span of 25 years and written from the perspective of behavioral theory argue that there is not enough evidence to explain intrinsic motivation and that the theory would inhibit "scientific progress." As stated above, though, the use of many modern technologies, such as various forms of computer systems, is highly intrinsically motivated.
Not only can intrinsic motivation be used in a personal setting, but it can also be implemented and utilized in a social environment. Instead of attaining mature desires, such as those presented above via the internet which can be attained on one's own, intrinsic motivation can be used to assist extrinsic motivation to attain a goal. For example, Eli, a 4-year-old with autism, wants to achieve the goal of playing with a toy train. To get the toy, he must first communicate to his therapist that he wants it. His desire to play is strong enough to be considered intrinsic motivation because it is a natural feeling, and his desire to communicate with his therapist to get the train can be considered extrinsic motivation because the outside object is a reward (see incentive theory). Communicating with the therapist is the first, the slightly more challenging goal that stands in the way of achieving his larger goal of playing with the train. Achieving these goals in attainable pieces is also known as the goal-setting theory. The three elements of goal-setting (STD) are Specific, Time-bound, and Difficult. Specifically, goals should be set in the 90th percentile of difficulty.
Intrinsic motivation comes from one's desire to achieve or attain a goal. Pursuing challenges and goals come easier and more enjoyable when one is intrinsically motivated to complete a certain objective because the individual is more interested in learning, rather than achieving the goal. Edward Deci and Richard Ryan's theory of intrinsic motivation is essentially examining the conditions that "elicit and sustain" this phenomenon. Deci and Ryan coined the term "cognitive evaluation theory" which concentrates on the needs of competence and autonomy. The CET essentially states that social-contextual events like feedback and reinforcement can cause feelings of competence and therefore increase intrinsic motivation. However, feelings of competence will not increase intrinsic motivation if there is no sense of autonomy. In situations where choices, feelings, and opportunities are present, intrinsic motivation is increased because people feel a greater sense of autonomy. Offering people choices, responding to their feelings, and opportunities for self-direction have been reported to enhance intrinsic motivation via increased autonomy.
An advantage (relative to extrinsic motivation) is that intrinsic motivators can be long-lasting, self-sustaining, and satisfying. For this reason, efforts in education sometimes attempt to modify intrinsic motivation with the goal of promoting future student learning performance, creativity, and learning via long-term modifications in interests. Intrinsic motivation has been found to be hard to modify, and attempts to recruit existing intrinsic motivators require a non-trivially difficult individualized approach, identifying and making relevant the different motivators needed to motivate different students, possibly requiring additional skills and intrinsic motivation from the instructor. In a workplace situation, intrinsic motivation is likely to be rare and risks being falsely identified, as most workers will always be subject to extrinsic motivation such as the fear of unemployment, the need to gain a living and fear of rejection by coworkers in cases of poor performance.
Extrinsic motivation.
Extrinsic motivation comes from influences outside of the individual. With extrinsic motivation, the harder question to answer is where people get the motivation to carry on and continue to push with persistence. Usually, extrinsic motivation is used to attain outcomes that a person would not get from intrinsic motivation. Common extrinsic motivations are rewards (for example money or grades) for showing the desired behavior, and the threat of punishment following misbehavior. Competition is an extrinsic motivator because it encourages the performer to win and to beat others, not simply to enjoy the intrinsic rewards of the activity. A cheering crowd and the desire to win a trophy are also extrinsic incentives. For example, if an individual plays tennis to receive an award, that is extrinsic motivation, whereas if the individual plays because he or she enjoys the game, that is intrinsic motivation.
The simplest distinction between extrinsic and intrinsic motivation is the type of reasons or goals that lead to an action. While intrinsic motivation refers to doing something because it is inherently interesting, enjoyable and satisfying, extrinsic motivation refers to doing something because it leads to a separable outcome. Extrinsic motivation thus contrasts with intrinsic motivation, which is doing an activity simply for the enjoyment of the activity itself, instead of for its instrumental value.
Social psychological research has indicated that extrinsic rewards can lead to overjustification and a subsequent reduction in intrinsic motivation. In one study demonstrating this effect, children who expected to be (and were) rewarded with a ribbon and a gold star for drawing pictures spent less time playing with the drawing materials in subsequent observations than children who were assigned to an unexpected reward condition. This suggests that an individual who expects a reward may care less about the task itself than an individual who does not expect one. However, another study showed that third graders who were rewarded with a book showed more reading behavior in the future, implying that some rewards do not undermine intrinsic motivation. While the provision of extrinsic rewards might reduce the desirability of an activity, the use of extrinsic constraints, such as the threat of punishment, against performing an activity has actually been found to increase one's intrinsic interest in that activity. In one study, when children were given mild threats against playing with an attractive toy, it was found that the threat actually served to increase the child's interest in the toy, which was previously undesirable to the child in the absence of threat.
Advantages of extrinsic motivators are that they easily promote motivation to work and persist to goal completion. Rewards are tangible and beneficial. A disadvantage for extrinsic motivators relative to internal is that work does not persist long once external rewards are removed. As the task is completed for the reward, the quality of work may need to be monitored, and it has been suggested that extrinsic motivators may diminish in value over time.
Flow theory.
Flow theory refers to the desirable subjective state a person experiences when completely involved in some challenging activity that matches the individual's skills.
Mihaly Csikszentmihalyi described Flow theory as "A state in which people are so involved in an activity that nothing else seems to matter; the experience is so enjoyable that people will continue to do it even at great cost, for the sheer sake of doing it."
The idea of flow theory was first conceptualized by Csikszentmihalyi. In the context of motivation, flow can be seen as arising from an activity that is neither too hard, frustrating or maddening, nor too easy, boring and finished too quickly. If one has achieved perfect flow, then the activity has reached maximum potential.
Flow is a part of something called positive psychology, the psychology of happiness. Positive psychology looks into what makes a person happy. Flow can be considered as achieving happiness, or at the very least positive feelings. A study published in the journal "Emotion" looked at flow experienced by college students playing Tetris. The students, who believed their appearance was going to be evaluated, were told to wait and play Tetris. There were three conditions: easy, normal, and hard. The students who played Tetris at the normal level experienced flow and were less stressed about the evaluation.
Csikszentmihalyi describes eight characteristics of flow: complete concentration on the task; clarity of goals, reward in mind and immediate feedback; transformation of time (speeding up or slowing down); an experience that is intrinsically rewarding; effortlessness and ease; a balance between challenge and skills; merging of actions and awareness; and loss of self-conscious rumination together with a feeling of control over the task.
The activity no longer becomes something seen as a means to an end and it becomes something an individual wants to do. This can be seen as someone who likes to run for the sheer joy of running and not because they need to do it for exercise or because they want to brag about it. Peak flow can be different for each person. It could take an individual years to reach flow or only moments. If an individual becomes too good at an activity they can become bored. If the challenge becomes too hard then the individual could become discouraged and want to quit.
Behaviorist theories.
While many theories on motivation have a mentalistic perspective, behaviorists focus only on the observable behavior and the theories founded on experimental evidence. In the view of behaviorism, motivation is understood as a question about what factors cause, prevent, or withhold various behaviors, while the question of, for instance, conscious motivation would be ignored. Where others would speculate about such things as values, drives, or needs, that may not be observed directly, behaviorists are interested in the observable variables that affect the type, intensity, frequency, and duration of the observable behavior. Through the basic research of such scientists as Pavlov, Watson and Skinner, several basic mechanisms that govern behavior have been identified. The most important of these are classical conditioning and operant conditioning.
Classical and operant conditioning.
In classical (or respondent) conditioning, behavior is understood as responses triggered by certain environmental or physical stimuli. They can be "unconditioned", such as in-born reflexes, or learned through the pairing of an unconditioned stimulus with a different stimulus, which then becomes a conditioned stimulus. In relation to motivation, classical conditioning might be seen as one explanation as to why an individual performs certain responses and behaviors in certain situations. For instance, a dentist might wonder why a patient does not seem motivated to show up for an appointment, with the explanation being that the patient has associated the dentist (conditioned stimulus) with the pain (unconditioned stimulus) that elicits a fear response (conditioned response), leading to the patient being reluctant to visit the dentist.
In operant conditioning, the type and frequency of behavior are determined mainly by its consequences. If a certain behavior, in the presence of a certain stimulus, is followed by a desirable consequence (a reinforcer), the emitted behavior will increase in frequency in the future, in the presence of the stimulus that preceded the behavior (or a similar one). Conversely, if the behavior is followed by something undesirable (a punisher), the behavior is less likely to occur in the presence of the stimulus. In a similar manner, the removal of a stimulus directly following the behavior might either increase or decrease the frequency of that behavior in the future (negative reinforcement or punishment). For instance, a student who gained praise and a good grade after turning in a paper might seem more motivated to write papers in the future (positive reinforcement); if the same student put in a lot of work on a task without getting any praise for it, he or she might seem less motivated to do school work in the future (negative punishment). If a student who starts to cause trouble in class gets punished with something he or she dislikes, such as detention (positive punishment), that behavior would decrease in the future. The student might then seem more motivated to behave in class, presumably in order to avoid further detention (negative reinforcement).
The strength of reinforcement or punishment is dependent on schedule and timing. A reinforcer or punisher affects the future frequency of a behavior most strongly if it occurs within seconds of the behavior. A behavior that is reinforced intermittently, at unpredictable intervals, will be more robust and persistent, compared to the ones that are reinforced every time the behavior is performed. For example, if the misbehaving student in the above example was punished a week after the troublesome behavior, that might not affect future behavior.
In addition to these basic principles, environmental stimuli also affect behavior. Behavior is punished or reinforced in the context of whatever stimuli were present just before the behavior was performed, which means that a particular behavior might not be affected in every environmental context, or situation, after it is punished or reinforced in one specific context. A lack of praise for school-related behavior might, for instance, not decrease after-school sports-related behavior that is usually reinforced by praise.
The various mechanisms of operant conditioning may be used to understand the motivation for various behaviors by examining what happens just after the behavior (the consequence), in what context the behavior is performed or not performed (the antecedent), and under what circumstances (motivating operators).
Incentive motivation.
Incentive theory is a specific theory of motivation, derived partly from behaviorist principles of reinforcement, which concerns an incentive or motive to do something. The most common incentive is compensation. Compensation can be tangible or intangible; it helps in motivating employees in their corporate life and students in academics, and inspires them to do more and more to achieve profitability in every field. Studies show that if the person receives the reward immediately, the effect is greater, and decreases as the delay lengthens. A repetitive action–reward combination can cause the action to become a habit.
"Reinforcers and reinforcement principles of behavior differ from the hypothetical construct of reward." A reinforcer is anything that follows an action, with the intention that the action will now occur more frequently. From this perspective, the concept of distinguishing between intrinsic and extrinsic forces is irrelevant.
Incentive theory in psychology treats motivation and behavior of the individual as being influenced by beliefs, such as engaging in activities that are expected to be profitable. Incentive theory is promoted by behavioral psychologists such as B. F. Skinner. It is especially supported by Skinner in his philosophy of radical behaviorism, meaning that a person's actions always have social ramifications: if actions are positively received, people are more likely to act in this manner, and if negatively received, people are less likely to act in this manner.
Incentive theory distinguishes itself from other motivation theories, such as drive theory, in the direction of the motivation. In incentive theory, stimuli "attract" a person towards them, pulling the person towards the stimulus. In terms of behaviorism, incentive theory involves positive reinforcement: the reinforcing stimulus has been conditioned to make the person happier. This is opposed to drive theory, which involves negative reinforcement: a stimulus has been associated with the removal of the punishment, that is, the lack of homeostasis in the body. For example, a person has come to know that if they eat when hungry, it will eliminate the negative feeling of hunger, or if they drink when thirsty, it will eliminate the negative feeling of thirst.
Motivating operations.
Motivating operations (MOs) relate to the field of motivation in that they help improve the understanding of aspects of behavior that are not covered by operant conditioning. In operant conditioning, the function of the reinforcer is to influence "future behavior". The presence of a stimulus believed to function as a reinforcer does not, according to this terminology, explain the current behavior of an organism – only previous instances of reinforcement of that behavior (in the same or similar situations) do. Through the behavior-altering effect of MOs, it is possible to affect the current behavior of an individual, giving another piece of the puzzle of motivation.
Motivating operations are factors that affect learned behavior in a certain context. MOs have two effects: a value-altering effect, which increases or decreases the efficiency of a reinforcer, and a behavior-altering effect, which modifies learned behavior that has previously been punished or reinforced by a particular stimulus.
When a motivating operation causes an increase in the effectiveness of a reinforcer or amplifies a learned behavior in some way (such as increasing frequency, intensity, duration, or speed of the behavior), it functions as an establishing operation (EO). A common example of this would be food deprivation, which functions as an EO in relation to food: the food-deprived organism will perform behaviors previously related to the acquisition of food more intensely, frequently, longer, or faster in the presence of food, and those behaviors would be especially strongly reinforced. For instance, a fast-food worker earning minimum wage, forced to work more than one job to make ends meet, would be highly motivated by a pay raise, because of the current deprivation of money (a conditioned establishing operation). The worker would work hard to try to achieve the raise, and getting the raise would function as an especially strong reinforcer of work behavior.
Conversely, a motivating operation that causes a decrease in the effectiveness of a reinforcer, or diminishes a learned behavior related to the reinforcer, functions as an abolishing operation (AO). Again using the example of food, satiation of food prior to the presentation of a food stimulus would produce a decrease in food-related behaviors, and diminish or completely abolish the reinforcing effect of acquiring and ingesting the food. Consider the board of a large investment bank, concerned with a too-small profit margin, deciding to give the CEO a new incentive package in order to motivate him to increase firm profits. If the CEO already has a lot of money, the incentive package might not be a very good way to motivate him, because he would be satiated on money. Getting even more money would not be a strong reinforcer for profit-increasing behavior, and would not elicit increased intensity, frequency, or duration of profit-increasing behavior.
Motivation and psychotherapy.
Motivation lies at the core of many behaviorist approaches to psychological treatment. A person with autism spectrum disorder is seen as lacking motivation to perform socially relevant behaviors – social stimuli are not as reinforcing for people with autism compared to other people. Depression is understood as a lack of reinforcement (especially positive reinforcement) leading to the extinction of behavior in the depressed individual. A patient with a specific phobia is not motivated to seek out the phobic stimulus because it acts as a punisher, and is over-motivated to avoid it (negative reinforcement). Accordingly, therapies have been designed to address these problems, such as EIBI and CBT for major depression and specific phobia.
Socio-cultural theory.
Sociocultural theory (also known as social motivation) emphasizes the impact of activity and actions mediated through social interaction, and within social contexts. Sociocultural theory represents a shift from traditional theories of motivation, which view the individual's innate drives or mechanistic operant learning as primary determinants of motivation. Critical elements of sociocultural theory applied to motivation include, but are not limited to, the role of social interactions and the contributions from culturally based knowledge and practice. Sociocultural theory extends the social aspects of cognitive evaluation theory, which espouses the important role of positive feedback from others during action, but requires the individual as the internal locus of causality. Sociocultural theory predicts that motivation has an external locus of causality and is socially distributed among the social group.
Motivation can develop through an individual's involvement within their cultural group. Personal motivation often comes from activities a person believes to be central to the everyday occurrences in their community. An example of socio-cultural theory would be social settings where people work together to solve collective problems. Although individuals will have internalized goals, they will also develop internalized goals of others, as well as new interests and goals collectively with those that they feel socially connected to. Oftentimes, it is believed that all cultural groups are motivated in the same way. However, motivation can come from different child-rearing practices and cultural behaviors that greatly vary between cultural groups.
In some indigenous cultures, collaboration between children and adults in community and household tasks is seen as very important. A child from an indigenous community may spend a great deal of their time alongside family and community members doing different tasks and chores that benefit the community. After having seen the benefits of collaboration and work, and also having the opportunity to be included, the child will be intrinsically motivated to participate in similar tasks. In this example, because the adults in the community do not impose the tasks upon the children, the children feel self-motivated and have a desire to participate and learn through the task. As a result of the community values that surround the child, their source of motivation may vary according to the different communities and their different values.
In more Westernized communities, segregation between adults and children participating in work-related tasks is a common practice. As a result, these adolescents demonstrate less internalized motivation to do things within their environment than their parents. However, when the motivation to participate in activities is a prominent belief within the family, the adolescents' autonomy is significantly higher. This demonstrates that when collaboration and non-segregative tasks are norms within a child's upbringing, their internal motivation to participate in community tasks increases. When given opportunities to work collaboratively with adults on shared tasks during childhood, children will therefore become more intrinsically motivated through adulthood.
Social motivation is tied to one's activity in a group. It cannot form from a single mind alone. For example, bowling alone is naught but the dull act of throwing a ball into pins, and so people are much less likely to smile during the activity alone, even upon getting a strike because their satisfaction or dissatisfaction does not need to be communicated, and so it is internalized. However, when with a group, people are more inclined to smile regardless of their results because it acts as a positive communication that is beneficial for pleasurable interaction and teamwork. Thus the act of bowling becomes a social activity as opposed to a dull action because it becomes an exercise in interaction, competition, team building and sportsmanship. It is because of this phenomenon that studies have shown that people are more intrigued in performing mundane activities so long as there is company because it provides the opportunity to interact in one way or another, be it for bonding, amusement, collaboration, or alternative perspectives. Examples of activities that one may not be motivated to do alone but could be done with others for the social benefit are things such as throwing and catching a baseball with a friend, making funny faces with children, building a treehouse, and performing a debate.
Push and pull.
Push.
Push motivations are those where people push themselves towards their goals or to achieve something, such as the desire for escape, rest & relaxation, prestige, health & fitness, adventure, and social interaction.
However, with push motivation it is also easy to get discouraged when there are obstacles in the path of achievement. Push motivation acts as willpower, and people's willpower is only as strong as the desire behind it.
Additionally, a study has been conducted on social networking and its push and pull effects. One thing that is mentioned is "Regret and dissatisfaction correspond to push factors because regret and dissatisfaction are the negative factors that compel users to leave their current service provider." This shows that push motivations can also act as a negative force; in this case, that negative force is regret and dissatisfaction.
Pull.
Pull motivation is the opposite of push. It is a type of motivation that is much stronger. "Some of the factors are those that emerge as a result of the attractiveness of a destination as it is perceived by those with the propensity to travel. They include both tangible resources, such as beaches, recreation facilities, and cultural attractions, and traveler's perceptions and expectation, such as novelty, benefit expectation, and marketing image." Pull motivation can be seen as the desire to achieve a goal so badly that it seems that the goal is pulling us toward it. That is why pull motivation is stronger than push motivation. It is easier to be drawn to something rather than to push yourself for something you desire.
Pull motivation can also act as an alternative to a negative force. From the same study mentioned previously, "Regret and dissatisfaction with an existing SNS service provider may trigger a heightened interest toward switching service providers, but such a motive will likely translate into reality in the presence of a good alternative. Therefore, alternative attractiveness can moderate the effects of regret and dissatisfaction with switching intention". And so, pull motivation can be an attracting desire when negative influences come into the picture.
Self-control.
The self-control aspect of motivation is increasingly considered to be a subset of emotional intelligence; it is suggested that although a person may be classed as highly intelligent (as measured by many traditional intelligence tests), they may remain unmotivated to pursue intellectual endeavors. Vroom's "expectancy theory" provides an account of when people may decide to exert self-control in pursuit of a particular goal.
Drives.
A drive or desire can be described as a deficiency or need that activates behavior that is aimed at a goal or an incentive.<ref name="Drive/Desire"></ref> These drives are thought to originate within the individual and may not require external stimuli to encourage the behavior. Basic drives could be sparked by deficiencies such as hunger, which motivates a person to seek food whereas more subtle drives might be the desire for praise and approval, which motivates a person to behave in a manner pleasing to others.
Another basic drive is the sexual drive, which, like food, motivates us because it is essential to our survival. The desire for sex is wired deep into the brain of all human beings, as glands secrete hormones that travel through the blood to the brain and stimulate the onset of sexual desire. The hormone involved in the initial onset of sexual desire is called dehydroepiandrosterone (DHEA). The hormonal basis of both men's and women's sex drives is testosterone. Men naturally have more testosterone than women do and so are more likely than women to think about sex.
Drive-reduction theory.
Drive theory grows out of the concept that people have certain biological drives, such as hunger and thirst. As time passes, the strength of the drive increases if it is not satisfied (in this case by eating). Upon satisfying a drive, the drive's strength is reduced. Created by Clark Hull and further developed by Kenneth Spence, the theory became well known in the 1940s and 1950s. Many of the motivational theories that arose during the 1950s and 1960s were either based on Hull's original theory or were focused on providing alternatives to the drive-reduction theory, including Abraham Maslow's hierarchy of needs, which emerged as an alternative to Hull's approach.
Drive theory has some intuitive validity. For instance, when preparing food, the drive model appears to be compatible with sensations of rising hunger as the food is prepared, and, after the food has been consumed, a decrease in subjective hunger. There are several problems, however, that leave the validity of drive reduction open for debate.
Cognitive dissonance theory.
As suggested by Leon Festinger, cognitive dissonance occurs when an individual experiences some degree of discomfort resulting from an inconsistency between two cognitions: their views on the world around them and their own personal feelings and actions. For example, a consumer may seek to reassure themselves regarding a purchase, feeling that another decision may have been preferable. Their feeling that another purchase would have been preferable is inconsistent with their action of purchasing the item. The difference between their feelings and beliefs causes dissonance, so they seek to reassure themselves.
While not a theory of motivation, per se, the theory of cognitive dissonance proposes that people have a motivational drive to reduce dissonance. The cognitive miser perspective makes people want to justify things in a simple way in order to reduce the effort they put into cognition. They do this by changing their attitudes, beliefs, or actions, rather than facing the inconsistencies, because dissonance is a mental strain. Dissonance is also reduced by justifying, blaming, and denying. It is one of the most influential and extensively studied theories in social psychology.
Temporal motivation theory.
A recent approach in developing a broad, integrative theory of motivation is temporal motivation theory. Introduced in a 2006 "Academy of Management Review" article, it synthesizes the primary aspects of several other major motivational theories, including incentive theory, drive theory, need theory, self-efficacy and goal setting, into a single formulation. It simplifies the field of motivation and allows findings from one theory to be translated into the terms of another. Another journal article that helped to develop temporal motivation theory, "The Nature of Procrastination", received the American Psychological Association's George A. Miller award for an outstanding contribution to general science.
formula_0
where "Motivation" is the desire for a particular outcome, "Expectancy" or self-efficacy is the probability of success, "Value" is the reward associated with the outcome, "Impulsiveness" is the individual's sensitivity to delay and "Delay" is the time to realization.
Achievement motivation.
Achievement motivation is an integrative perspective based on the premise that performance motivation results from the way broad components of personality are directed towards performance. As a result, it includes a range of dimensions that are relevant to success at work but which are not conventionally regarded as being part of performance motivation. The emphasis on performance seeks to integrate formerly separate approaches, such as the need for achievement, with, for example, social motives like dominance. Personality is intimately tied to performance and achievement motivation, including such characteristics as tolerance for risk, fear of failure, and others.
Achievement motivation can be measured by the Achievement Motivation Inventory, which is based on this theory and assesses three factors (in 17 separate scales) relevant to vocational and professional success. This motivation has repeatedly been linked with adaptive motivational patterns, including working hard, a willingness to choose difficult learning tasks, and attributing success to effort.
Achievement motivation has been studied intensively by David C. McClelland, John W. Atkinson and their colleagues since the early 1950s. This type of motivation is a drive that is developed from an emotional state. One may feel the drive to achieve by striving for success and avoiding failure. In achievement motivation, one hopes to excel in what one does without dwelling on failures or the negatives. Their research showed that business managers who were successful demonstrated a high need to achieve no matter the culture.
There are three major characteristics of people who have a great need to achieve according to McClelland's research.
Cognitive theories.
Cognitive theories define motivation in terms of how people think about situations. Cognitive theories of motivation include goal-setting theory and expectancy theory.
Goal-setting theory.
Goal-setting theory is based on the idea that individuals have a drive to reach a clearly defined end state. Often, this end state is a reward in itself. A goal's efficiency is affected by three features: proximity, difficulty, and specificity. One common goal-setting methodology incorporates the SMART criteria, in which goals are: specific, measurable, attainable/achievable, relevant, and time-bound. Time management is an important aspect when regarding time as a contributing factor to goal achievement: having too much time allows for distraction and procrastination, which steer the subject's attention away from the original goal. An ideal goal should present a situation where the time between the beginning of the effort and the end state is short. With an overly restrictive time constraint, the subject could feel overwhelmed, which could deter them from achieving the goal because the amount of time provided is not sufficient or rational. This explains why some children are more motivated to learn how to ride a bike than to master algebra. A goal should be moderate, not too hard or too easy to complete.
Most people are not optimally motivated, as many want a challenge (which assumes some kind of insecurity about success). At the same time, people want to feel that there is a substantial probability that they will succeed. The goal should be objectively defined and understandable for the individual. Similarly to Maslow's hierarchy of needs, a larger end goal is easier to achieve if the subject has smaller, more attainable yet still challenging goals to achieve first in order to advance over a period of time. A classic example of a poorly specified goal is trying to motivate oneself to run a marathon without proper training. A smaller, more attainable goal is to first motivate oneself to take the stairs instead of an elevator, or to replace a stagnant activity, like watching television, with a mobile one, like spending time walking, and eventually working up to a jog.
Expectancy theory.
Expectancy theory was proposed by Victor H. Vroom in 1964. Expectancy theory explains the behavior process in which an individual selects a behavior option over another, and why/how this decision is made in relation to their goal.
The theory is commonly expressed by the following equation:
formula_1 or
formula_2
Procrastination.
Procrastination is the act of voluntarily postponing or delaying an intended course of action despite anticipating that one will be worse off because of the delay. While procrastination was once seen as a harmless habit, recent studies indicate otherwise. In a 1997 study conducted by Dianne Tice and William James Fellow Roy Baumeister at Case Western Reserve University, college students were rated on an established procrastination scale and their academic performance, stress, and health were tracked throughout the semester. While procrastinators experienced some initial benefit in the form of lower stress levels (presumably by putting off their work at first), they ultimately earned lower grades and reported higher levels of stress and illness.
Procrastination can be seen as a defense mechanism. Because it is less demanding to simply avoid a task than to face the possibility of failure, procrastinators choose the short-term gratification of delaying a task over the long-term uncertainty of undertaking it. Procrastination can also serve as a justification when the person ultimately has no choice but to undertake a task and performs below their standard. For example, a term paper could be seen as a daunting task. If the person puts it off until the night before, they can justify their poor score by telling themselves that they would have done better with more time. This kind of justification is extremely harmful and only helps to perpetuate the cycle of procrastination.
Over the years, scientists have determined that not all procrastination is the same. The first type is chronic procrastinators, who exhibit a combination of qualities from the other, more specialized types of procrastinators. "Arousal" types are usually self-proclaimed "pressure performers" and relish the exhilaration of completing tasks close to the deadline. "Avoider" types procrastinate to avoid the outcome of whatever task they are pushing back, whether it be a potential failure or success. "Avoider" types are usually very self-conscious and care deeply about other people's opinions. Lastly, "decisional" procrastinators avoid making decisions in order to protect themselves from the responsibility that follows the outcome of events.
Models of behavior change.
Social-cognitive models of behavior change include the constructs of motivation and volition. Motivation is seen as a process that leads to the forming of behavioral intentions. Volition is seen as a process that leads from intention to actual behavior. In other words, motivation and volition refer to goal setting and goal pursuit, respectively. Both processes require self-regulatory efforts. Several self-regulatory constructs are needed to operate in orchestration to attain goals. An example of such a motivational and volitional construct is perceived self-efficacy. Self-efficacy is supposed to facilitate the forming of behavioral intentions, the development of action plans, and the initiation of action. It can support the translation of intentions into action.
John W. Atkinson, David Birch and their colleagues developed the theory of "Dynamics of Action" to mathematically model change in behavior as a consequence of the interaction of motivation and associated tendencies toward specific actions. The theory posits that change in behavior occurs when the tendency for a new, unexpressed behavior becomes dominant over the tendency currently motivating action. In the theory, the strength of tendencies rises and falls as a consequence of internal and external stimuli (sources of instigation), inhibitory factors, and consummatory factors such as performing the action itself. In this theory, there are three causes responsible for behavior and change in behavior:
Thematic apperception test.
The Thematic Apperception Test (TAT) was developed by American psychologists Henry A. Murray and Christina D. Morgan at Harvard during the early 1930s. Their underlying goal was to test and discover the dynamics of personality such as internal conflict, dominant drives, and motives. Testing is conducted by asking the individual to tell a story, given 31 pictures from which they must choose ten to describe. To complete the assessment, each story created by the test subject must be carefully recorded and monitored to uncover underlying needs and patterns of reactions each subject perceives. After evaluation, two common methods of research, the Defense Mechanisms Manual (DMM) and Social Cognition and Object Relations (SCOR), are used to score each test subject on different dimensions of object and relational identification. From this, the underlying dynamics of each specific personality and specific motives and drives can be determined.
Attribution theory.
Attribution theory describes individuals' motivation to formulate explanatory attributions ("reasons") for events they experience, and how these beliefs affect their emotions and motivations. Attributions are predicted to alter behavior; for instance, attributing failure on a test to a lack of study might generate emotions of shame and motivate harder study. Important researchers include Fritz Heider and Bernard Weiner. Weiner's theory differentiates intrapersonal and interpersonal perspectives. The intrapersonal perspective includes self-directed thoughts and emotions that are attributed to the self. The interpersonal perspective includes beliefs about the responsibility of others and emotions directed at other people, for instance attributing blame to another individual.
Approach versus avoidance.
Approach motivation (i.e., incentive salience) can be defined as when a certain behavior or reaction to a situation/environment is rewarded or results in a positive or desirable outcome. In contrast, avoidance motivation (i.e., aversive salience) can be defined as when a certain behavior or reaction to a situation/environment is punished or results in a negative or undesirable outcome. Research suggests that, all else being equal, avoidance motivations tend to be more powerful than approach motivations. Because people expect losses to have more powerful emotional consequences than equal-size gains, they will take more risks to avoid a loss than to achieve a gain.
Conditioned taste aversion.
<templatestyles src="Template:Blockquote/styles.css" />
Conditioned taste aversion is the only type of conditioning that needs only one exposure, and the illness does not have to be caused by the specific food or drink for the aversion to form. Conditioned taste aversion can also arise from extenuating circumstances. An example is eating a rotten apple and immediately throwing up: afterwards it is hard to even be near an apple without feeling sick. Conditioned taste aversion can also come about through the mere association of two stimuli. Suppose one eats a peanut butter and jelly sandwich while also having the flu; eating the sandwich makes one feel nauseous, so one throws up, and now one cannot smell peanut butter without feeling queasy. Though eating the sandwich did not cause the vomiting, the two are still linked.
Unconscious Motivation.
In his book "A General Introduction to Psychoanalysis", Sigmund Freud explained his theory on the conscious-unconscious distinction. To explain this relationship, he used a two-room metaphor. The smaller of the two rooms is filled with a person's preconscious, which is the thoughts, emotions, and memories that are available to a person's consciousness. This room also houses a person's consciousness, which is the part of the preconscious that is the focus at that given time. Connected to the small room is a much larger room that houses a person's unconscious. This part of the mind is unavailable to a person's consciousness and consists of impulses and repressed thoughts. The door between these two rooms acts as the person's mental censor. Its job is to keep anxiety-inducing thoughts and socially unacceptable behaviors or desires out of the preconscious. Freud describes the event of a thought or impulse being denied at the door as repression, one of the many defense mechanisms. This process is supposed to protect the individual from any embarrassment that could come from acting on these impulses or thoughts that exist in the unconscious.
In terms of motivation, Freud argues that unconscious instinctual impulses can still have great influence on behavior even though the person is not aware of the source. When these instincts serve as a motive, the person is only aware of the goal of the motive, and not its actual source. He divides these instincts into sexual instincts, death instincts, and ego or self-preservation instincts. Sexual instincts are those that motivate humans to stay alive and ensure the continuation of mankind. On the other hand, Freud also maintains that humans have an inherent drive for self-destruction, the death instinct. Similar to the proverbial devil and angel on one's shoulders, the sexual instinct and the death instinct are constantly battling each other to be satisfied. The death instinct can be closely related to Freud's other concept, the id, which is our need to experience pleasure immediately, regardless of the consequences. The last type of instinct that contributes to motivation is the ego or self-preservation instinct. This instinct is geared towards assuring that a person feels validated in whatever behavior or thought they have. The mental censor, or door between the unconscious and preconscious, helps satisfy this instinct. For example, one may be sexually attracted to a person, due to their sexual instinct, but the self-preservation instinct prevents them from acting on this urge until they find that it is socially acceptable to do so. Quite similarly to his psychic theory that deals with the id, ego, and superego, Freud's theory of instincts highlights the interdependence of these three instincts. All three instincts serve as a system of checks and balances that controls which instincts are acted on and what behaviors are used to satisfy as many of them as possible at once.
Priming.
Priming is a phenomenon, often used as an experimental technique, whereby a specific stimulus sensitizes the subject to later presentation of a similar stimulus.
"Priming refers to an increased sensitivity to certain stimuli, resulting from prior exposure to related visual or audio messages. When an individual is exposed to the word "cancer," for example, and then offered the choice to smoke a cigarette, we expect that there is a greater probability that they will choose not to smoke as a result of the earlier exposure."
Priming can affect motivation in that we can be motivated to do things by an outside source.
Priming can be linked with mere exposure theory. People tend to like things that they have been exposed to before. Mere exposure theory is used by advertising companies to get people to buy their products. An example of this is seeing a picture of the product on a signboard and then buying that product later. If an individual is in a room with two strangers, they are more likely to gravitate towards the person that they occasionally pass on the street than towards the person that they have never seen before. An example of the use of mere exposure theory can be seen in product placements in movies and TV shows: we see a product in our favorite movie, and hence we are more inclined to buy that product when we see it again.
Priming can fit into several categories: semantic priming, visual priming, response priming, perceptual and conceptual priming, positive and negative priming, associative and context priming, and olfactory priming. Visual and semantic priming are the most used in motivation. Most priming is linked with emotion: the stronger the emotion, the stronger the connection between memory and the stimuli.
Priming also has an effect on drug users. In this case, it can be defined as the reinstatement or increase in drug craving by a small dose of the drug or by stimuli associated with the drug. If a former drug user is in a place where they formerly did drugs, then they are tempted to do that same thing again even if they have been clean for years.
Conscious Motivation.
Freud relied heavily upon theories of unconscious motivation as explained above, but Allport (1967) looked closely at the power of conscious motivation and the effect it can have upon the goals set by an individual. This is not to say that unconscious motivation should be ignored in this theory; instead, it focuses on the thought that if we are aware of our surroundings and our goals, we can then actively and consciously take steps towards them.
He also believed that there are three hierarchical tiers of personality traits that affect this motivation:
Mental Fatigue.
Mental fatigue is being tired, exhausted, or not functioning effectively. Not wanting to proceed further with the current mental course of action contrasts with physical fatigue, because in most cases no physical activity is involved. This is best seen in the workplace or in schools. A typical example of mental fatigue is seen in college students just before finals approach: one will notice that students start eating more than they usually do and care less about interactions with friends and classmates. Mental fatigue arises when an individual becomes involved in a complex task and, despite doing no physical activity, is still worn out; the reason for this is that the brain accounts for about 20 percent of the human body's metabolic rate. The brain consumes about 10.8 calories every hour, meaning that a typical adult human brain runs on about twelve watts of electricity, a fifth of the power needed to power a standard light bulb. These numbers represent an individual's brain working on routine tasks, things that are not challenging. One study suggests that after engaging in a complex task, an individual tends to consume about two hundred more calories than if they had been resting or relaxing; however, this appeared to be due to stress, not higher caloric expenditure.
The symptoms of mental fatigue can range from low motivation and loss of concentration to the more severe symptoms of headaches, dizziness, and impaired decision making and judgment. Mental fatigue can affect an individual's life by causing a lack of motivation, avoidance of friends and family members, and changes in one's mood. To treat mental fatigue, one must figure out what is causing the fatigue. Once the cause of the stress has been identified, the individual must determine what they can do about it. Most of the time mental fatigue can be fixed by a simple life change, like being more organized or learning to say no. According to the study "Mental fatigue caused by prolonged cognitive load associated with sympathetic hyperactivity", "there is evidence that decreased parasympathetic activity and increased relative sympathetic activity are associated with mental fatigue induced by a prolonged cognitive load in healthy adults." This means that though no physical activity was done, the sympathetic nervous system was triggered: an individual who is experiencing mental fatigue will not feel relaxed but will feel the physical symptoms of stress.
Learned Industriousness.
Learned industriousness theory concerns an acquired ability to sustain physical or mental effort. It can also be described as persisting despite accumulating subjective fatigue, the ability to push through to the end for a greater reward. The more significant or rewarding the incentive, the more the individual is willing to do to reach the end of a task. This is one of the reasons that college students go on to graduate school: the students may be worn out, but they are willing to go through more school for the reward of getting a higher-paying job when they are out of school.
Reversal Theory.
Reversal theory, first introduced by Dr. Michael Apter and Dr. Ken Smith in the 1970s, is a structural, phenomenological explanation of psychological states and their dynamic interplay. The theory contributes to an understanding of emotions and personality in which endogenous (cognitive) and exogenous (environmental) implications are considered.
The theory proposes eight meta-motivational states arranged into four pairs that drive and respond to all human experience. When a state is interrupted or satiated, one "reverses" to the other state in the pair (domain). Unlike many theories related to personality, reversal theory proposes that human behavior is better understood by studying dynamic states than by averaging behavior over time, as in trait theory.
Another distinction of reversal theory is its direct contrast with the Hebbian version of the Yerkes–Dodson law of arousal, which can be found in many forms of psychotherapy. Optimal arousal theory proposes that the most comfortable or desirable arousal level is not too high or too low. Reversal theory proposes in its principle of bistability that any level of arousal or stimulation may be found either desirable or undesirable depending on the meta-motivational state one is in.
Reversal theory has been academically supported and put to practical use in more than 30 fields (e.g., sports psychology, business, medical care, addiction, and stress) and in over 30 countries.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{Motivation} = \\frac{\\mbox{Expectancy × Value}}{\\mbox{1 + Impulsiveness × Delay}}"
},
{
"math_id": 1,
"text": "{\\text{M}}= {\\text{E}} \\times {\\text{I}} \\times {\\text{V}}"
},
{
"math_id": 2,
"text": "{\\text{Motivation}}= {\\text{Expectancy}} \\times {\\text{Instrumentality}} \\times {\\text{Valence}}"
}
] | https://en.wikipedia.org/wiki?curid=1034887 |
10349343 | Kantorovich theorem | About the convergence of Newton's method
The Kantorovich theorem, or Newton–Kantorovich theorem, is a mathematical statement on the semi-local convergence of Newton's method. It was first stated by Leonid Kantorovich in 1948. It is similar to the form of the Banach fixed-point theorem, although it states existence and uniqueness of a zero rather than a fixed point.
Newton's method constructs a sequence of points that under certain conditions will converge to a solution formula_0 of an equation formula_1 or a vector solution of a system of equations formula_2. The Kantorovich theorem gives conditions on the initial point of this sequence. If those conditions are satisfied then a solution exists close to the initial point and the sequence converges to that point.
Assumptions.
Let formula_3 be an open subset and formula_4 a differentiable function with a Jacobian formula_5 that is locally Lipschitz continuous (for instance if formula_6 is twice differentiable). That is, it is assumed that for any formula_7 there is an open subset formula_8 such that formula_9 and there exists a constant formula_10 such that for any formula_11
formula_12
holds. The norm on the left is the operator norm. In other words, for any vector formula_13 the inequality
formula_14
must hold.
Now choose any initial point formula_15. Assume that formula_16 is invertible and construct the Newton step formula_17
The next assumption is that not only the next point formula_18 but the entire ball formula_19 is contained inside the set formula_20. Let formula_21 be the Lipschitz constant for the Jacobian over this ball (assuming it exists).
As a last preparation, construct recursively, as long as it is possible, the sequences formula_22, formula_23, formula_24 according to
formula_25
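A minimal numerical sketch of this construction in Python/NumPy (the test system, the starting point and the Lipschitz bound M below are illustrative assumptions, not part of the theorem):

import numpy as np

def newton_kantorovich(F, J, x0, M, tol=1e-12, max_iter=50):
    # Newton iteration x_{k+1} = x_k + h_k with h_k = -J(x_k)^{-1} F(x_k);
    # alpha_k = M * ||J(x_k)^{-1}|| * ||h_k|| is the quantity from the recursion above.
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        Jinv = np.linalg.inv(J(x))
        h = -Jinv @ F(x)
        alpha = M * np.linalg.norm(Jinv, 2) * np.linalg.norm(h)
        if k == 0 and alpha > 0.5:
            print("warning: alpha_0 > 1/2, the Kantorovich condition is not verified")
        x = x + h
        if np.linalg.norm(h) < tol:
            break
    return x

# Illustrative system F(x, y) = (x^2 + y^2 - 1, x - y); its Jacobian is 2-Lipschitz, so M = 2.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [1.0, -1.0]])
print(newton_kantorovich(F, J, x0=[1.0, 0.5], M=2.0))  # converges to (1/sqrt(2), 1/sqrt(2))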
Statement.
Now if formula_26, then a solution formula_27 of the equation formula_28 exists inside the closed ball formula_29, and the Newton sequence started at formula_30 converges to that solution.
A statement that is more precise but slightly more difficult to prove uses the roots formula_31 of the quadratic polynomial
formula_32,
formula_33
and their ratio
formula_34
Then the solution formula_27 exists inside the closed ball formula_35 and is unique inside the bigger ball formula_36. Moreover, the convergence towards the solution is dominated by the convergence of the Newton iteration applied to the polynomial formula_37 towards its smallest root formula_38: if formula_39, then formula_40. Finally, quadratic convergence follows from the error estimate formula_41.
Corollary.
In 1986, Yamamoto proved that the error evaluations of the Newton method such as Doring (1969), Ostrowski (1971, 1973), Gragg–Tapia (1974), Potra–Ptak (1980), Miel (1981), and Potra (1984) can be derived from the Kantorovich theorem.
Generalizations.
There is a "q"-analog for the Kantorovich theorem. For other generalizations/variations, see Ortega & Rheinboldt (1970).
Applications.
Oishi and Tanabe claimed that the Kantorovich theorem can be applied to obtain reliable solutions of linear programming.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "f(x)=0"
},
{
"math_id": 2,
"text": "F(x)=0"
},
{
"math_id": 3,
"text": "X\\subset\\R^n"
},
{
"math_id": 4,
"text": "F:X \\subset \\R^n \\to\\R^n"
},
{
"math_id": 5,
"text": "F^{\\prime}(\\mathbf x)"
},
{
"math_id": 6,
"text": "F"
},
{
"math_id": 7,
"text": "x \\in X"
},
{
"math_id": 8,
"text": "U\\subset X"
},
{
"math_id": 9,
"text": "x \\in U"
},
{
"math_id": 10,
"text": "L>0"
},
{
"math_id": 11,
"text": "\\mathbf x,\\mathbf y\\in U"
},
{
"math_id": 12,
"text": "\\|F'(\\mathbf x)-F'(\\mathbf y)\\|\\le L\\;\\|\\mathbf x-\\mathbf y\\|"
},
{
"math_id": 13,
"text": "\\mathbf v\\in\\R^n"
},
{
"math_id": 14,
"text": "\\|F'(\\mathbf x)(\\mathbf v)-F'(\\mathbf y)(\\mathbf v)\\|\\le L\\;\\|\\mathbf x-\\mathbf y\\|\\,\\|\\mathbf v\\|"
},
{
"math_id": 15,
"text": "\\mathbf x_0\\in X"
},
{
"math_id": 16,
"text": "F'(\\mathbf x_0)"
},
{
"math_id": 17,
"text": "\\mathbf h_0=-F'(\\mathbf x_0)^{-1}F(\\mathbf x_0)."
},
{
"math_id": 18,
"text": "\\mathbf x_1=\\mathbf x_0+\\mathbf h_0"
},
{
"math_id": 19,
"text": "B(\\mathbf x_1,\\|\\mathbf h_0\\|)"
},
{
"math_id": 20,
"text": "X"
},
{
"math_id": 21,
"text": "M"
},
{
"math_id": 22,
"text": "(\\mathbf x_k)_k"
},
{
"math_id": 23,
"text": "(\\mathbf h_k)_k"
},
{
"math_id": 24,
"text": "(\\alpha_k)_k"
},
{
"math_id": 25,
"text": "\\begin{alignat}{2}\n\\mathbf h_k&=-F'(\\mathbf x_k)^{-1}F(\\mathbf x_k)\\\\[0.4em]\n\\alpha_k&=M\\,\\|F'(\\mathbf x_k)^{-1}\\|\\,\\|\\mathbf h_k\\|\\\\[0.4em]\n\\mathbf x_{k+1}&=\\mathbf x_k+\\mathbf h_k.\n\\end{alignat}"
},
{
"math_id": 26,
"text": "\\alpha_0\\le\\tfrac12"
},
{
"math_id": 27,
"text": "\\mathbf x^*"
},
{
"math_id": 28,
"text": "F(\\mathbf x^*)=0"
},
{
"math_id": 29,
"text": "\\bar B(\\mathbf x_1,\\|\\mathbf h_0\\|)"
},
{
"math_id": 30,
"text": "\\mathbf x_0"
},
{
"math_id": 31,
"text": "t^\\ast\\le t^{**}"
},
{
"math_id": 32,
"text": "\np(t)\n =\\left(\\tfrac12L\\|F'(\\mathbf x_0)^{-1}\\|^{-1}\\right)t^2\n -t+\\|\\mathbf h_0\\|\n"
},
{
"math_id": 33,
"text": "t^{\\ast/**}=\\frac{2\\|\\mathbf h_0\\|}{1\\pm\\sqrt{1-2\\alpha_0}}"
},
{
"math_id": 34,
"text": "\n\\theta\n =\\frac{t^*}{t^{**}}\n =\\frac{1-\\sqrt{1-2\\alpha_0}}{1+\\sqrt{1-2\\alpha_0}}.\n"
},
{
"math_id": 35,
"text": "\\bar B(\\mathbf x_1,\\theta\\|\\mathbf h_0\\|)\\subset\\bar B(\\mathbf x_0,t^*)"
},
{
"math_id": 36,
"text": "B(\\mathbf x_0,t^{*\\ast})"
},
{
"math_id": 37,
"text": "p(t)"
},
{
"math_id": 38,
"text": "t^\\ast"
},
{
"math_id": 39,
"text": "t_0=0,\\,t_{k+1}=t_k-\\tfrac{p(t_k)}{p'(t_k)}"
},
{
"math_id": 40,
"text": "\\|\\mathbf x_{k+p}-\\mathbf x_k\\|\\le t_{k+p}-t_k."
},
{
"math_id": 41,
"text": "\n \\|\\mathbf x_{n+1}-\\mathbf x^*\\|\n \\le \\theta^{2^n}\\|\\mathbf x_{n+1}-\\mathbf x_n\\|\n \\le\\frac{\\theta^{2^n}}{2^n}\\|\\mathbf h_0\\|.\n"
}
] | https://en.wikipedia.org/wiki?curid=10349343 |
1034969 | Levelling | Surveying technique
Levelling or leveling (American English; see spelling differences) is a branch of surveying, the object of which is to establish or verify or measure the height of specified points relative to a datum. It is widely used in geodesy and cartography to measure vertical position with respect to a vertical datum, and in construction to measure height differences of construction artifacts.
Optical levelling.
Optical levelling, also known as spirit levelling and differential levelling, employs an "optical level", which consists of a precision telescope with crosshairs and stadia marks. The cross hairs are used to establish the level point on the target, and the stadia allow range-finding; stadia are usually at ratios of 100:1, in which case one metre between the stadia marks on the "level staff" (or "rod") represents 100 metres from the target.
The complete unit is normally mounted on a "tripod", and the telescope can freely rotate 360° in a horizontal plane. The surveyor adjusts the instrument's level by coarse adjustment of the tripod legs and fine adjustment using three precision levelling screws on the instrument to make the rotational plane horizontal. The surveyor does this with the use of a "bull's eye level" built into the instrument mount.
Procedure.
The surveyor looks through the eyepiece of telescope while an assistant holds a vertical level staff which is graduated in inches or centimeters. The level staff is placed vertically using a level, with its foot on the point for which the level measurement is required. The telescope is rotated and focused until the level staff is plainly visible in the crosshairs. In the case of a high accuracy manual level, the fine level adjustment is made by an altitude screw, using a high accuracy bubble level fixed to the telescope. This can be viewed by a mirror whilst adjusting or the ends of the bubble can be displayed within the telescope, which also allows assurance of the accurate level of the telescope whilst the sight is being taken. However, in the case of an automatic level, altitude adjustment is done automatically by a suspended prism due to gravity, as long as the coarse levelling is accurate within certain limits. When level, the staff graduation reading at the crosshairs is recorded, and an identifying mark or marker placed where the level staff rested on the object or position being surveyed.
A typical procedure for a linear track of levels from a known datum is as follows. Set up the instrument within sighting distance of a point of known or assumed elevation. A rod or staff is held vertical on that point and the instrument is used manually or automatically to read the rod scale. This gives the height of the instrument above the starting (backsight) point and allows the height of the instrument (H.I.) above the datum to be computed.
The rod is then held on an unknown point and a reading is taken in the same manner, allowing the elevation of the new (foresight) point to be computed. The difference between these two readings equals the change in elevation, which is why this method is also called "differential levelling". The procedure is repeated until the destination point is reached. It is usual practice to perform either a complete loop back to the starting point or else close the traverse on a second point whose elevation is already known. The closure check guards against blunders in the operation, and allows residual error to be distributed in the most likely manner among the stations.
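A minimal Python sketch of this bookkeeping (the benchmark elevation and the rod readings are invented example values):

def run_levels(start_elevation, readings):
    # readings is a list of (backsight, foresight) pairs, one per instrument setup
    elevation = start_elevation
    for backsight, foresight in readings:
        hi = elevation + backsight    # height of instrument above the datum
        elevation = hi - foresight    # elevation of the new (foresight) point
    return elevation

# Two setups, readings in metres, starting from an assumed benchmark at 100.000 m
print(run_levels(100.000, [(1.234, 0.567), (1.890, 2.345)]))  # 100.212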
Some instruments provide three crosshairs which allow stadia measurement of the foresight and backsight distances. These also allow use of the average of the three readings (3-wire leveling) as a check against blunders and for averaging out the error of interpolation between marks on the rod scale.
The two main types of levelling are single-levelling as already described, and double-levelling (double-rodding). In double-levelling, a surveyor takes two foresights and two backsights and makes sure the difference between the foresights and the difference between the backsights are equal, thereby reducing the amount of error. Double-levelling costs twice as much as single-levelling.
Turning a level.
When using an optical level, the endpoint may be out of the effective range of the instrument. There may be obstructions or large changes of elevation between the endpoints. In these situations, extra setups are needed. Turning is a term used when referring to moving the level to take an elevation shot from a different location.
To "turn" the level, one must first take a reading and record the elevation of the point the rod is located on. While the rod is being kept in exactly the same location, the level is moved to a new location where the rod is still visible. A reading is taken from the new location of the level and the height difference is used to find the new elevation of the level gun. This is repeated until the series of measurements is completed.
The level must be horizontal to get a valid measurement. Because of this, if the horizontal crosshair of the instrument is lower than the base of the rod, the surveyor will not be able to sight the rod and get a reading. The rod can usually be raised up to 25 feet high, allowing the level to be set much higher than the base of the rod.
Trigonometric levelling.
The other standard method of levelling in construction and surveying is called trigonometric levelling, which is preferred when levelling "out" to a number of points from one stationary point. This is done by using a total station, or any other instrument to read the vertical, or zenith angle to the rod, and the change in elevation is calculated using trigonometric functions (see example below). At greater distances (typically 1,000 feet and greater), the curvature of the Earth, and the refraction of the instrument wave through the air must be taken into account in the measurements as well (see section below).
Ex: an instrument at Point A reading to a rod at Point B a zenith angle of < 88°15'22" (degrees, minutes, seconds of arc) and a slope distance of 305.50 feet not factoring rod or instrument height would be calculated thus:
cos(88°15'22")(305.5)≈ 9.30 ft.,
meaning an elevation change of approximately 9.30 feet between Points A and B. So if Point A is at 1,000 feet of elevation, then Point B would be at approximately 1,009.30 feet of elevation. Since zenith angles are measured from straight up (0°), a reading of less than 90 degrees (90° being horizontal) means the sight is looking uphill rather than downhill, and so gains elevation; the opposite holds for readings greater than 90 degrees.
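A sketch of the same calculation in Python (rod and instrument heights are ignored, as in the example above):

import math

def elevation_change(zenith_deg, zenith_min, zenith_sec, slope_distance):
    # The zenith angle is measured from straight up, so cos(zenith) times the
    # slope distance gives the vertical component of the sight.
    zenith = math.radians(zenith_deg + zenith_min / 60 + zenith_sec / 3600)
    return math.cos(zenith) * slope_distance

print(elevation_change(88, 15, 22, 305.50))  # ~9.30 ft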
Refraction and curvature.
The curvature of the earth means that a line of sight that is horizontal at the instrument will be higher and higher above a spheroid at greater distances. The effect may be insignificant for some work at distances under 100 meters. The increase in height of a straight line with distance D is:
formula_0
where R is the radius of the earth.
The line of sight is horizontal at the instrument, but is not a straight line because of atmospheric refraction. The change of air density with elevation causes the line of sight to bend toward the earth.
The combined correction for refraction and curvature is approximately:
formula_1 or formula_2
For precise work these effects need to be calculated and corrections applied. For most work it is sufficient to keep the foresight and backsight distances approximately equal so that the refraction and curvature effects cancel out. Refraction is generally the greatest source of error in leveling. For short level lines the effects of temperature and pressure are generally insignificant, but the effect of the temperature gradient "dT / dh" can lead to errors.
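A minimal Python sketch of the combined correction in both unit systems:

def curvature_and_refraction_m(distance_km):
    # Combined curvature-and-refraction correction in metres for a sight of the given length in km
    return 0.067 * distance_km ** 2

def curvature_and_refraction_ft(distance_ft):
    # The same correction in feet, with the sight distance expressed in feet
    return 0.021 * (distance_ft / 1000.0) ** 2

print(curvature_and_refraction_m(1.0))      # 0.067 m over a 1 km sight
print(curvature_and_refraction_ft(1000.0))  # 0.021 ft over a 1000 ft sight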
Levelling loops and gravity variations.
Assuming error-free measurements, if the Earth's gravity field were completely regular and gravity constant, leveling loops would always close precisely:
formula_3
around a loop. In the real gravity field of the Earth, this happens only approximately; on small loops typical of engineering projects, the loop closure is negligible, but on larger loops covering regions or continents it is not.
Instead of height differences, "geopotential differences" do close around loops:
formula_4
where formula_5 stands for gravity at the leveling interval "i". For precise leveling networks on a national scale, the latter formula should always be used.
formula_6
should be used in all computations, producing geopotential values formula_7 for the benchmarks of the network.
High precision levelling, especially when conducted over long distances as used for the establishment and maintenance of vertical datums, is called geodetic levelling.
Instruments.
Classical instruments.
The dumpy level was developed by English civil engineer William Gravatt while surveying the route of a proposed railway line from London to Dover. It is more compact and hence both more robust and easier to transport. It is commonly believed that dumpy levelling is less accurate than other types of levelling, but this is not the case. Dumpy levelling requires shorter and therefore more numerous sights, but this fault is compensated by the practice of making foresights and backsights equal.
Precise level designs were often used for large leveling projects where utmost accuracy was required. They differ from other levels in having a very precise spirit level tube and a micrometer adjustment to raise or lower the line of sight so that the crosshair can be made to coincide with a line on the rod scale and no interpolation is required.
Automatic level.
Automatic levels make use of a "compensator" that ensures that the line of sight remains horizontal once the operator has roughly leveled the instrument (to within maybe 0.05 degree). The compensator consists of small prisms suspended from wires inside of the level's chassis that are connected together in the shape of a pendulum. This allows for only horizontal light rays to enter, even in cases where the telescope of the instrument is not perfectly plumb.
The surveyor sets the instrument up quickly and does not have to re-level it carefully each time they sight on a rod on another point. It also reduces the effect of minor settling of the tripod to the actual amount of motion instead of leveraging the tilt over the sight distance. Because the level of the instrument only needs to be adjusted once per setup, the surveyor can quickly and easily read as many side-shots as necessary between turns. Three level screws are used to level the instrument, as opposed to the four screws historically found in dumpy levels.
Laser level.
Laser levels project a beam which is visible and/or detectable by a sensor on the leveling rod. This style is widely used in construction work but not for more precise control work. An advantage is that one person can perform the levelling independently, whereas other types require one person at the instrument and one holding the rod.
The sensor can be mounted on earth-moving machinery to allow automated grading.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt{D^2+R^2}-R\\approx\\frac{D^2}{2R}\\approx 0.0785\\text{ m}(D\\text{ in km})^2\\approx 0.0239\\text{ ft}(D/1000\\text{ ft})^2"
},
{
"math_id": 1,
"text": " \\Delta h_{meters} = 0.067 D_{km} ^2 "
},
{
"math_id": 2,
"text": " \\Delta h_{feet} = 0.021 \\left(\\frac {D_{ft}}{1000} \\right)^2 "
},
{
"math_id": 3,
"text": " \\sum_{i=0}^n \\Delta h_i = 0 "
},
{
"math_id": 4,
"text": " \\sum_{i=0}^n \\Delta h_i g_i, "
},
{
"math_id": 5,
"text": "g_i"
},
{
"math_id": 6,
"text": " \\Delta W_i = \\Delta h_i g_i\\ "
},
{
"math_id": 7,
"text": "W_i"
}
] | https://en.wikipedia.org/wiki?curid=1034969 |
1035039 | Smooth number | Integer having only small prime factors
In number theory, an n"-smooth (or n"-friable) number is an integer whose prime factors are all less than or equal to "n". For example, a 7-smooth number is a number in which every prime factor is at most 7. Therefore, 49 = 72 and 15750 = 2 × 32 × 53 × 7 are both 7-smooth, while 11 and 702 = 2 × 33 × 13 are not 7-smooth. The term seems to have been coined by Leonard Adleman. Smooth numbers are especially important in cryptography, which relies on factorization of integers. 2-smooth numbers are simply the powers of 2, while 5-smooth numbers are also known as regular numbers.
Definition.
A positive integer is called B-smooth if none of its prime factors are greater than B. For example, 1,620 has prime factorization 2^2 × 3^4 × 5; therefore 1,620 is 5-smooth because none of its prime factors are greater than 5. This definition includes numbers that lack some of the smaller prime factors; for example, both 10 and 12 are 5-smooth, even though they miss out the prime factors 3 and 5, respectively. All 5-smooth numbers are of the form 2^"a" × 3^"b" × 5^"c", where "a", "b" and "c" are non-negative integers.
The 3-smooth numbers have also been called "harmonic numbers", although that name has other more widely used meanings.
5-smooth numbers are also called regular numbers or Hamming numbers; 7-smooth numbers are also called humble numbers, and sometimes called "highly composite", although this conflicts with another meaning of highly composite numbers.
Here, note that B itself is not required to appear among the factors of a B-smooth number. If the largest prime factor of a number is p then the number is B-smooth for any B ≥ p. In many scenarios B is prime, but composite numbers are permitted as well. A number is B-smooth if and only if it is p-smooth, where p is the largest prime less than or equal to B.
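A direct way to test the definition, sketched in Python using trial division:

def is_smooth(n, B):
    # Returns True if every prime factor of n is at most B (1 is vacuously B-smooth).
    # Dividing by composite trial divisors is harmless: their prime factors are already removed.
    for p in range(2, B + 1):
        while n % p == 0:
            n //= p
    return n == 1

print([n for n in range(1, 50) if is_smooth(n, 7)])  # the 7-smooth numbers below 50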
Applications.
An important practical application of smooth numbers is in the fast Fourier transform (FFT) algorithms (such as the Cooley–Tukey FFT algorithm), which operate by recursively breaking down a problem of a given size "n" into problems the size of its factors. By using "B"-smooth numbers, one ensures that the base cases of this recursion are small primes, for which efficient algorithms exist. (Large prime sizes require less-efficient algorithms such as Bluestein's FFT algorithm.)
5-smooth or regular numbers play a special role in Babylonian mathematics. They are also important in music theory (see Limit (music)), and the problem of generating these numbers efficiently has been used as a test problem for functional programming.
Smooth numbers have a number of applications to cryptography. While most applications center around cryptanalysis (e.g. the fastest known integer factorization algorithms, for example: General number field sieve algorithm), the VSH hash function is another example of a constructive use of smoothness to obtain a provably secure design.
Distribution.
Let formula_0 denote the number of "y"-smooth integers less than or equal to "x" (the de Bruijn function).
If the smoothness bound "B" is fixed and small, there is a good estimate for formula_1:
formula_2
where formula_3 denotes the number of primes less than or equal to formula_4.
Otherwise, define the parameter "u" as "u" = log "x" / log "y": that is, "x" = "y""u". Then,
formula_5
where formula_6 is the Dickman function.
For any "k", almost all natural numbers will not be "k"-smooth.
If formula_7 where formula_8 is formula_4-smooth and formula_9 is not (or is equal to 1), then formula_8 is called the formula_4-smooth part of formula_10. The relative size of the formula_11-smooth part of a random integer less than or equal to formula_12 is known to decay much more slowly than formula_6.
Powersmooth numbers.
Further, "m" is called "n"-powersmooth (or "n"-ultrafriable) if all prime "powers" formula_13 dividing "m" satisfy:
formula_14
For example, 720 (2^4 × 3^2 × 5^1) is 5-smooth but not 5-powersmooth (because there are several prime powers greater than 5, "e.g." formula_15 and formula_16). It is 16-powersmooth since its greatest prime factor power is 2^4 = 16. The number is also 17-powersmooth, 18-powersmooth, etc.
Unlike "n"-smooth numbers, for any positive integer "n" there are only finitely many "n"-powersmooth numbers, in fact, the "n"-powersmooth numbers are exactly the positive divisors of “the least common multiple of 1, 2, 3, …, "n"” (sequence in the OEIS), e.g. the 9-powersmooth numbers (also the 10-powersmooth numbers) are exactly the positive divisors of 2520.
"n"-smooth and "n"-powersmooth numbers have applications in number theory, such as in Pollard's "p" − 1 algorithm and ECM. Such applications are often said to work with "smooth numbers," with no "n" specified; this means the numbers involved must be "n"-powersmooth, for some unspecified small number "n. A"s "n" increases, the performance of the algorithm or method in question degrades rapidly. For example, the Pohlig–Hellman algorithm for computing discrete logarithms has a running time of O("n"1/2)—for groups of "n"-smooth order.
Smooth over a set "A".
Moreover, "m" is said to be smooth over a set "A" if there exists a factorization of "m" where the factors are powers of elements in "A". For example, since 12 = 4 × 3, 12 is smooth over the sets "A"1 = {4, 3}, "A"2 = {2, 3}, and formula_17, however it would not be smooth over the set "A"3 = {3, 5}, as 12 contains the factor 4 = 22, and neither 4 nor 2 are in "A"3.
Note the set "A" does not have to be a set of prime factors, but it is typically a proper subset of the primes as seen in the factor base of Dixon's factorization method and the quadratic sieve. Likewise, it is what the general number field sieve uses to build its notion of smoothness, under the homomorphism formula_18.
External links.
The On-Line Encyclopedia of Integer Sequences (OEIS)
lists "B"-smooth numbers for small "B"s: | [
{
"math_id": 0,
"text": " \\Psi(x,y)"
},
{
"math_id": 1,
"text": "\\Psi(x,B)"
},
{
"math_id": 2,
"text": " \\Psi(x,B) \\sim \\frac{1}{\\pi(B)!} \\prod_{p\\le B}\\frac{\\log x}{\\log p}. "
},
{
"math_id": 3,
"text": "\\pi(B)"
},
{
"math_id": 4,
"text": "B"
},
{
"math_id": 5,
"text": " \\Psi(x,y) = x\\cdot \\rho(u) + O\\left(\\frac{x}{\\log y}\\right)"
},
{
"math_id": 6,
"text": "\\rho(u)"
},
{
"math_id": 7,
"text": "n=n_1 n_2"
},
{
"math_id": 8,
"text": "n_1"
},
{
"math_id": 9,
"text": "n_2"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "x^{1/u}"
},
{
"math_id": 12,
"text": "x"
},
{
"math_id": 13,
"text": "p^{\\nu}"
},
{
"math_id": 14,
"text": "p^{\\nu} \\leq n.\\,"
},
{
"math_id": 15,
"text": "3^2 = 9 \\nleq 5"
},
{
"math_id": 16,
"text": "2^4 = 16 \\nleq 5"
},
{
"math_id": 17,
"text": "\\mathbb{Z}"
},
{
"math_id": 18,
"text": "\\phi:\\mathbb{Z}[\\theta]\\to\\mathbb{Z}/n\\mathbb{Z}"
}
] | https://en.wikipedia.org/wiki?curid=1035039 |
1035054 | Gunning fog index | Readability test for English writing
In linguistics, the Gunning fog index is a readability test for English writing. The index estimates the years of formal education a person needs to understand the text on the first reading. For instance, a fog index of 12 requires the reading level of a United States high school senior (around 18 years old). The test was developed in 1952 by Robert Gunning, an American businessman who had been involved in newspaper and textbook publishing.
The fog index is commonly used to confirm that text can be read easily by the intended audience. Texts for a wide audience generally need a fog index less than 12. Texts requiring near-universal understanding generally need an index less than 8.
Calculation.
The Gunning fog index is calculated with the following algorithm: select a passage (such as one or more full paragraphs) of around 100 words without omitting any sentences; determine the average sentence length by dividing the number of words by the number of sentences; count the "complex" words consisting of three or more syllables, excluding proper nouns, familiar jargon, and compound words, and not counting common suffixes (such as -es, -ed, or -ing) as syllables; add the average sentence length and the percentage of complex words; and multiply the result by 0.4.
The complete formula is:
formula_0
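A rough Python sketch of the formula above (the syllable counter is a crude heuristic and does not implement all of Gunning's exclusions, such as proper nouns and compound words):

import re

def count_syllables(word):
    # Crude heuristic: count vowel groups after stripping a common -es/-ed/-ing suffix.
    word = re.sub(r"(es|ed|ing)$", "", word.lower())
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def gunning_fog(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences) + 100.0 * len(complex_words) / len(words))

print(round(gunning_fog("The index estimates the years of formal education a person "
                        "needs to understand the text on the first reading."), 1))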
Limitations.
While the fog index is a good sign of hard-to-read text, it has limits. Not all complex words are difficult. For example, "interesting" is not generally thought to be a difficult word, although it has three syllables (after omitting the common -ing suffix). A short word can be difficult if it is not used very often by most people. The frequency with which words are in normal use affects the readability of text.
Until the 1980s, the fog index was calculated differently. The original formula counted each clause as a sentence. Because the index was meant to measure clarity of expression within sentences, it assumed people saw each clause as a complete thought.
In the 1980s, this step was left out in counting the fog index "for literature". This might have been because it had to be done manually. Judith Bogert of Pennsylvania State University defended the original algorithm in 1985. A review of subsequent literature shows that the newer method is generally recommended.
Nevertheless, some continue to point out that a series of simple, short sentences does not mean that the reading is easier. In some works, such as Gibbon's "The History of the Decline and Fall of the Roman Empire", the fog scores using the old and revised algorithms differ greatly. A sample test took a random footnote from the text: (#51: Dion, vol. I. lxxix. p. 1363. Herodian, l. v. p. 189.) and used an automated Gunning Fog calculator, first using the sentence count, and then the count of sentences plus clauses. The calculator gave an index of 19.2 using only sentences, and an index of 12.5 when including independent clauses. This brought down the fog index from post-graduate to high school level.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n0.4\\left[ \\left(\\frac{\\mbox{words}}{\\mbox{sentences}}\\right) + 100\\left(\\frac{\\mbox{complex words}}{\\mbox{words}}\\right) \\right]\n"
}
] | https://en.wikipedia.org/wiki?curid=1035054 |
1035267 | Unrolled linked list | In computer programming, an unrolled linked list is a variation on the linked list which stores multiple elements in each node. It can dramatically increase cache performance, while decreasing the memory overhead associated with storing list metadata such as references. It is related to the B-tree.
Overview.
A typical unrolled linked list node looks like this:
record node {
    node next          // reference to next node in list
    int numElements    // number of elements in this node, up to maxElements
    array elements     // an array of numElements elements,
                       // with space allocated for maxElements elements
}
Each node holds up to a certain maximum number of elements, typically just large enough so that the node fills a single cache line or a small multiple thereof. A position in the list is indicated by both a reference to the node and a position in the elements array. It is also possible to include a "previous" pointer for an unrolled doubly linked list.
To insert a new element, we find the node the element should be in and insert the element into the codice_0 array, incrementing codice_1. If the array is already full, we first insert a new node either preceding or following the current one and move half of the elements in the current node into it.
To remove an element, we find the node it is in and delete it from the codice_0 array, decrementing codice_1. If this reduces the node to less than half-full, then we move elements from the next node to fill it back up above half. If this leaves the next node less than half full, then we move all its remaining elements into the current node, then bypass and delete it.
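A minimal Python sketch of the node structure and of the insertion step described above (deletion and the half-full rebalancing are omitted for brevity; the maximum node size defaults to 4 purely for illustration):

class Node:
    def __init__(self):
        self.next = None
        self.elements = []   # holds up to maxElements items

class UnrolledLinkedList:
    def __init__(self, max_elements=4):
        self.head = Node()
        self.max_elements = max_elements

    def insert(self, index, value):
        node, prev_count = self.head, 0
        # Walk to the node that should contain position `index`.
        while index - prev_count > len(node.elements) and node.next is not None:
            prev_count += len(node.elements)
            node = node.next
        if len(node.elements) == self.max_elements:
            # Node is full: create a new node and move half of the elements into it.
            new_node = Node()
            half = self.max_elements // 2
            new_node.elements = node.elements[half:]
            node.elements = node.elements[:half]
            new_node.next = node.next
            node.next = new_node
            if index - prev_count > half:
                prev_count += half
                node = new_node
        node.elements.insert(index - prev_count, value)

    def __iter__(self):
        node = self.head
        while node is not None:
            yield from node.elements
            node = node.next

lst = UnrolledLinkedList()
for i, ch in enumerate("abcdefgh"):
    lst.insert(i, ch)
print(list(lst))  # ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']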
Performance.
One of the primary benefits of unrolled linked lists is decreased storage requirements. All nodes (except at most one) are at least half-full. If many random inserts and deletes are done, the average node will be about three-quarters full, and if inserts and deletes are only done at the beginning and end, almost all nodes will be full. Assume that "m" is the maximum number of elements in each node, "v" is the per-node overhead (such as the "next" reference and the element count), and "s" is the size of a single element.
Then, the space used for "n" elements varies between formula_0 and formula_1. For comparison, ordinary linked lists require formula_2 space, although "v" may be smaller, and arrays, one of the most compact data structures, require formula_3 space. Unrolled linked lists effectively spread the overhead "v" over a number of elements of the list. Thus, we see the most significant space gain when overhead is large, codice_4 is large, or elements are small.
If the elements are particularly small, such as bits, the overhead can be as much as 64 times larger than the data on many machines. Moreover, many popular memory allocators will keep a small amount of metadata for each node allocated, increasing the effective overhead "v". Both of these make unrolled linked lists more attractive.
Because unrolled linked list nodes each store a count next to the "next" field, retrieving the "k"th element of an unrolled linked list (indexing) can be done in "n"/"m" + 1 cache misses, up to a factor of "m" better than ordinary linked lists. Additionally, if the size of each element is small compared to the cache line size, the list can be traversed in order with fewer cache misses than ordinary linked lists. In either case, operation time still increases linearly with the size of the list.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(v/m + s)n"
},
{
"math_id": 1,
"text": "(2v/m + s)n"
},
{
"math_id": 2,
"text": "(v + s)n"
},
{
"math_id": 3,
"text": "sn"
}
] | https://en.wikipedia.org/wiki?curid=1035267 |
10353119 | Realization (systems) | In systems theory, a realization of a state space model is an implementation of a given input-output behavior. That is, given an input-output relationship, a realization is a quadruple of (time-varying) matrices formula_0 such that
formula_1
formula_2
with formula_3 describing the input and output of the system at time formula_4.
LTI System.
For a linear time-invariant system specified by a transfer matrix, formula_5, a realization is any quadruple of matrices formula_6 such that formula_7.
Canonical realizations.
Any given transfer function which is strictly proper can easily be converted into state-space form by the following approach (this example is for a 4-dimensional, single-input, single-output system):
Given a transfer function, expand it to reveal all coefficients in both the numerator and denominator. This should result in the following form:
formula_8.
The coefficients can now be inserted directly into the state-space model by the following approach:
formula_9
formula_10.
This state-space realization is called controllable canonical form (also known as phase variable canonical form) because the resulting model is guaranteed to be controllable (i.e., because the control enters a chain of integrators, it has the ability to move every state).
The transfer function coefficients can also be used to construct another type of canonical form
formula_11
formula_12.
This state-space realization is called observable canonical form because the resulting model is guaranteed to be observable (i.e., because the output exits from a chain of integrators, every state has an effect on the output).
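A sketch in Python/NumPy that builds the controllable canonical form from the coefficients above and checks it against the transfer function at one test frequency (the numerical coefficients are arbitrary example values):

import numpy as np

def controllable_canonical(num, den):
    # num = [n3, n2, n1, n0], den = [d3, d2, d1, d0] for a strictly proper 4th-order system
    A = np.vstack([-np.atleast_2d(den), np.hstack([np.eye(3), np.zeros((3, 1))])])
    B = np.array([[1.0], [0.0], [0.0], [0.0]])
    C = np.atleast_2d(num)
    return A, B, C

def transfer(A, B, C, s):
    # Evaluates C (sI - A)^{-1} B at the complex frequency s
    return (C @ np.linalg.inv(s * np.eye(A.shape[0]) - A) @ B)[0, 0]

num, den = [1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]
A, B, C = controllable_canonical(num, den)
s = 1.0j
direct = (num[0]*s**3 + num[1]*s**2 + num[2]*s + num[3]) / (s**4 + den[0]*s**3 + den[1]*s**2 + den[2]*s + den[3])
print(np.isclose(transfer(A, B, C, s), direct))  # True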
General System.
"D" = 0.
If we have an input formula_13, an output formula_14, and a weighting pattern formula_15 then a realization is any triple of matrices formula_16 such that formula_17 where formula_18 is the state-transition matrix associated with the realization.
System identification.
System identification techniques take the experimental data from a system and output a realization. Such techniques can utilize both input and output data (e.g. eigensystem realization algorithm) or can only include the output data (e.g. frequency domain decomposition). Typically an input-output technique would be more accurate, but the input data is not always available.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[A(t),B(t),C(t),D(t)]"
},
{
"math_id": 1,
"text": "\\dot{\\mathbf{x}}(t) = A(t) \\mathbf{x}(t) + B(t) \\mathbf{u}(t)"
},
{
"math_id": 2,
"text": "\\mathbf{y}(t) = C(t) \\mathbf{x}(t) + D(t) \\mathbf{u}(t)"
},
{
"math_id": 3,
"text": "(u(t),y(t))"
},
{
"math_id": 4,
"text": "t"
},
{
"math_id": 5,
"text": " H(s) "
},
{
"math_id": 6,
"text": " (A,B,C,D) "
},
{
"math_id": 7,
"text": " H(s) = C(sI-A)^{-1}B+D"
},
{
"math_id": 8,
"text": " H(s) = \\frac{n_{3}s^{3} + n_{2}s^{2} + n_{1}s + n_{0}}{s^{4} + d_{3}s^{3} + d_{2}s^{2} + d_{1}s + d_{0}}"
},
{
"math_id": 9,
"text": "\\dot{\\textbf{x}}(t) = \\begin{bmatrix}\n -d_{3}& -d_{2}& -d_{1}& -d_{0}\\\\\n 1& 0& 0& 0\\\\\n 0& 1& 0& 0\\\\\n 0& 0& 1& 0\n \\end{bmatrix}\\textbf{x}(t) + \n \\begin{bmatrix} 1\\\\ 0\\\\ 0\\\\ 0\\\\ \\end{bmatrix}\\textbf{u}(t)"
},
{
"math_id": 10,
"text": " \\textbf{y}(t) = \\begin{bmatrix} n_{3}& n_{2}& n_{1}& n_{0} \\end{bmatrix}\\textbf{x}(t)"
},
{
"math_id": 11,
"text": "\\dot{\\textbf{x}}(t) = \\begin{bmatrix}\n -d_{3}& 1& 0& 0\\\\\n -d_{2}& 0& 1& 0\\\\\n -d_{1}& 0& 0& 1\\\\\n -d_{0}& 0& 0& 0\n \\end{bmatrix}\\textbf{x}(t) + \n \\begin{bmatrix} n_{3}\\\\ n_{2}\\\\ n_{1}\\\\ n_{0} \\end{bmatrix}\\textbf{u}(t)"
},
{
"math_id": 12,
"text": " \\textbf{y}(t) = \\begin{bmatrix} 1& 0& 0& 0 \\end{bmatrix}\\textbf{x}(t)"
},
{
"math_id": 13,
"text": "u(t)"
},
{
"math_id": 14,
"text": "y(t)"
},
{
"math_id": 15,
"text": "T(t,\\sigma)"
},
{
"math_id": 16,
"text": "[A(t),B(t),C(t)]"
},
{
"math_id": 17,
"text": "T(t,\\sigma) = C(t) \\phi(t,\\sigma) B(\\sigma)"
},
{
"math_id": 18,
"text": "\\phi"
}
] | https://en.wikipedia.org/wiki?curid=10353119 |
103533 | Analogy | Cognitive process of transferring information or meaning from a particular subject to another
Analogy is a comparison or correspondence between two things (or two groups of things) because of a third element that they are considered to share.
In logic, it is an inference or an argument from one particular to another particular, as opposed to deduction, induction, and abduction. The term is also used for arguments in which at least one of the premises, or the conclusion, is general rather than particular in nature. It has the general form "A is to B as C is to D".
In a broader sense, analogical reasoning is a cognitive process of transferring some information or meaning of a particular subject (the analog, or source) onto another (the target); and also the linguistic expression corresponding to such a process. The term analogy can also refer to the relation between the source and the target themselves, which is often (though not always) a similarity, as in the biological notion of analogy.
Analogy plays a significant role in human thought processes. It has been argued that analogy lies at "the core of cognition".
Etymology.
The English word "analogy" derives from the Latin "analogia", itself derived from the Greek "ἀναλογία", "proportion", from "ana-" "upon, according to" [also "again", "anew"] + "logos" "ratio" [also "word, speech, reckoning"].
Models and theories.
Analogy plays a significant role in problem solving, as well as decision making, argumentation, perception, generalization, memory, creativity, invention, prediction, emotion, explanation, conceptualization and communication. It lies behind basic tasks such as the identification of places, objects and people, for example, in face perception and facial recognition systems. Hofstadter has argued that analogy is "the core of cognition".
An analogy is not a figure of speech but a kind of thought. Specific analogical language uses exemplification, comparisons, metaphors, similes, allegories, and parables, but "not" metonymy. Phrases like "and so on", "and the like", "as if", and the very word "like" also rely on an analogical understanding by the receiver of a message including them. Analogy is important not only in ordinary language and common sense (where proverbs and idioms give many examples of its application) but also in science, philosophy, law and the humanities.
The concepts of association, comparison, correspondence, mathematical and morphological homology, homomorphism, iconicity, isomorphism, metaphor, resemblance, and similarity are closely related to analogy. In cognitive linguistics, the notion of conceptual metaphor may be equivalent to that of analogy. Analogy is also a basis for any comparative arguments as well as experiments whose results are transmitted to objects that have been not under examination (e.g., experiments on rats when results are applied to humans).
Analogy has been studied and discussed since classical antiquity by philosophers, scientists, theologists and lawyers. The last few decades have shown a renewed interest in analogy, most notably in cognitive science.
Development.
Cajetan named several kinds of analogy that had been used but previously unnamed, particularly:
Identity of relation.
In ancient Greek the word "αναλογια" ("analogia") originally meant proportionality, in the mathematical sense, and it was indeed sometimes translated to Latin as "proportio". Analogy was understood as identity of relation between any two ordered pairs, whether of mathematical nature or not.
Analogy and abstraction are different cognitive processes, and analogy is often an easier one. An analogy of this kind is not comparing "all" the properties between a hand and a foot, but rather comparing the "relationship" between a hand and its palm to that between a foot and its sole. While a hand and a foot have many dissimilarities, the analogy focuses on their similarity in having an inner surface.
The same notion of analogy was used in the US-based SAT college admission tests, which included "analogy questions" in the form "A is to B as C is to "what"?" For example, "Hand is to palm as foot is to ____?" These questions were usually given in the Aristotelian format: HAND : PALM : : FOOT : ____. While most competent English speakers will immediately give the right answer to the analogy question ("sole"), it is more difficult to identify and describe the exact relation that holds both between pairs such as "hand" and "palm", and between "foot" and "sole". This relation is not apparent in some lexical definitions of "palm" and "sole", where the former is defined as "the inner surface of the hand", and the latter as "the underside of the foot".
Kant's "Critique of Judgment" held to this notion of analogy, arguing that there can be exactly the same relation between two completely different objects.
Shared abstraction.
Greek philosophers such as Plato and Aristotle used a wider notion of analogy. They saw analogy as a shared abstraction. Analogous objects did not necessarily share a relation; they could also share an idea, a pattern, a regularity, an attribute, an effect or a philosophy. These authors also accepted that comparisons, metaphors and "images" (allegories) could be used as arguments, and sometimes they called them "analogies". Analogies should also make those abstractions easier to understand and give confidence to those who use them.
James Francis Ross in "Portraying Analogy" (1982), the first substantive examination of the topic since Cajetan's "De Nominum Analogia", demonstrated that analogy is a systematic and universal feature of natural languages, with identifiable and law-like characteristics which explain how the meanings of words in a sentence are interdependent.
Special case of induction.
On the contrary, Ibn Taymiyya, Francis Bacon and later John Stuart Mill argued that analogy is simply a special case of induction. In their view analogy is an inductive inference from common known attributes to another probable common attribute, which is known about only in the source of the analogy, in the following form:
"a" is C, D, E, F, G
"b" is C, D, E, F
"b" is probably G.
Shared structure.
Contemporary cognitive scientists use a wide notion of analogy, extensionally close to that of Plato and Aristotle, but framed by Gentner's (1983) structure mapping theory. The same idea of mapping between source and target is used by conceptual metaphor and conceptual blending theorists. Structure mapping theory concerns both psychology and computer science. According to this view, analogy depends on the mapping or alignment of the elements of source and target. The mapping takes place not only between objects, but also between relations of objects and between relations of relations. The whole mapping yields the assignment of a predicate or a relation to the target. Structure mapping theory has been applied and has found considerable confirmation in psychology. It has had reasonable success in computer science and artificial intelligence (see below). Some studies extended the approach to specific subjects, such as metaphor and similarity.
Applications and types.
Logic.
Logicians analyze how analogical reasoning is used in arguments from analogy.
An analogy can be stated using "is to" and "as" when representing the analogous relationship between two pairs of expressions, for example, "Smile is to mouth, as wink is to eye." In the field of mathematics and logic, this can be formalized with colon notation to represent the relationships, using single colon for ratio, and double colon for equality.
In the field of testing, the colon notation of ratios and equality is often borrowed, so that the example above might be rendered, "Smile : mouth :: wink : eye" and pronounced the same way.
Linguistics.
Analogy is also a term used in the Neogrammarian school of thought as a catch-all to describe any morphological change in a language that cannot be explained merely by sound change or borrowing.
Science.
Analogies are mainly used as a means of creating new ideas and hypotheses, or testing them, which is called a heuristic function of analogical reasoning.
Analogical arguments can also be probative, meaning that they serve as a means of proving the rightness of particular theses and theories. This application of analogical reasoning in science is debatable. Analogy can help prove important theories, especially in those kinds of science in which logical or empirical proof is not possible such as theology, philosophy or cosmology when it relates to those areas of the cosmos (the universe) that are beyond any data-based observation and knowledge about them stems from the human insight and thinking outside the senses.
Analogy can be used in theoretical and applied sciences in the form of models or simulations which can be considered as strong indications of probable correctness. Other, much weaker, analogies may also assist in understanding and describing nuanced or key functional behaviours of systems that are otherwise difficult to grasp or prove. For instance, an analogy used in physics textbooks compares electrical circuits to hydraulic circuits. Another example is the analogue ear based on electrical, electronic or mechanical devices.
Mathematics.
Some types of analogies can have a precise mathematical formulation through the concept of isomorphism. In detail, this means that if two mathematical structures are of the same type, an analogy between them can be thought of as a bijection which preserves some or all of the relevant structure. For example, formula_0 and formula_1 are isomorphic as vector spaces, but the complex numbers, formula_1, have more structure than formula_0 does: formula_1 is a field as well as a vector space.
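A concrete isomorphism realizing this example (a standard map, written out here for illustration) is

```latex
\varphi : \mathbb{R}^2 \to \mathbb{C}, \qquad \varphi(a, b) = a + b i ,
```

which is a bijection satisfying "φ"("u" + "v") = "φ"("u") + "φ"("v") and "φ"(λ"u") = λ"φ"("u") for every real λ, so it preserves exactly the vector-space structure; the multiplication that makes formula_1 a field is additional structure with no counterpart in formula_0 regarded merely as a real vector space.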
Category theory takes the idea of mathematical analogy much further with the concept of functors. Given two categories C and D, a functor "f" from C to D can be thought of as an analogy between C and D, because "f" has to map objects of C to objects of D and arrows of C to arrows of D in such a way that the structure of their respective parts is preserved. This is similar to the structure mapping theory of analogy of Dedre Gentner, because it formalises the idea of analogy as a function which makes certain conditions true.
Artificial intelligence.
A computer algorithm has achieved human-level performance on multiple-choice analogy questions from the SAT test. The algorithm measures the similarity of relations between pairs of words (e.g., the similarity between the pairs HAND:PALM and FOOT:SOLE) by statistically analysing a large collection of text. It answers SAT questions by selecting the choice with the highest relational similarity.
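The following toy Python sketch illustrates only the selection step described above; the hand-made word vectors and the vector-offset similarity are stand-ins for the corpus-based relational similarity measure the algorithm actually uses, so every detail here is an assumption for illustration.

```python
import math

# Toy word vectors, invented for illustration only.
VEC = {
    "hand": [1.0, 0.2], "palm": [1.0, 1.2],
    "foot": [0.1, 0.3], "sole": [0.1, 1.3],
    "eye":  [0.9, 0.1], "lash": [0.5, 0.4],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def relation(pair):
    """Represent the relation of a word pair as the offset between its vectors
    (a crude stand-in for a statistical relational-similarity measure)."""
    a, b = (VEC[w] for w in pair)
    return [x - y for x, y in zip(b, a)]

def answer(stem, choices):
    """Pick the choice whose relation is most similar to the stem pair's relation."""
    return max(choices, key=lambda c: cosine(relation(stem), relation(c)))

print(answer(("hand", "palm"), [("foot", "sole"), ("eye", "lash")]))
# -> ('foot', 'sole') with these toy vectors
```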
Analogical reasoning in the human mind is free of the false inferences plaguing conventional artificial intelligence models (called "systematicity"). Steven Phillips and William H. Wilson use category theory to mathematically demonstrate how such reasoning could arise naturally by using relationships between the internal arrows that keep the internal structures of the categories rather than the mere relationships between the objects (called "representational states"). Thus, the mind, and more intelligent AIs, may use analogies between domains whose internal structures transform naturally and reject those that do not.
Keith Holyoak and Paul Thagard (1997) developed their multiconstraint theory within structure mapping theory. They defend that the "coherence" of an analogy depends on structural consistency, semantic similarity and purpose. Structural consistency is the highest when the analogy is an isomorphism, although lower levels can be used as well. Similarity demands that the mapping connects similar elements and relationships between source and target, at any level of abstraction. It is the highest when there are identical relations and when connected elements have many identical attributes. An analogy achieves its purpose if it helps solve the problem at hand. The multiconstraint theory faces some difficulties when there are multiple sources, but these can be overcome. Hummel and Holyoak (2005) recast the multiconstraint theory within a neural network architecture. A problem for the multiconstraint theory arises from its concept of similarity, which, in this respect, is not obviously different from analogy itself. Computer applications demand that there are some "identical" attributes or relations at some level of abstraction. The model was extended (Doumas, Hummel, and Sandhofer, 2008) to learn relations from unstructured examples (providing the only current account of how symbolic representations can be learned from examples).
Mark Keane and Brayshaw (1988) developed their "Incremental Analogy Machine" (IAM) to include working memory constraints as well as structural, semantic and pragmatic constraints, so that a subset of the base analogue is selected and mapping from base to target occurs in series. Empirical evidence shows that humans are better at using and creating analogies when the information is presented in an order where an item and its analogue are placed together.
Eqaan Doug and his team challenged the shared structure theory and particularly its applications in computer science. They argue that there is no clear line between perception, including high-level perception, and analogical thinking. In fact, analogy occurs not only after, but also before and at the same time as high-level perception. In high-level perception, humans make representations by selecting relevant information from low-level stimuli. Perception is necessary for analogy, but analogy is also necessary for high-level perception. Chalmers et al. conclude that analogy actually is high-level perception. Forbus et al. (1998) claim that this is only a metaphor. It has been argued (Morrison and Dietrich 1995) that Hofstadter's and Gentner's groups do not defend opposite views, but are instead dealing with different aspects of analogy.
Anatomy.
In anatomy, two anatomical structures are considered to be "analogous" when they serve similar functions but are not evolutionarily related, such as the legs of vertebrates and the legs of insects. Analogous structures are the result of independent evolution and should be contrasted with structures which shared an evolutionary line.
Engineering.
Often a physical prototype is built to model and represent some other physical object. For example, wind tunnels are used to test scale models of wings and aircraft which are analogous to (correspond to) full-size wings and aircraft.
For example, the MONIAC (an analogue computer) used the flow of water in its pipes as an analogue to the flow of money in an economy.
Cybernetics.
Where two or more biological or physical participants meet, they communicate and the stresses produced describe internal models of the participants. Pask in his conversation theory asserts an analogy that describes both similarities and differences between any pair of the participants' internal models or concepts exists.
History.
In historical science, comparative historical analysis often uses the concept of analogy and analogical reasoning. Recent computational methods operate on large document archives, allowing analogical or corresponding terms from the past to be retrieved in response to arbitrary user queries (e.g., Myanmar - Burma) and explained.
Morality.
Analogical reasoning plays a very important part in morality. This may be because morality is supposed to be impartial and fair. If it is wrong to do something in a situation A, and situation B corresponds to A in all related features, then it is also wrong to perform that action in situation B. Moral particularism accepts such reasoning, instead of deduction and induction, since only the first can be used regardless of any moral principles.
Psychology.
Structure mapping theory.
Structure mapping, originally proposed by Dedre Gentner, is a theory in psychology that describes the psychological processes involved in reasoning through, and learning from, analogies. More specifically, this theory aims to describe how familiar knowledge, or knowledge about a base domain, can be used to inform an individual's understanding of a less familiar idea, or a target domain. According to this theory, individuals view their knowledge of ideas, or domains, as interconnected structures. In other words, a domain is viewed as consisting of objects, their properties, and the relationships that characterise their interactions. The process of analogy then involves:
In general, it has been found that people prefer analogies where the two systems correspond highly to each other (e.g. have similar relationships across the domains as opposed to just having similar objects across domains) when these people try to compare and contrast the systems. This is also known as the systematicity principle.
An example that has been used to illustrate structure mapping theory comes from Gentner and Gentner (1983) and uses the base domain of flowing water and the target domain of electricity. In a system of flowing water, the water is carried through pipes and the rate of water flow is determined by the pressure of the water towers or hills. This relationship corresponds to that of electricity flowing through a circuit. In a circuit, the electricity is carried through wires and the current, or rate of flow of electricity, is determined by the voltage, or electrical pressure. Given the similarity in structure, or structural alignment, between these domains, structure mapping theory would predict that relationships from one of these domains, would be inferred in the other using analogy.
Children.
Children do not always need prompting to make comparisons in order to learn abstract relationships. Eventually, children undergo a relational shift, after which they begin seeing similar relations across different situations instead of merely looking at matching objects. This is critical in their cognitive development, as continuing to focus on specific objects would reduce children's ability to learn abstract patterns and reason analogically. Some researchers have proposed that children's basic brain functions (i.e., working memory and inhibitory control) do not drive this relational shift. Instead, it is driven by their relational knowledge, such as having labels for objects that make the relationships clearer (see the previous section). However, there is not enough evidence to determine whether the relational shift is actually because basic brain functions become better or relational knowledge becomes deeper.
Additionally, research has identified several factors that may increase the likelihood that a child will spontaneously engage in comparison and learn an abstract relationship, without the need for prompts. Comparison is more likely when the objects to be compared are close together in space and/or time, are highly similar (although not so similar that they match, which can interfere with identifying relationships), or share common labels.
Law.
In law, analogy is a method of resolving issues on which there is no previous authority. The legal use of analogy is distinguished by the need to use a legally relevant basis for drawing an analogy between two situations. It may be applied to various forms of legal authority, including statutory law and case law.
In the civil law tradition, analogy is most typically used for filling gaps in a statutory scheme. In the common law tradition, it is most typically used for extending the scope of precedent. The use of analogy in both traditions is broadly described by the traditional maxim that where the reason is the same, the law is the same.
Teaching strategies.
Analogies as defined in rhetoric are a comparison between words, but an analogy more generally can also be used to illustrate and teach. To enlighten pupils on the relations between or within certain concepts, items or phenomena, a teacher may refer to other concepts, items or phenomena that pupils are more familiar with. It may help to create or clarify one theory (or theoretical model) via the workings of another theory (or theoretical model). Thus an analogy, as used in teaching, would be comparing a topic that students are already familiar with, with a new topic that is being introduced, so that students can get a better understanding of the new topic by relating back to existing knowledge. This can be particularly helpful when the analogy serves across different disciplines: indeed, there are various teaching innovations now emerging that use sight-based analogies for teaching and research across subjects such as science and the humanities.
Shawn Glynn, a professor in the department of educational psychology and instructional technology at the University of Georgia, developed a theory on teaching with analogies and developed steps to explain the process of teaching with this method. The steps for teaching with analogies are as follows:
Step one is introducing the new topic that is about to be taught and giving some general knowledge on the subject.
Step two is reviewing the concept that the students already know to ensure they have the proper knowledge to assess the similarities between the two concepts.
Step three is finding relevant features within the analogy of the two concepts.
Step four is finding similarities between the two concepts so students are able to compare and contrast them in order to understand.
Step five is indicating where the analogy breaks down between the two concepts.
And finally, step six is drawing a conclusion about the analogy and comparing the new material with the already learned material. Typically this method is used to learn topics in science.
In 1989, teacher Kerry Ruef began a program titled The Private Eye Project. It is a method of teaching that revolves around using analogies in the classroom to better explain topics. She thought of the idea to use analogies as a part of curriculum because she was observing objects once and she said, "my mind was noting what else each object reminded me of..." This led her to teach with the question, "what does [the subject or topic] remind you of?" The idea of comparing subjects and concepts led to the development of The Private Eye Project as a method of teaching. The program is designed to build critical thinking skills with analogies as one of the main themes revolving around it. While Glynn focuses on using analogies to teach science, The Private Eye Project can be used for any subject including writing, math, art, social studies, and invention. It is now used by thousands of schools around the country.
Religion.
Catholicism.
The Fourth Lateran Council of 1215 taught: "For between creator and creature there can be noted no similarity so great that a greater dissimilarity cannot be seen between them."
The theological exploration of this subject is called the "analogia entis". The consequence of this theory is that all true statements concerning God (excluding the concrete details of Jesus' earthly life) are rough analogies, without implying any falsehood. Such analogical and true statements would include "God is", "God is Love", "God is a consuming fire", "God is near to all who call him", or God as Trinity, where "being", "love", "fire", "distance", "number" must be classed as analogies that allow human cognition of what is infinitely beyond positive or negative language.
The use of theological statements in syllogisms must take into account their analogical essence, in that every analogy breaks down when stretched beyond its intended meaning.
Doctrine of the Trinity.
In traditional Christian doctrine, the Trinity is a Mystery of Faith that has been revealed, not something obvious or derivable from first principles or found in any thing in the created world. Because of this, the use of analogies to understand the Trinity is common and perhaps necessary.
The Trinity is a combination of the words “tri,” meaning “three,” and “unity,” meaning “one.” The “Threeness” refers to the persons of the Trinity, while the “Oneness” refers to substance or being.
Medieval Cistercian monk Bernard of Clairvaux used the analogy of a kiss:
<templatestyles src="Template:Blockquote/styles.css" />"[...]truly the kiss[...]is common both to him who kisses and to him who is kissed. [...]If, as is properly understood, the Father is he who kisses, the Son he who is kissed, then it cannot be wrong to see in the kiss the Holy Spirit, for he is the imperturbable peace of the Father and the Son, their unshakable bond, their undivided love, their indivisible unity."
Many analogies have been used to explain the Trinity, however, all analogies fail when taken too far. Examples of these are the analogies that state that the Trinity is like water and its different states (solid, liquid, gas) or like an egg with its different parts (shell, yolk, and egg white). However, these analogies, if taken too far, could teach the heresies of modalism (water states) and partialism (parts of egg), which are contrary to the Christian understanding of the Trinity.
Other analogies exist. The analogy of the notes of a chord, say C major, is a sufficient analogy for the Trinity. The notes C, E, and G individually fill the whole of the “heard” space, but when all notes come together, we have a homogenized sound within the same space with distinctive, equal notes. Another analogy uses the mythological dog Cerberus, which guards the gates of Hades. While the dog itself is a single organism (speaking to its substance), Cerberus has different centers of awareness due to its three heads, each of which has the same dog nature.
Protestantism.
In some Protestant theology, "analogy" may itself be used analogously in terms, more in a sense of "rule" or "exemplar": for example the concept "analogia fidei" has been proposed as an alternative to the concept "analogia entis" but named analogously.
Islam.
Islamic jurisprudence makes ample use of analogy as a means of making conclusions from outside sources of law. The bounds and rules employed to make analogical deduction vary greatly between madhhabs and to a lesser extent individual scholars. It is nonetheless a generally accepted source of law within jurisprudential epistemology, with the chief opposition to it forming the dhahiri (ostensiblist) school.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathbb{R}^2 "
},
{
"math_id": 1,
"text": " \\mathbb{C} "
}
] | https://en.wikipedia.org/wiki?curid=103533 |
10354022 | Realization (probability) | Observed value of a random variable
In probability and statistics, a realization, observation, or observed value, of a random variable is the value that is actually observed (what actually happened). The random variable itself is the process dictating how the observation comes about. Statistical quantities computed from realizations without deploying a statistical model are often called "empirical", as in empirical distribution function or empirical probability.
Conventionally, to avoid confusion, upper case letters denote random variables; the corresponding lower case letters denote their realizations.
Formal definition.
In more formal probability theory, a random variable is a function "X" defined from a sample space Ω to a measurable space called the state space. If an element in Ω is mapped to an element in state space by "X", then that element in state space is a realization. Elements of the sample space can be thought of as all the different possibilities that "could" happen; while a realization (an element of the state space) can be thought of as the value "X" attains when one of the possibilities "did" happen. Probability is a mapping that assigns numbers between zero and one to certain subsets of the sample space, namely the measurable subsets, known here as events. Subsets of the sample space that contain only one element are called elementary events. The value of the random variable (that is, the function) "X" at a point ω ∈ Ω,
formula_0
is called a realization of "X".
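A minimal Python sketch of these definitions (the die example and all names are chosen here for illustration): the sample space is a finite set, the random variable is an ordinary function on it, and the realization is the value that the function takes on the outcome that actually occurred.

```python
import random

# Sample space: the outcomes of one roll of a fair die.
omega_space = [1, 2, 3, 4, 5, 6]

# Random variable X: a function from the sample space to the state space,
# here the indicator of "the roll is even".
def X(omega):
    return 1 if omega % 2 == 0 else 0

omega = random.choice(omega_space)  # the outcome that "did happen"
x = X(omega)                        # the realization x = X(omega)
print(omega, x)
```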
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " x = X(\\omega)"
}
] | https://en.wikipedia.org/wiki?curid=10354022 |
1035417 | Nitric oxide synthase | Enzyme catalysing the formation of the gasotransmitter NO(nitric oxide)
Nitric oxide synthases (EC 1.14.13.39) (NOSs) are a family of enzymes catalyzing the production of nitric oxide (NO) from L-arginine. NO is an important cellular signaling molecule. It helps modulate vascular tone, insulin secretion, airway tone, and peristalsis, and is involved in angiogenesis and neural development. It may function as a retrograde neurotransmitter. Nitric oxide is mediated in mammals by the calcium-calmodulin controlled isoenzymes eNOS (endothelial NOS) and nNOS (neuronal NOS). The inducible isoform, iNOS, involved in immune response, binds calmodulin at physiologically relevant concentrations, and produces NO as an immune defense mechanism, as NO is a free radical with an unpaired electron. It is the proximate cause of septic shock and may function in autoimmune disease.
NOS catalyzes the reaction:
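One balanced form of the overall reaction, consistent with the stoichiometry discussed below (2 mol of O2 and 1.5 mol of NADPH consumed per mole of NO), is the following, given here as an assumption rather than quoted from the article's sources:

2 L-arginine + 3 NADPH + 3 H+ + 4 O2 formula_0 2 L-citrulline + 2 NO + 3 NADP+ + 4 H2O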
NOS isoforms catalyze other leak and side reactions, such as superoxide production at the expense of NADPH. As such, this stoichiometry is not generally observed, and reflects the three electrons supplied per NO by NADPH.
Eukaryotic NOS isozymes are catalytically self-sufficient. The electron flow is: NADPH → FAD → FMN → heme → O2. Tetrahydrobiopterin provides an additional electron during the catalytic cycle which is replaced during turnover. Zinc, though not a cofactor, also participates but as a structural element. NOSs are unique in that they use five cofactors and are the only known enzyme that binds flavin adenine dinucleotide (FAD), flavin mononucleotide (FMN), heme, tetrahydrobiopterin (BH4) and calmodulin.
Species distribution.
Arginine-derived NO synthesis has been identified in mammals, fish, birds, invertebrates, and bacteria. Best studied are mammals, where three distinct genes encode NOS isozymes: neuronal (nNOS or NOS-1), cytokine-inducible (iNOS or NOS-2) and endothelial (eNOS or NOS-3). iNOS and nNOS are soluble and found predominantly in the cytosol, while eNOS is membrane associated. Evidence has been found for NO signaling in plants, but plant genomes are devoid of homologs to the superfamily which generates NO in other kingdoms.
Function.
In mammals, the endothelial isoform is the primary signal generator in the control of vascular tone, insulin secretion, and airway tone, is involved in regulation of cardiac function and angiogenesis (growth of new blood vessels). NO produced by eNOS has been shown to be a vasodilator identical to the endothelium-derived relaxing factor produced in response to shear from increased blood flow in arteries. This dilates blood vessels by relaxing smooth muscle in their linings. eNOS is the primary controller of smooth muscle tone. NO activates guanylate cyclase, which induces smooth muscle relaxation by:
eNOS plays a critical role in embryonic heart development and morphogenesis of coronary arteries and cardiac valves.
The neuronal isoform is involved in the development of the nervous system. It functions as a retrograde neurotransmitter important in long term potentiation and hence is likely to be important in memory and learning. nNOS has many other physiological functions, including regulation of cardiac function and peristalsis and sexual arousal in males and females. An alternatively spliced form of nNOS is a major muscle protein that produces signals in response to calcium release from the sarcoplasmic reticulum (SR). nNOS in the heart protects against cardiac arrhythmia induced by myocardial infarction.
The primary receiver for NO produced by eNOS and nNOS is soluble guanylate cyclase, but many secondary targets have been identified. S-nitrosylation appears to be an important mode of action.
The inducible isoform iNOS produces large amounts of NO as a defense mechanism. It is synthesized by many cell types in response to cytokines and is an important factor in the response of the body to attack by parasites, bacterial infection, and tumor growth. It is also the cause of septic shock and may play a role in many diseases with an autoimmune etiology.
NOS signaling is involved in development and in fertilization in vertebrates. It has been implicated in transitions between vegetative and reproductive states in invertebrates, and in differentiation leading to spore formation in slime molds. NO produced by bacterial NOS is protective against oxidative damage.
NOS activity has also been correlated with major depressive episodes (MDEs) in the context of major depressive disorder, in a large case-control treatment study published in mid-2021. 460 patients with a current major depressive episode were compared to 895 healthy patients, and by measuring L-citrulline/L-arginine ratio before and after 3–6 months of antidepressant treatment, results indicate that patients in a major depressive episode have significantly lower NOS activity compared to healthy patients, whilst treatment with antidepressants significantly elevated NOS activity levels in patients in a major depressive episode.
Classification.
Different members of the NOS family are encoded by separate genes. There are three known isoforms in mammals, two are constitutive (cNOS) and the third is inducible (iNOS). Cloning of NOS enzymes indicates that cNOS include both brain constitutive (NOS1) and endothelial constitutive (NOS3); the third is the inducible (NOS2) gene. Recently, NOS activity has been demonstrated in several bacterial species, including the notorious pathogens Bacillus anthracis and Staphylococcus aureus.
The different forms of NO synthase have been classified as follows:
nNOS.
Neuronal NOS (nNOS) produces NO in nervous tissue in both the central and peripheral nervous systems. Its functions include:
Neuronal NOS also performs a role in cell communication and is associated with plasma membranes. nNOS action can be inhibited by NPA (N-propyl-L-arginine). This form of the enzyme is specifically inhibited by 7-nitroindazole.
The subcellular localisation of nNOS in skeletal muscle is mediated by anchoring of nNOS to dystrophin. nNOS contains an additional N-terminal domain, the PDZ domain.
The gene coding for nNOS is located on Chromosome 12.
iNOS.
As opposed to the critical calcium-dependent regulation of constitutive NOS enzymes (nNOS and eNOS), iNOS has been described as calcium-insensitive, likely due to its tight non-covalent interaction with calmodulin (CaM) and Ca2+. The gene coding for iNOS is located on Chromosome 17. While evidence for ‘baseline’ iNOS expression has been elusive, IRF1 and NF-κB-dependent activation of the inducible NOS promoter supports an inflammation mediated stimulation of this transcript. iNOS produces large quantities of NO upon stimulation, such as by proinflammatory cytokines (e.g. Interleukin-1, Tumor necrosis factor alpha and Interferon gamma).
Induction of the high-output iNOS usually occurs in an oxidative environment, and thus high levels of NO have the opportunity to react with superoxide leading to peroxynitrite formation and cell toxicity. These properties may define the roles of iNOS in host immunity, enabling its participation in anti-microbial and anti-tumor activities as part of the oxidative burst of macrophages.
It has been suggested that pathologic generation of nitric oxide through increased iNOS production may decrease tubal ciliary beats and smooth muscle contractions and thus affect embryo transport, which may consequently result in ectopic pregnancy.
eNOS.
Endothelial NOS (eNOS), also known as nitric oxide synthase 3 (NOS3), generates NO in blood vessels and is involved with regulating vascular function. The gene coding for eNOS is located on Chromosome 7. A constitutive Ca2+ dependent NOS provides a basal release of NO. eNOS localizes to caveolae, a plasma membrane domain primarily composed of the protein caveolin 1, and to the Golgi apparatus. These two eNOS populations are distinct, but are both necessary for proper NO production and cell health. eNOS localization to endothelial membranes is mediated by cotranslational N-terminal myristoylation and post-translational palmitoylation. As an essential co-factor for nitric oxide synthase, tetrahydrobiopterin (BH4) supplementation has shown beneficial results for the treatment of endothelial dysfunction in animal experiments and clinical trials, although the tendency of BH4 to become oxidized to BH2 remains a problem.
bNOS.
Bacterial NOS (bNOS) has been shown to protect bacteria against oxidative stress, diverse antibiotics, and the host immune response. bNOS plays a key role in the transcription of superoxide dismutase (SodA). Bacteria late in the log phase that do not possess bNOS fail to upregulate SodA, which disables the defenses against harmful oxidative stress. Initially, bNOS may have been present to prepare the cell for stressful conditions, but now it seems to help shield the bacteria against conventional antimicrobials. As a clinical application, a bNOS inhibitor could be produced to decrease the load of Gram-positive bacteria.
Chemical reaction.
Nitric oxide synthases produce NO by catalysing a five-electron oxidation of a guanidino nitrogen of L-arginine (L-Arg). Oxidation of L-Arg to L-citrulline occurs via two successive monooxygenation reactions producing "N"ω-hydroxy-L-arginine (NOHLA) as an intermediate. 2 mol of O2 and 1.5 mol of NADPH are consumed per mole of NO formed.
Structure.
The enzymes exist as homodimers. In eukaryotes, each monomer consists of two major regions: an N-terminal oxygenase domain, which belongs to the class of heme-thiolate proteins, and a multi-domain C-terminal reductase, which is homologous to NADPH:cytochrome P450 reductase (EC 1.6.2.4) and other flavoproteins. The FMN binding domain is homologous to flavodoxins, and the two-domain fragment containing the FAD and NADPH binding sites is homologous to flavodoxin-NADPH reductases. The interdomain linker between the oxygenase and reductase domains contains a calmodulin-binding sequence. The oxygenase domain is a unique extended beta sheet cage with binding sites for heme and pterin.
NOSs can be dimeric, calmodulin-dependent or calmodulin-containing cytochrome P450-like hemoproteins that combine reductase and oxygenase catalytic domains in one dimer, bear both flavin adenine dinucleotide (FAD) and flavin mononucleotide (FMN), and carry out a five-electron oxidation of the non-aromatic amino acid arginine with the aid of tetrahydrobiopterin.
All three isoforms (each of which is presumed to function as a homodimer during activation) share a carboxyl-terminal reductase domain homologous to the cytochrome P450 reductase. They also share an amino-terminal oxygenase domain containing a heme prosthetic group, which is linked in the middle of the protein to a calmodulin-binding domain. Binding of calmodulin appears to act as a "molecular switch" to enable electron flow from flavin prosthetic groups in the reductase domain to heme. This facilitates the conversion of O2 and L-arginine to NO and L-citrulline. The oxygenase domain of each NOS isoform also contains a BH4 prosthetic group, which is required for the efficient generation of NO. Unlike other enzymes where BH4 is used as a source of reducing equivalents and is recycled by dihydrobiopterin reductase (EC 1.5.1.33), BH4 activates heme-bound O2 by donating a single electron, which is then recaptured to enable nitric oxide release.
The first nitric oxide synthase to be identified was found in neuronal tissue (NOS1 or nNOS); the endothelial NOS (eNOS or NOS3) was the third to be identified. They were originally classified as "constitutively expressed" and "Ca2+ sensitive" but it is now known that they are present in many different cell types and that expression is regulated under specific physiological conditions.
In NOS1 and NOS3, physiological concentrations of Ca2+ in cells regulate the binding of calmodulin to the "latch domains", thereby initiating electron transfer from the flavins to the heme moieties. In contrast, calmodulin remains tightly bound to the inducible and Ca2+-insensitive isoform (iNOS or NOS2) even at a low intracellular Ca2+ activity, acting essentially as a subunit of this isoform.
Nitric oxide may itself regulate NOS expression and activity. Specifically, NO has been shown to play an important negative feedback regulatory role on NOS3, and therefore vascular endothelial cell function. This process, known formally as "S"-nitrosation (and referred to by many in the field as "S"-nitrosylation), has been shown to reversibly inhibit NOS3 activity in vascular endothelial cells. This process may be important because it is regulated by cellular redox conditions and may thereby provide a mechanism for the association between "oxidative stress" and endothelial dysfunction. In addition to NOS3, both NOS1 and NOS2 have been found to be "S"-nitrosated, but the evidence for dynamic regulation of those NOS isoforms by this process is less complete. In addition, both NOS1 and NOS2 have been shown to form ferrous-nitrosyl complexes in their heme prosthetic groups that may act partially to self-inactivate these enzymes under certain conditions. The rate-limiting step for the production of nitric oxide may well be the availability of L-arginine in some cell types. This may be particularly important after the induction of NOS2.
Inhibitors.
Ronopterin (VAS-203), also known as 4-amino-tetrahydrobiopterin (4-ABH4), an analogue of BH4 (a cofactor of NOS), is an NOS inhibitor that is under development as a neuroprotective agent for the treatment of traumatic brain injury. Other NOS inhibitors that have been or are being researched for possible clinical use include cindunistat, A-84643, ONO-1714, L-NOARG, NCX-456, VAS-2381, GW-273629, NXN-462, CKD-712, KD-7040, and guanidinoethyldisulfide, TFPI among others.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=1035417 |
10354285 | Macdonald polynomials | In mathematics, Macdonald polynomials "P"λ("x"; "t","q") are a family of orthogonal symmetric polynomials in several variables, introduced by Macdonald in 1987. He later introduced a non-symmetric generalization in 1995. Macdonald originally associated his polynomials with weights λ of finite root systems and used just one variable "t", but later realized that it is more natural to associate them with affine root systems rather than finite root systems, in which case the variable "t" can be replaced by several different variables "t"=("t"1...,"t""k"), one for each of the "k" orbits of roots in the affine root system. The Macdonald polynomials are polynomials in "n" variables "x"=("x"1...,"x""n"), where "n" is the rank of the affine root system. They generalize many other families of orthogonal polynomials, such as Jack polynomials and Hall–Littlewood polynomials and Askey–Wilson polynomials, which in turn include most of the named 1-variable orthogonal polynomials as special cases. Koornwinder polynomials are Macdonald polynomials of certain non-reduced root systems. They have deep relationships with affine Hecke algebras and Hilbert schemes, which were used to prove several conjectures made by Macdonald about them.
Definition.
First fix some notation:
The Macdonald polynomials "P"λ for λ ∈ "P"+ are uniquely defined by the following two conditions:
formula_5 where "u"λμ is a rational function of "q" and "t" with "u"λλ = 1;
"P"λ and "P"μ are orthogonal if λ < μ.
In other words, the Macdonald polynomials are obtained by orthogonalizing the obvious basis for "A""W". The existence of polynomials with these properties is easy to show (for any inner product). A key property of the Macdonald polynomials is that they are orthogonal: 〈"P"λ, "P"μ〉 = 0 whenever λ ≠ μ. This is not a trivial consequence of the definition because "P"+ is not totally ordered, and so has plenty of elements that are incomparable. Thus one must check that the corresponding polynomials are still orthogonal. The orthogonality can be proved by showing that the Macdonald polynomials are eigenvectors
for an algebra of commuting self-adjoint operators with 1-dimensional eigenspaces, and using the fact that eigenspaces for different eigenvalues must be orthogonal.
In the case of non-simply-laced root systems (B, C, F, G), the parameter "t" can be chosen to vary with the length of the root, giving a three-parameter family of Macdonald polynomials. One can also extend the definition to the nonreduced root system BC, in which case one obtains a six-parameter family (one "t" for each orbit of roots, plus "q") known as Koornwinder polynomials. It is sometimes better to regard Macdonald polynomials as depending on a possibly non-reduced affine root system. In this case, there is one parameter "t" associated to each orbit of roots in the affine root system, plus one parameter "q". The number of orbits of roots can vary from 1 to 5.
The Macdonald constant term conjecture.
If "t" = "q""k" for some positive integer "k", then the norm of the Macdonald polynomials is given by
formula_6
This was conjectured by Macdonald (1982) as a generalization of the Dyson conjecture, and proved for all (reduced) root systems by Cherednik (1995) using properties of double affine Hecke algebras. The conjecture had previously been proved case-by-case for all root systems except those of type "E""n" by several authors.
There are two other conjectures which together with the norm conjecture are collectively referred to as the Macdonald conjectures in this context: in addition to the formula for the norm, Macdonald conjectured a formula for the value of "P"λ at the point "t"ρ, and a symmetry
formula_7
Again, these were proved for general reduced root systems by Cherednik (1995), using double affine Hecke algebras, with the extension to the BC case following shortly thereafter via work of van Diejen, Noumi, and Sahi.
The Macdonald positivity conjecture.
In the case of root systems of type "A""n"−1 the Macdonald polynomials
are simply symmetric polynomials in "n" variables with coefficients that are rational functions of "q" and "t". A certain transformed version formula_8 of the Macdonald polynomials (see Combinatorial formula below) forms an orthogonal basis of the space of symmetric functions over formula_9, and therefore can be expressed in terms of Schur functions formula_10. The coefficients "K"λμ("q","t") of these relations are called Kostka–Macdonald coefficients or "qt"-Kostka coefficients.
Macdonald conjectured that the Kostka–Macdonald coefficients were polynomials in "q" and "t" with non-negative integer coefficients. These conjectures are now proved; the hardest and final step was proving the positivity, which was done by Mark Haiman (2001), by proving the "n"! conjecture.
It is still a central open problem in algebraic combinatorics to find a combinatorial formula for the "qt"-Kostka coefficients.
n! conjecture.
The "n"! conjecture of Adriano Garsia and Mark Haiman states that for each partition μ of "n" the space
formula_11
spanned by all higher partial derivatives of
formula_12
has dimension "n"!, where ("p""j", "q""j") run through the "n" elements of the diagram of the partition μ, regarded as a subset of the pairs of non-negative integers.
For example, if μ is the partition 3 = 2 + 1 of "n" = 3 then the pairs ("p""j", "q""j") are
(0, 0), (0, 1), (1, 0), and the space "D"μ is spanned by
formula_13
formula_14
formula_15
formula_16
formula_17
formula_18
which has dimension 6 = 3!.
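This particular example can be checked by computer; the following SymPy sketch (added here for illustration, not part of the article) builds Δμ for μ = 2 + 1, collects all of its higher partial derivatives, and computes the dimension of their linear span.

```python
from sympy import symbols, Matrix, diff, Poly

x1, x2, x3, y1, y2, y3 = symbols('x1 x2 x3 y1 y2 y3')
xs, ys = (x1, x2, x3), (y1, y2, y3)
gens = list(xs) + list(ys)

# Cells (p_j, q_j) of the diagram of mu = (2, 1): (0,0), (0,1), (1,0)
cells = [(0, 0), (0, 1), (1, 0)]
delta = Matrix(3, 3, lambda i, j: xs[i]**cells[j][0] * ys[i]**cells[j][1]).det()

# Collect delta together with all of its higher partial derivatives.
polys, seen, frontier = [], set(), [delta.expand()]
while frontier:
    p = frontier.pop()
    if p == 0 or p in seen:
        continue
    seen.add(p)
    polys.append(Poly(p, *gens))
    frontier.extend(diff(p, v).expand() for v in gens)

# The dimension of the span is the rank of the coefficient matrix.
monoms = sorted({m for p in polys for m in p.as_dict()})
rows = [[p.as_dict().get(m, 0) for m in monoms] for p in polys]
print(Matrix(rows).rank())   # prints 6 = 3!
```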
Haiman's proof of the Macdonald positivity conjecture and the "n"! conjecture involved showing that the isospectral Hilbert scheme of "n" points in a plane was Cohen–Macaulay (and even Gorenstein). Earlier results of Haiman and Garsia had already shown that this implied the "n"! conjecture, and that the "n"! conjecture implied that the Kostka–Macdonald coefficients were graded character multiplicities for the modules "D"μ. This immediately implies the Macdonald positivity conjecture because character multiplicities have to be non-negative integers.
Ian Grojnowski and Mark Haiman found another proof of the Macdonald positivity conjecture by proving a positivity conjecture for LLT polynomials.
Combinatorial formula for the Macdonald polynomials.
In 2005, J. Haglund, M. Haiman and N. Loehr gave the first proof of a combinatorial interpretation of the
Macdonald polynomials. In 1988, I.G. Macdonald gave the second proof of a combinatorial interpretation of the Macdonald polynomials (equations (4.11) and (5.13)).
Macdonald’s formula is different from that in the work of Haglund, Haiman, and Loehr, with many fewer terms (this formula is proved also in Macdonald's seminal work, Ch. VI (7.13)). While very useful for computation and interesting in their own right, these combinatorial formulas do not immediately imply positivity of the Kostka–Macdonald coefficients formula_19, as they give the decomposition of the Macdonald polynomials into monomial symmetric functions rather than into Schur functions.
Written in the "transformed Macdonald polynomials" formula_8 rather than the usual formula_20, they are
formula_21
where σ is a filling of the Young diagram of shape μ, "inv" and "maj" are certain combinatorial statistics (functions) defined on the filling σ. This formula expresses the Macdonald polynomials in infinitely many variables. To obtain the polynomials in "n" variables, simply restrict the formula to fillings that only use the integers 1, 2, ..., "n". The term "x"σ should be interpreted as formula_22 where "σ""i" is the number of boxes of μ that the filling σ labels with the entry "i".
The transformed Macdonald polynomials formula_23 in the formula above are related to the classical Macdonald polynomials formula_24 via a sequence of transformations. First, the "integral form" of the Macdonald polynomials, denoted formula_25, is a re-scaling of formula_26 that clears the denominators of the coefficients:
formula_27
where formula_28 is the collection of squares in the Young diagram of formula_29, and formula_30 and formula_31 denote the "arm" and "leg" of the square formula_32, as shown in the figure. "Note: The figure at right uses French notation for tableau, which is flipped vertically from the English notation used on the Wikipedia page for Young diagrams. French notation is more commonly used in the study of Macdonald polynomials."
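As a small illustration of the arm and leg statistics (a Python sketch added here; the function name and the example partition are chosen for this illustration), the following computes "a"("s") and "l"("s") for every square of a partition and assembles the scalar factor relating formula_25 to formula_26 for λ = (3, 2).

```python
from sympy import symbols, expand

def arm_leg(partition):
    """For each square s = (i, j) of the Young diagram of `partition`
    (0-indexed row i of length partition[i]), return (arm, leg):
    arm(s) = squares strictly to the right of s in its row,
    leg(s) = squares strictly beyond s in its column."""
    conjugate = [sum(1 for row in partition if row > j) for j in range(partition[0])]
    return {(i, j): (row_len - j - 1, conjugate[j] - i - 1)
            for i, row_len in enumerate(partition) for j in range(row_len)}

q, t = symbols('q t')
# The scalar relating J_lambda to P_lambda for lambda = (3, 2):
factor = 1
for a, l in arm_leg([3, 2]).values():
    factor *= 1 - q**a * t**(l + 1)
print(expand(factor))
```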
The transformed Macdonald polynomials formula_23 can then be defined in terms of the formula_33's. We have
formula_34
where
formula_35
The bracket notation above denotes plethystic substitution.
This formula can be used to prove Knop and Sahi's formula for the Jack polynomials.
Non-symmetric Macdonald polynomials.
In 1995, Macdonald introduced a non-symmetric analogue of the symmetric Macdonald polynomials,
and the symmetric Macdonald polynomials can easily be recovered from the non-symmetric counterpart.
In his original definition, he shows that the non-symmetric Macdonald polynomials are a unique family of
polynomials orthogonal to a certain inner product, as well as satisfying a
triangularity property when expanded in the monomial basis.
In 2007, Haglund, Haiman and Loehr gave a combinatorial formula for the non-symmetric Macdonald polynomials.
The non-symmetric Macdonald polynomials specialize to Demazure characters by taking q=t=0,
and to key polynomials when q=t=∞.
Combinatorial formulae based on the exclusion process.
In 2018, S. Corteel, O. Mandelshtam, and L. Williams used the exclusion process to give a direct combinatorial characterization of both symmetric and nonsymmetric Macdonald polynomials. Their results differ from the earlier work of Haglund in part because they give a formula directly for the Macdonald polynomials rather than a transformation thereof. They develop the concept of a multiline queue, which is a matrix containing balls or empty cells together with a mapping between balls and their neighbors and a combinatorial labeling mechanism. The nonsymmetric Macdonald polynomial then satisfies:
formula_36
where the sum is over all formula_37 multiline queues of type formula_29 and formula_38 is a weighting function mapping those queues to specific polynomials. The symmetric Macdonald polynomial satisfies:
formula_39
where the outer sum is over all distinct compositions formula_40 which are permutations of formula_29, and the inner sum is as before.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu \\le \\lambda"
},
{
"math_id": 1,
"text": "\\lambda-\\mu"
},
{
"math_id": 2,
"text": "(a;q)_\\infty = \\prod_{r\\ge0}(1-aq^r)"
},
{
"math_id": 3,
"text": "\\Delta= \\prod_{\\alpha\\in R} {(e^\\alpha; q)_\\infty \\over (te^\\alpha; q)_\\infty}. "
},
{
"math_id": 4,
"text": "\\langle f,g\\rangle=(\\text{constant term of }f \\overline g \\Delta)/|W|"
},
{
"math_id": 5,
"text": "P_\\lambda=\\sum_{\\mu\\le \\lambda}u_{\\lambda\\mu}m_\\mu"
},
{
"math_id": 6,
"text": "\\langle P_\\lambda, P_\\lambda\\rangle = \\prod_{\\alpha\\in R, \\alpha>0} \\prod_{0<i<k} {1-q^{(\\lambda+k\\rho,2\\alpha/(\\alpha,\\alpha))+i} \\over 1-q^{(\\lambda+k\\rho,2\\alpha/(\\alpha,\\alpha))-i}}."
},
{
"math_id": 7,
"text": "\\frac{P_\\lambda(\\dots,q^{\\mu_i}t^{\\rho_i},\\dots)}{P_\\lambda(t^\\rho)}\n=\n\\frac{P_\\mu(\\dots,q^{\\lambda_i}t^{\\rho_i},\\dots)}{P_\\mu(t^\\rho)}."
},
{
"math_id": 8,
"text": "\\widetilde{H}_\\mu"
},
{
"math_id": 9,
"text": "\\mathbb{Q}(q,t)"
},
{
"math_id": 10,
"text": "s_\\lambda"
},
{
"math_id": 11,
"text": "D_\\mu =C[\\partial_x,\\partial_y]\\,\\Delta_\\mu"
},
{
"math_id": 12,
"text": "\\Delta_\\mu = \\det (x_i^{p_j}y_i^{q_j})_{1\\le i,j,\\le n}"
},
{
"math_id": 13,
"text": "\\Delta_\\mu=x_1y_2+x_2y_3+x_3y_1-x_2y_1-x_3y_2-x_1y_3"
},
{
"math_id": 14,
"text": "y_2-y_3"
},
{
"math_id": 15,
"text": "y_3-y_1"
},
{
"math_id": 16,
"text": "x_3-x_2"
},
{
"math_id": 17,
"text": "x_1-x_3"
},
{
"math_id": 18,
"text": "1"
},
{
"math_id": 19,
"text": "K_{\\lambda \\mu}(q,t),"
},
{
"math_id": 20,
"text": "P_\\lambda"
},
{
"math_id": 21,
"text": "\\widetilde{H}_\\mu(x;q,t) = \\sum_{\\sigma:\\mu \\to \\Z_+} q^{inv(\\sigma)}t^{maj(\\sigma)} x^{\\sigma}"
},
{
"math_id": 22,
"text": "x_1^{\\sigma_1} x_2^{\\sigma_2} \\cdots "
},
{
"math_id": 23,
"text": "\\widetilde{H}_\\mu(x;q,t)"
},
{
"math_id": 24,
"text": "P_{\\lambda}"
},
{
"math_id": 25,
"text": "J_\\lambda(x;q,t)"
},
{
"math_id": 26,
"text": "P_\\lambda(x;q,t)"
},
{
"math_id": 27,
"text": "J_\\lambda(x;q,t)=\\prod_{s\\in D(\\lambda)}(1-q^{a(s)}t^{1+l(s)})\\cdot P_\\lambda(x;q,t)"
},
{
"math_id": 28,
"text": "D(\\lambda)"
},
{
"math_id": 29,
"text": "\\lambda"
},
{
"math_id": 30,
"text": "a(s)"
},
{
"math_id": 31,
"text": "l(s)"
},
{
"math_id": 32,
"text": "s"
},
{
"math_id": 33,
"text": "J_\\mu"
},
{
"math_id": 34,
"text": "\\widetilde{H}_\\mu(x;q,t)=t^{-n(\\mu)}J_\\mu\\left[\\frac{X}{1-t^{-1}};q,t^{-1}\\right]"
},
{
"math_id": 35,
"text": "n(\\mu)=\\sum_{i}\\mu_i\\cdot (i-1)."
},
{
"math_id": 36,
"text": "E_{\\lambda}(\\textbf{x};q,t)=\\sum_Q \\mathrm{wt}(Q)"
},
{
"math_id": 37,
"text": "L\\times n"
},
{
"math_id": 38,
"text": "\\mathrm{wt}"
},
{
"math_id": 39,
"text": "P_{\\lambda}(\\textbf{x};q,t)=\\sum_{\\mu}E_{\\mu}(x_1,...,x_n;q,t)=\\sum_{\\mu}\\sum_Q \\mathrm{wt}(Q)"
},
{
"math_id": 40,
"text": "\\mu"
}
] | https://en.wikipedia.org/wiki?curid=10354285 |
10355069 | Acyclic space | In mathematics, an acyclic space is a nonempty topological space "X" in which cycles are always boundaries, in the sense of homology theory. This implies that integral homology groups in all dimensions of "X" are isomorphic to the corresponding homology groups of a point.
In other words, using the idea of reduced homology,
formula_0
It is common to consider such a space as a nonempty space without "holes"; for example, a circle or a sphere is not acyclic but a disc
or a ball is acyclic. This condition however is weaker than asking that every closed loop in the space would bound a disc in the space, all we ask is that any closed loop—and higher dimensional analogue thereof—would bound something like a "two-dimensional surface."
The condition of acyclicity on a space "X" implies, for example, for nice spaces—say, simplicial complexes—that any continuous map of "X" to the circle or to the higher spheres is null-homotopic.
If a space "X" is contractible, then it is also acyclic, by the homotopy invariance of homology. The converse is not true, in general. Nevertheless, if "X" is an acyclic CW complex, and if the fundamental group of "X" is trivial, then "X" is a contractible space, as follows from the Whitehead theorem and the Hurewicz theorem.
Examples.
Acyclic spaces occur in topology, where they can be used to construct other, more interesting topological spaces.
For instance, if one removes a single point from a manifold "M" which is a homology sphere, one gets such a space. The homotopy groups of an acyclic space "X" do not vanish in general, because the fundamental group formula_1 need not be trivial. For example, the punctured Poincaré homology sphere is an acyclic, 3-dimensional manifold which is not contractible.
This gives a repertoire of examples, since the first homology group is the abelianization of the fundamental group. With every perfect group "G" one can associate a (canonical, terminal) acyclic space, whose fundamental group is a central extension of the given group "G".
The homotopy groups of these associated acyclic spaces are closely related to Quillen's plus construction on the classifying space "BG".
Acyclic groups.
An acyclic group is a group "G" whose classifying space "BG" is acyclic; in other words, all its (reduced) homology groups vanish, i.e., formula_2, for all formula_3. Every acyclic group is thus a perfect group, meaning its first homology group vanishes: formula_4, and in fact, a superperfect group, meaning the first two homology groups vanish: formula_5. The converse is not true: the binary icosahedral group is superperfect (hence perfect) but not acyclic. | [
{
"math_id": 0,
"text": "\\tilde{H}_i(X)=0, \\quad \\forall i\\ge -1."
},
{
"math_id": 1,
"text": "\\pi_1(X)"
},
{
"math_id": 2,
"text": "\\tilde{H}_i(G;\\mathbf{Z})=0"
},
{
"math_id": 3,
"text": "i\\ge 0"
},
{
"math_id": 4,
"text": "H_1(G;\\mathbf{Z})=0"
},
{
"math_id": 5,
"text": "H_1(G;\\mathbf{Z})=H_2(G;\\mathbf{Z})=0"
}
] | https://en.wikipedia.org/wiki?curid=10355069 |
10356 | Endothermic process | Thermodynamic process that absorbs energy from its surroundings
An endothermic process is a chemical or physical process that absorbs heat from its surroundings. In terms of thermodynamics and thermochemistry, it is a thermodynamic process with an increase in the enthalpy H (or internal energy U) of the system. In an endothermic process, the heat that a system absorbs is thermal energy transfer into the system. Thus, an endothermic reaction generally leads to an increase in the temperature of the system and a decrease in that of the surroundings.
The term was coined by 19th-century French chemist Marcellin Berthelot. The term "endothermic" comes from the Greek ἔνδον ("endon") meaning 'within' and θερμ- ("therm") meaning 'hot' or 'warm'.
An endothermic process may be a chemical process, such as dissolving ammonium nitrate () in water (), or a physical process, such as the melting of ice cubes.
The opposite of an endothermic process is an exothermic process, one that releases or "gives out" energy, usually in the form of heat and sometimes as electrical energy. Thus, "endo" in endothermic refers to energy or heat going in, and "exo" in exothermic refers to energy or heat going out. In each term (endothermic and exothermic) the prefix refers to where heat (or electrical energy) goes as the process occurs.
In chemistry.
Due to bonds breaking and forming during various processes (changes in state, chemical reactions), there is usually a change in energy. If the energy of the forming bonds is greater than the energy of the breaking bonds, then energy is released. This is known as an exothermic reaction. However, if more energy is needed to break the bonds than the energy being released, energy is taken up. Therefore, it is an endothermic reaction.
Details.
Whether a process can occur spontaneously depends not only on the enthalpy change but also on the entropy change (∆"S") and absolute temperature T. If a process is a spontaneous process at a certain temperature, the products have a lower Gibbs free energy "G" = "H" – "TS" than the reactants (an exergonic process), even if the enthalpy of the products is higher. Thus, an endothermic process usually requires a favorable entropy increase (∆"S" > 0) in the system that overcomes the unfavorable increase in enthalpy so that still ∆"G" < 0. While endothermic phase transitions into more disordered states of higher entropy, e.g. melting and vaporization, are common, spontaneous chemical processes at moderate temperatures are rarely endothermic. The enthalpy increase∆"H" ≫ 0 in a hypothetical strongly endothermic process usually results in ∆"G" = ∆"H" – "T"∆"S" > 0, which means that the process will not occur (unless driven by electrical or photon energy). An example of an endothermic and exergonic process is
<chem>C6H12O6 + 6 H2O -> 12 H2 + 6 CO2</chem>
formula_0.
Distinction between endothermic and endotherm.
The terms "endothermic" and "endotherm" are both derived from Greek ' "within" and ' "heat", but depending on context, they can have very different meanings.
In physics, thermodynamics applies to processes involving a system and its surroundings, and the term "endothermic" is used to describe a reaction where energy is taken "(with)in" by the system (vs. an "exothermic" reaction, which releases energy "outwards").
In biology, thermoregulation is the ability of an organism to maintain its body temperature, and the term "endotherm" refers to an organism that can do so from "within" by using the heat released by its internal bodily functions (vs. an "ectotherm", which relies on external, environmental heat sources) to maintain an adequate temperature.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta_r H^\\circ = +627 \\ \\text{kJ/mol},\\quad \\Delta_r G^\\circ = -31 \\ \\text{kJ/mol}"
}
] | https://en.wikipedia.org/wiki?curid=10356 |
10356246 | Standard atomic weight | Relative atomic mass as defined by IUPAC (CIAAW)
The standard atomic weight of a chemical element (symbol "A"r°(E) for element "E") is the weighted arithmetic mean of the relative isotopic masses of all isotopes of that element weighted by each isotope's abundance on Earth. For example, isotope 63Cu ("A"r = 62.929) constitutes 69% of the copper on Earth, the rest being 65Cu ("A"r = 64.927), so
formula_0
Because relative isotopic masses are dimensionless quantities, this weighted mean is also dimensionless. It can be converted into a measure of mass (with dimension ) by multiplying it with the dalton, also known as the atomic mass constant.
Among various variants of the notion of atomic weight ("A"r, also known as "relative atomic mass") used by scientists, the standard atomic weight ("A"r°) is the most common and practical. The standard atomic weight of each chemical element is determined and published by the Commission on Isotopic Abundances and Atomic Weights (CIAAW) of the International Union of Pure and Applied Chemistry (IUPAC) based on natural, stable, sources of the element. The definition specifies the use of samples from many representative sources from the Earth, so that the value can widely be used as the atomic weight for substances as they are encountered in reality—for example, in pharmaceuticals and scientific research. Non-standardized atomic weights of an element are specific to sources and samples, such as the atomic weight of carbon in a particular bone from a particular archaeological site. Standard atomic weight averages such values to the "range of atomic weights" that a chemist might expect to derive from many random samples from Earth. This range is the rationale for the "interval notation" given for some standard atomic weight values.
Of the 118 known chemical elements, 80 have stable isotopes and 84 have this Earth-environment based value. Typically, such a value is, for example helium: "A"r°(He) = . The "(2)" indicates the uncertainty in the last digit shown, to read . IUPAC also publishes "abridged values", rounded to five significant figures. For helium, "A"r, abridged°(He) = .
For fourteen elements the samples diverge on this value, because their sample sources have had a different decay history. For example, thallium (Tl) in sedimentary rocks has a different isotopic composition than in igneous rocks and volcanic gases. For these elements, the standard atomic weight is noted as an interval: "A"r°(Tl) = [204.38, 204.39]. With such an interval, for less demanding situations, IUPAC also publishes a "conventional value". For thallium, "A"r, conventional°(Tl) = .
Definition.
The "standard" atomic weight is a special value of the relative atomic mass. It is defined as the "recommended values" of relative atomic masses of sources "in the local environment of the Earth's crust and atmosphere as determined by the IUPAC Commission on Atomic Weights and Isotopic Abundances" (CIAAW). In general, values from different sources are subject to natural variation due to a different radioactive history of sources. Thus, standard atomic weights are an expectation range of atomic weights from a range of samples or sources. By limiting the sources to terrestrial origin only, the CIAAW-determined values have less variance, and are a more precise value for relative atomic masses (atomic weights) actually found and used in worldly materials.
The CIAAW-published values are used and sometimes lawfully required in mass calculations. The values have an uncertainty (noted in brackets), or are an expectation interval (see example in illustration immediately above). This uncertainty reflects natural variability in isotopic distribution for an element, rather than uncertainty in measurement (which is much smaller with quality instruments).
Although there is an attempt to cover the range of variability on Earth with standard atomic weight figures, there are known cases of mineral samples which contain elements with atomic weights that are outliers from the standard atomic weight range.
For synthetic elements the isotope formed depends on the means of synthesis, so the concept of natural isotope abundance has no meaning. Therefore, for synthetic elements the total nucleon count of the most stable isotope (i.e., the isotope with the longest half-life) is listed in brackets, in place of the standard atomic weight. When the term "atomic weight" is used in chemistry, usually it is the more specific standard atomic weight that is implied. It is standard atomic weights that are used in periodic tables and many standard references in ordinary terrestrial chemistry.
Lithium represents a unique case where the natural abundances of the isotopes have in some cases been found to have been perturbed by human isotopic separation activities to the point of affecting the uncertainty in its standard atomic weight, even in samples obtained from natural sources, such as rivers.
Terrestrial definition.
An example of why "conventional terrestrial sources" must be specified in giving standard atomic weight values is the element argon. Between locations in the Solar System, the atomic weight of argon varies as much as 10%, due to extreme variance in isotopic composition. Where the major source of argon is the decay of 40K in rocks, 40Ar will be the dominant isotope. Such locations include the planets Mercury and Mars, and the moon Titan. On Earth, the ratios of the three isotopes 36Ar : 38Ar : 40Ar are approximately 5 : 1 : 1600, giving terrestrial argon a standard atomic weight of 39.948(1).
However, such is not the case in the rest of the universe. Argon produced directly, by stellar nucleosynthesis, is dominated by the alpha-process nuclide 36Ar. Correspondingly, solar argon contains 84.6% 36Ar (according to solar wind measurements), and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1. The atomic weight of argon in the Sun and most of the universe, therefore, would be only approximately 36.3.
Causes of uncertainty on Earth.
Famously, the published atomic weight value comes with an uncertainty. This uncertainty (and related: precision) follows from its definition, the source being "terrestrial and stable". Systematic causes for uncertainty are:
These three uncertainties are accumulative. The published value is a result of all these.
Determination of relative atomic mass.
Modern relative atomic masses (a term specific to a given element sample) are calculated from measured values of atomic mass (for each nuclide) and isotopic composition of a sample. Highly accurate atomic masses are available for virtually all non-radioactive nuclides, but isotopic compositions are both harder to measure to high precision and more subject to variation between samples. For this reason, the relative atomic masses of the 22 mononuclidic elements (which are the same as the isotopic masses for each of the single naturally occurring nuclides of these elements) are known to especially high accuracy.
The calculation is exemplified for silicon, whose relative atomic mass is especially important in metrology. Silicon exists in nature as a mixture of three isotopes: 28Si, 29Si and 30Si. The atomic masses of these nuclides are known to a precision of one part in 14 billion for 28Si and about one part in one billion for the others. However the range of natural abundance for the isotopes is such that the standard abundance can only be given to about ±0.001% (see table).
The calculation is
"A"r(Si) = (27.97693 × 0.922297) + (28.97649 × 0.046832) + (29.97377 × 0.030872) = 28.0854
The estimation of the uncertainty is complicated, especially as the sample distribution is not necessarily symmetrical: the IUPAC standard relative atomic masses are quoted with estimated symmetrical uncertainties, and the value for silicon is 28.0855(3). The relative standard uncertainty in this value is 1×10–5 or 10 ppm. To further reflect this natural variability, in 2010, IUPAC made the decision to list the relative atomic masses of 10 elements as an interval rather than a fixed number.
Naming controversy.
The use of the name "atomic weight" has attracted a great deal of controversy among scientists. Objectors to the name usually prefer the term "relative atomic mass" (not to be confused with atomic mass). The basic objection is that atomic weight is not a weight, that is the force exerted on an object in a gravitational field, measured in units of force such as the newton or poundal.
In reply, supporters of the term "atomic weight" point out (among other arguments) that:
It could be added that atomic weight is often not truly "atomic" either, as it does not correspond to the property of any individual atom. The same argument could be made against "relative atomic mass" used in this sense.
Published values.
IUPAC publishes one formal value for each stable chemical element, called the "standard atomic weight".Table 1 Any updates are published biannually (in uneven years). In 2015, the atomic weight of ytterbium was updated. Per 2017, 14 atomic weights were changed, including argon changing from single number to interval value.
The value published can have an uncertainty, like for neon: , or can be an interval, like for boron: [10.806, 10.821].
Next to these 84 values, IUPAC also publishes "abridged" values (up to five digits per number only), and for the twelve interval values, "conventional" values (single number values).
Symbol "A"r is a relative atomic mass, for example from a specific sample. To be specific, the standard atomic weight can be noted as "A"r°(E), where (E) is the element symbol.
Abridged atomic weight.
The abridged atomic weight, also published by CIAAW, is derived from the standard atomic weight, reducing the numbers to five digits (five significant figures). The name does not say 'rounded'.
Interval borders are rounded "downwards" for the first (low most) border, and "upwards" for the "upward" (upmost) border. This way, the more precise original interval is fully covered.Table 2
Examples:
Conventional atomic weight.
Fourteen chemical elements – hydrogen, lithium, boron, carbon, nitrogen, oxygen, magnesium, silicon, sulfur, chlorine, argon, bromine, thallium, and lead – have a standard atomic weight that is defined not as a single number, but as an interval. For example, hydrogen has "A"r°(H) = [1.00 784, 1.00811]. This notation states that the various sources on Earth have substantially different isotopic constitutions, and that the uncertainties in all of them are just covered by the two numbers. For these elements, there is not an 'Earth average' constitution, and the 'right' value is not its middle (which would be 1.007975 for hydrogen, with an uncertainty of (±0.000135) that would make it just cover the interval). However, for situations where a less precise value is acceptable, for example in trade, CIAAW has published a single-number conventional atomic weight. For hydrogen, "A"r, conventional°(H) = 1.008.Table 3
A formal short atomic weight.
By using the abridged value, and the conventional value for the fourteen interval values, a short IUPAC-defined value (5 digits plus uncertainty) can be given for all stable elements. In many situations, and in periodic tables, this may be sufficiently detailed.Tables 2 and 3
List of atomic weights.
<templatestyles src="Reflist/styles.css" />
In the periodic table.
Primordial From decay Synthetic Border shows natural occurrence of the element
Standard atomic weight "A"r, std(E)<templatestyles src="Plainlist/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A_\\text{r}\\text{°}(_\\text{29}\\text{Cu})=0.69\\times62.929+0.31\\times64.927=63.55."
}
] | https://en.wikipedia.org/wiki?curid=10356246 |
1035712 | Counts per minute | Measurement for ionizing radiation
The measurement of ionizing radiation is sometimes expressed as being a "rate" of counts per unit time as registered by a radiation monitoring instrument, for which counts per minute (cpm) and counts per second (cps) are commonly used quantities.
Count rate measurements are associated with the detection of particles, such as alpha particles and beta particles. However, for gamma ray and X-ray dose measurements a unit such as the sievert is normally used.
Both cpm and cps are the rate of detection events registered by the measuring instrument, not the rate of emission from the source of radiation. For radioactive decay measurements it must not be confused with disintegrations per unit time (dpm), which represents the rate of atomic disintegration events at the source of the radiation.
Count rates.
The count rates of cps and cpm are generally accepted and convenient practical rate measurements. They are not SI units, but are "de facto" radiological units of measure in widespread use.
Counts per minute (abbreviated to cpm) is a measure of the detection rate of ionization events per minute. Counts are only manifested in the reading of the measuring instrument, and are not an absolute measure of the strength of the source of radiation. Whilst an instrument will display a rate of cpm, it does not have to detect counts for one minute, as it can infer the total per minute from a smaller sampling period.
Counts per second (abbreviated to cps) is used for measurements when higher count rates are being encountered, or if hand held radiation survey instruments are being used which can be subject to rapid changes of count rate when the instrument is moved over a source of radiation in a survey area.
Conversion to dose rate.
Count rate does not universally equate to dose rate, and there is no simple universal conversion factor. Any conversions are instrument-specific.
Counts is the number of events detected, but dose rate relates to the amount of ionising energy "deposited" in the sensor of the radiation detector. The conversion calculation is dependent on the radiation energy levels, the type of radiation being detected and the radiometric characteristic of the detector.
The continuous current ion chamber instrument can easily measure dose but cannot measure counts. However the Geiger counter can measure counts but not the energy of the radiation, so a technique known as energy compensation of the detector tube is used to produce a dose reading. This modifies the tube characteristic so each count resulting from a particular radiation type is equivalent to a specific quantity of deposited dose.
More can be found on radiation dose and dose rate at absorbed dose and equivalent dose.
Count rates versus disintegration rates.
Disintegrations per minute (dpm) and disintegrations per second (dps) are measures of the activity of the source of radioactivity. The SI unit of radioactivity, the becquerel (Bq), is equivalent to one disintegration per second. This unit should not be confused with cps, which is the number of counts received by an instrument from the source. The quantity dps (dpm) is the number of atoms that have decayed in one second (one minute), not the number of atoms that have been measured as decayed.
The efficiency of the radiation detector and its relative position to the source of radiation must be accounted for when relating cpm to dpm. This is known as the counting efficiency. The factors affecting counting efficiency are shown in the accompanying diagram.
Surface emission rate.
The surface emission rate (SER) is used as a measure of the rate of particles emitted from a radioactive source which is being used as a calibration standard. When the source is of plate or planar construction and the radiation of interest is emitting from one face, it is known as "formula_0 emission". When the emissions are from a "point source" and the radiation of interest is emitting from all faces, it is known as "formula_1 emission". These terms correspond to the spherical geometry over which the emissions are being measured.
The SER is the measured emission rate from the source and is related to, but different from, the source activity. This relationship is affected by the type of radiation being emitted and the physical nature of the radioactive source. Sources with formula_1 emissions will nearly always have a lower SER than the Bq activity due to self-shielding within the active layer of the source. Sources with formula_0 emissions suffer from self-shielding or backscatter, so the SER is variable, and individually can be greater than or less than 50% of the Bq activity, depending on construction and the particle types being measured. Backscatter will reflect particles off the backing plate of the active layer and will increase the rate; beta particle plate sources usually have a significant backscatter, whereas alpha plate sources usually have no backscatter. However alpha particles are easily attenuated if the active layer is made too thick. The SER is established by measurement using calibrated equipment, normally traceable to a national standard source of radiation.
Ratemeters and scalers.
In radiation protection practice, an instrument which reads a rate of detected events is normally known as a ratemeter, which was first developed by R D Robley Evans in 1939. This mode of operation provides real-time dynamic indication of the radiation rate, and the principle has found widespread application in radiation survey meters used in health physics.
An instrument which totalises the events detected over a time period is known as a scaler. This colloquial name comes from the early days of automatic radiation counting, when a pulse-dividing circuit was required to "scale down" a high count rate to a speed which mechanical counters could register. This technique was developed by C E Wynn-Williams at The Cavendish Laboratory and first published in 1932. The original counters used a cascade of "Eccles-Jordan" divide-by-two circuits, today known as flip flops. Early count readings were therefore binary numbers and had to be manually re-calculated into decimal values.
Later, with the development of electronic indicators, which started with the introduction of the Dekatron readout tube in the 1950s, and culminating in the modern digital indicator, totalised readings came to be directly indicated in decimal notation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2\\pi"
},
{
"math_id": 1,
"text": "4\\pi"
}
] | https://en.wikipedia.org/wiki?curid=1035712 |
10359011 | Array gain | In array antenna systems, array gain is the measure of the improvement in signal-to-noise ratio (SNR) achieved by the array. It is calculated as the SNR of the array output signal divided by the SNR of the array input signal. Intuitively, the array gain is realized by the fact that the signal is coherently added from "N" array elements, while the noise is incoherently added from those same elements. If the noise is presumed to be uncorrelated the array gain is ≤ "N", the number of array elements, and the array gain reduces to the inverse of the square of the 2-norm of the array weight vector, under the assumption that the weight vector is normalized such that its sum is unity, so that
formula_0
For a uniformly weighted array (un-tapered such that all elements contribute equally), the array gain is equal to "N".
Array gain is not the same thing as "gain," "power gain," "directive gain," or "directivity," but if the noise environment around the array is isotropic and the array input signal is from an isotropic radiator, then array gain is equal to gain defined in the usual way from the array beam pattern. The terms "power gain" and "directive gain" are deprecated by IEEE.
Applications.
In radio astronomy it is difficult to achieve a good signal-to-noise ratio because of background noise from modern communications. Even for strong astronomical radio emission it is typical for SNR levels to be below 0 decibels. To counter this problem exposure of the antenna to the source over large periods of time are needed just as in visible sky viewing. Array gain is achieved using multiple, even dozens of radio receivers to collect as much signal as possible.
Robots equipped with antenna arrays have better communication using array gain to eradicate dead spots and reduce interferences.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A = {1 \\over \\vec{w}^H\\vec{w}}"
}
] | https://en.wikipedia.org/wiki?curid=10359011 |
1035915 | Sieve theory | Ways to estimate the size of sifted sets of integers
Sieve theory is a set of general techniques in number theory, designed to count, or more realistically to estimate the size of, sifted sets of integers. The prototypical example of a sifted set is the set of prime numbers up to some prescribed limit "X". Correspondingly, the prototypical example of a sieve is the sieve of Eratosthenes, or the more general Legendre sieve. The direct attack on prime numbers using these methods soon reaches apparently insuperable obstacles, in the way of the accumulation of error terms. In one of the major strands of number theory in the twentieth century, ways were found of avoiding some of the difficulties of a frontal attack with a naive idea of what sieving should be.
One successful approach is to approximate a specific sifted set of numbers (e.g. the set of prime numbers) by another, simpler set (e.g. the set of almost prime numbers), which is typically somewhat larger than the original set, and easier to analyze. More sophisticated sieves also do not work directly with sets "per se", but instead count them according to carefully chosen weight functions on these sets (options for giving some elements of these sets more "weight" than others). Furthermore, in some modern applications, sieves are used not to estimate the size of a sifted
set, but to produce a function that is large on the set and mostly small outside it, while being easier to analyze than the characteristic function of the set.
The term "sieve" was first used by the norwegian mathematician Viggo Brun in 1915. However Brun's work was inspired by the works of the french mathematician Jean Merlin who died in the World War I and only two of his manuscripts survived.
Basic sieve theory.
For information on notation see at the end.
We start with some countable sequence of non-negative numbers formula_0. In the most basic case this sequence is just the indicator function formula_1 of some set formula_2 we want to sieve. However this abstraction allows for more general situations. Next we introduce a general set of prime numbers called the "sifting range" formula_3 and their product up to formula_4 as a function formula_5.
The goal of sieve theory is to estimate the "sifting function"
formula_6
In the case of formula_1 this just counts the cardinality of a subset formula_7 of numbers, that are coprime to the prime factors of formula_8.
The inclusion–exclusion principle.
For formula_9 define
formula_10
and for each prime formula_11 denote the set formula_12 and let formula_13 be the cardinality. Let now formula_14 be some set of primes.
If one wants to calculate the cardinality of formula_15, one can apply the inclusion–exclusion principle. This algorithm works like this: first one removes from the cardinality of formula_16 the cardinality formula_17 and formula_18. Now since one has removed the numbers that are divisble by formula_19 and formula_20 twice, one has to add the cardinality formula_21. In the next step one removes formula_22 and adds formula_23 and formula_24 again. Additionally one has now to remove formula_25, i.e. the cardinality of all numbers divisible by formula_26 and formula_27. This leads to the inclusion–exclusion principle
formula_28
Legendre's identity.
We can rewrite the sifting function with "Legendre's identity"
formula_29
by using the Möbius function and some functions formula_30 induced by the elements of formula_9
formula_31
Example.
Let formula_32 and formula_33. The Möbius function is negative for every prime, so we get
formula_34
Approximation of the congruence sum.
One assumes then that formula_30 can be written as
formula_35
where formula_36 is a "density", meaning a multiplicative function such that
formula_37
and formula_38 is an approximation of formula_39 and formula_40 is some remainder term. The sifting function becomes
formula_41
or in short
formula_42
One tries then to estimate the sifting function by finding upper and lower bounds for formula_43 respectively formula_44 and formula_45.
The partial sum of the sifting function alternately over- and undercounts, so the remainder term will be huge. Brun's idea to improve this was to replace formula_46 in the sifting function with a weight sequence formula_47 consisting of restricted Möbius functions. Choosing two appropriate sequences formula_48 and formula_49 and denoting the sifting functions with formula_50 and formula_51, one can get lower and upper bounds for the original sifting functions
formula_52
Since formula_53 is multiplicative, one can also work with the identity
formula_54
Notation: a word of caution regarding the notation, in the literature one often identifies the set of sequences formula_55 with the set formula_56 itself. This means one writes formula_57 to define a sequence formula_0. Also in the literature the sum formula_30 is sometimes notated as the cardinality formula_58 of some set formula_30, while we have defined formula_30 to be already the cardinality of this set. We used
formula_59 to denote the set of primes and formula_60 for the greatest common divisor of formula_61 and formula_62.
Types of sieving.
Modern sieves include the Brun sieve, the Selberg sieve, the Turán sieve, the large sieve, the larger sieve and the Goldston-Pintz-Yıldırım sieve. One of the original purposes of sieve theory was to try to prove conjectures in number theory such as the twin prime conjecture. While the original broad aims of sieve theory still are largely unachieved, there have been some partial successes, especially in combination with other number theoretic tools. Highlights include:
Techniques of sieve theory.
The techniques of sieve theory can be quite powerful, but they seem to be limited by an obstacle known as the "parity problem", which roughly speaking asserts that sieve theory methods have extreme difficulty distinguishing between numbers with an odd number of prime factors and numbers with an even number of prime factors. This parity problem is still not very well understood.
Compared with other methods in number theory, sieve theory is comparatively "elementary", in the sense that it does not necessarily require sophisticated concepts from either algebraic number theory or analytic number theory. Nevertheless, the more advanced sieves can still get very intricate and delicate (especially when combined with other deep techniques in number theory), and entire textbooks have been devoted to this single subfield of number theory; a classic reference is and a more modern text is .
The sieve methods discussed in this article are not closely related to the integer factorization sieve methods such as the quadratic sieve and the general number field sieve. Those factorization methods use the idea of the sieve of Eratosthenes to determine efficiently which members of a list of numbers can be completely factored into small primes. | [
{
"math_id": 0,
"text": "\\mathcal{A}=(a_n)"
},
{
"math_id": 1,
"text": "a_n=1_{A}(n)"
},
{
"math_id": 2,
"text": "A=\\{s:s\\leq x\\}"
},
{
"math_id": 3,
"text": "\\mathcal{P}\\subseteq \\mathbb{P}"
},
{
"math_id": 4,
"text": "z"
},
{
"math_id": 5,
"text": "P(z)=\\prod\\limits_{p\\in\\mathcal{P}, p<z}p"
},
{
"math_id": 6,
"text": "S(\\mathcal{A},\\mathcal{P},z)=\\sum\\limits_{n\\leq x, \\text{gcd}(n,P(z))=1}a_n."
},
{
"math_id": 7,
"text": "A_{\\operatorname{sift}}\\subseteq A"
},
{
"math_id": 8,
"text": "P(z)"
},
{
"math_id": 9,
"text": "\\mathcal{P}"
},
{
"math_id": 10,
"text": "A_{\\operatorname{sift}}:=\\{a\\in A|(a,p_1\\cdots p_k)=1\\}, \\quad p_1,\\dots,p_k\\in\\mathcal{P}"
},
{
"math_id": 11,
"text": "p\\in \\mathcal{P}"
},
{
"math_id": 12,
"text": "E_p=\\{pn:n\\in\\mathbb{N}\\}"
},
{
"math_id": 13,
"text": "|E_p|"
},
{
"math_id": 14,
"text": "\\mathcal{P}:=\\{2,3,5,7,11,13\\dots\\}"
},
{
"math_id": 15,
"text": "A_{\\operatorname{sift}}"
},
{
"math_id": 16,
"text": "|A|"
},
{
"math_id": 17,
"text": "|E_2|"
},
{
"math_id": 18,
"text": "|E_3|"
},
{
"math_id": 19,
"text": "2"
},
{
"math_id": 20,
"text": "3"
},
{
"math_id": 21,
"text": "|E_6|"
},
{
"math_id": 22,
"text": "|E_5|"
},
{
"math_id": 23,
"text": "|E_{10}|"
},
{
"math_id": 24,
"text": "|E_{15}|"
},
{
"math_id": 25,
"text": "|E_{30}|"
},
{
"math_id": 26,
"text": "2,3"
},
{
"math_id": 27,
"text": "5"
},
{
"math_id": 28,
"text": "|A_{\\operatorname{sift}}|=|A|-|E_2|-|E_3|+|E_6|-|E_5|+|E_{10}|+|E_{15}|-|E_{30}|+\\cdots"
},
{
"math_id": 29,
"text": "S(\\mathcal{A},\\mathcal{P},z)=\\sum\\limits_{d\\mid P(z)}\\mu(d)A_d(x)"
},
{
"math_id": 30,
"text": "A_d(x)"
},
{
"math_id": 31,
"text": "A_d(x)=\\sum\\limits_{n\\leq x, n\\equiv 0\\pmod{d}}a_n."
},
{
"math_id": 32,
"text": "z=7"
},
{
"math_id": 33,
"text": "\\mathcal{P}=\\mathbb{P}"
},
{
"math_id": 34,
"text": "\\begin{align}\nS(\\mathcal{A},\\mathbb{P},7)&=A_1(x)-A_2(x)-A_3(x)-A_5(x)+A_6(x)+A_{10}(x)+A_{15}(x)-A_{30}(x).\n\\end{align}"
},
{
"math_id": 35,
"text": "A_d(x)=g(d)X+r_d(x)"
},
{
"math_id": 36,
"text": "g(d)"
},
{
"math_id": 37,
"text": "g(1)=1,\\qquad 0\\leq g(p)<1 \\qquad p\\in \\mathbb{P}"
},
{
"math_id": 38,
"text": "X"
},
{
"math_id": 39,
"text": "A_1(x)"
},
{
"math_id": 40,
"text": "r_d(x)"
},
{
"math_id": 41,
"text": "S(\\mathcal{A},\\mathcal{P},z)=X\\sum\\limits_{d\\mid P(z)}\\mu(d)g(d)+\\sum\\limits_{d\\mid P(z)}\\mu(d)r_d(x)"
},
{
"math_id": 42,
"text": "S(\\mathcal{A},\\mathcal{P},z)=XG(x,z)+R(x,z)."
},
{
"math_id": 43,
"text": "S"
},
{
"math_id": 44,
"text": "G"
},
{
"math_id": 45,
"text": "R"
},
{
"math_id": 46,
"text": "\\mu(d)"
},
{
"math_id": 47,
"text": "(\\lambda_d)"
},
{
"math_id": 48,
"text": "(\\lambda_d^{-})"
},
{
"math_id": 49,
"text": "(\\lambda_d^{+})"
},
{
"math_id": 50,
"text": "S^{-}"
},
{
"math_id": 51,
"text": "S^{+}"
},
{
"math_id": 52,
"text": "S^{-}\\leq S\\leq S^{+}."
},
{
"math_id": 53,
"text": "g"
},
{
"math_id": 54,
"text": "\\sum\\limits_{d\\mid n}\\mu(d)g(d)=\\prod\\limits_{\\begin{array}{c} p|n ;\\; p\\in\\mathbb{P}\\end{array}}(1-g(p)),\\quad\\forall\\; n\\in\\mathbb{N}."
},
{
"math_id": 55,
"text": "\\mathcal{A}"
},
{
"math_id": 56,
"text": "A"
},
{
"math_id": 57,
"text": "\\mathcal{A}=\\{s:s\\leq x\\}"
},
{
"math_id": 58,
"text": "|A_d(x)|"
},
{
"math_id": 59,
"text": "\\mathbb{P}"
},
{
"math_id": 60,
"text": "(a,b)"
},
{
"math_id": 61,
"text": "a"
},
{
"math_id": 62,
"text": "b"
},
{
"math_id": 63,
"text": "N^\\varepsilon"
},
{
"math_id": 64,
"text": "\\varepsilon"
},
{
"math_id": 65,
"text": "N^{1/2}"
},
{
"math_id": 66,
"text": "a^2 + b^4"
}
] | https://en.wikipedia.org/wiki?curid=1035915 |
10361630 | Affine Hecke algebra | In mathematics, an affine Hecke algebra is the algebra associated to an affine Weyl group, and can be used to prove Macdonald's constant term conjecture for Macdonald polynomials.
Definition.
Let formula_0 be a Euclidean space of a finite dimension and formula_1 an affine root system on formula_0. An affine Hecke algebra is a certain associative algebra that deforms the group algebra formula_2 of the Weyl group formula_3 of formula_1 (the affine Weyl group). It is usually denoted by formula_4, where formula_5 is multiplicity function that plays the role of deformation parameter. For formula_6 the affine Hecke algebra formula_4 indeed reduces to formula_2.
Generalizations.
Ivan Cherednik introduced generalizations of affine Hecke algebras, the so-called double affine Hecke algebra (usually referred to as DAHA). Using this he was able to give a proof of Macdonald's constant term conjecture for Macdonald polynomials (building on work of Eric Opdam). Another main inspiration for Cherednik to consider the double affine Hecke algebra was the quantum KZ equations. | [
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "\\Sigma"
},
{
"math_id": 2,
"text": "\\mathbb{C}[W]"
},
{
"math_id": 3,
"text": " W"
},
{
"math_id": 4,
"text": " H(\\Sigma,q)"
},
{
"math_id": 5,
"text": "q:\\Sigma\\rightarrow \\mathbb{C}"
},
{
"math_id": 6,
"text": "q\\equiv 1"
}
] | https://en.wikipedia.org/wiki?curid=10361630 |
10361669 | Free lattice | Mathematical concept
In mathematics, in the area of order theory, a free lattice is the free object corresponding to a lattice. As free objects, they have the universal property.
Formal definition.
Because the concept of a lattice can be axiomatised in terms of two operations formula_0 and formula_1 satisfying certain identities, the category of all lattices constitute a variety (universal algebra), and thus there exist (by general principles of universal algebra) free objects within this category: lattices where "only" those relations hold which follow from the general axioms.
These free lattices may be characterised using the relevant universal property. Concretely, free lattice is a functor formula_2 from sets to lattices, assigning to each set formula_3 the free lattice formula_4 equipped with a set map formula_5 assigning to each formula_6 the corresponding element formula_7. The universal property of these is that there for any map formula_8 from formula_3 to some arbitrary lattice formula_9 exists a unique lattice homomorphism formula_10 satisfying formula_11, or as a commutative diagram:
formula_12
The functor formula_2 is left adjoint to the forgetful functor from lattices to their underlying sets.
It is frequently possible to prove things about the free lattice directly using the universal property, but such arguments tend to be rather abstract, so a concrete construction provides a valuable alternative presentation.
Semilattices.
In the case of semilattices, an explicit construction of the "free semilattice" formula_13 is straightforward to give; this helps illustrate several features of the definition by way of universal property. Concretely, the free semilattice formula_13 may be realised as the set of all finite nonempty subsets of formula_3, with ordinary set union as the join operation formula_1. The map formula_14 maps elements of formula_3 to singleton sets, i.e., formula_15 for all formula_6. For any semilattice formula_9 and any set map formula_8, the corresponding universal morphism formula_16 is given by
formula_17
where formula_18 denotes the semilattice operation in formula_9.
This form of formula_19 is forced by the universal property: any formula_20 can be written as a finite union of elements on the form formula_21 for some formula_6, the equality in the universal property says formula_22, and finally the homomorphism status of formula_19 implies formula_23 for all formula_24. Any extension of formula_19 to "infinite" subsets of formula_3 (if there even is one) need however not be "uniquely" determined by these conditions, so there cannot in formula_13 be any elements corresponding to infinite subsets of formula_3.
Lower semilattices.
It is similarly possible to define a free functor formula_25 for lower semilattices, but the combination formula_26 fails to produce the free lattice formula_4 in several ways, because formula_25 treats formula_13 as just a set:
The actual structure of the free lattice formula_4 is considerably more intricate than that of the free semilattice.
Word problem.
The word problem for free lattices has some interesting aspects. Consider the case of bounded lattices, i.e. algebraic structures with the two binary operations ∨ and ∧ and the two constants (nullary operations) 0 and 1. The set of all well-formed expressions that can be formulated using these operations on elements from a given set of generators "X" will be called W("X"). This set of words contains many expressions that turn out to denote equal values in every lattice. For example, if "a" is some element of "X", then "a" ∨ 1 = 1 and "a" ∧ 1 ="a". The word problem for free bounded lattices is the problem of determining which of these elements of W("X") denote the same element in the free bounded lattice "FX", and hence in every bounded lattice.
The word problem may be resolved as follows. A relation ≤~ on W("X") may be defined inductively by setting "w" ≤~ "v" if and only if one of the following holds:
This defines a preorder ≤~ on W("X"), so an equivalence relation can be defined by "w" ~ "v" when "w" ≤~ "v" and "v" ≤~ "w". One may then show that the partially ordered quotient space W("X")/~ is the free bounded lattice "FX". The equivalence classes of W("X")/~ are the sets of all words "w" and "v" with "w" ≤~ "v" and "v" ≤~ "w". Two well-formed words "v" and "w" in W("X") denote the same value in every bounded lattice if and only if "w" ≤~ "v" and "v" ≤~ "w"; the latter conditions can be effectively decided using the above inductive definition. The table shows an example computation to show that the words "x"∧"z" and "x"∧"z"∧("x"∨"y") denote the same value in every bounded lattice. The case of lattices that are not bounded is treated similarly, omitting rules 2. and 3. in the above construction.
The solution of the word problem on free lattices has several interesting corollaries. One is that the free lattice of a three-element set of generators is infinite. In fact, one can even show that every free lattice on three generators contains a sublattice which is free for a set of four generators. By induction, this eventually yields a sublattice free on countably many generators. This property is reminiscent of SQ-universality in groups.
The proof that the free lattice in three generators is infinite proceeds by inductively defining
"p""n"+1 = "x" ∨ ("y" ∧ ("z" ∨ ("x" ∧ ("y" ∨ ("z" ∧ "p""n")))))
where "x", "y", and "z" are the three generators, and "p"0 = "x". One then shows, using the inductive relations of the word problem, that "p""n"+1 is strictly greater
than "p""n", and therefore all infinitely many words "p""n" evaluate to different values in the free lattice "FX".
The complete free lattice.
Another corollary is that the complete free lattice (on three or more generators) "does not exist", in the sense that it is a proper class. The proof of this follows from the word problem as well. To define a complete lattice in terms of relations, it does not suffice to use the finitary relations of meet and join; one must also have infinitary relations defining the meet and join of infinite subsets. For example, the infinitary relation corresponding to "join" may be defined as
formula_30
Here, "f" is a map from the elements of a cardinal "N" to "FX"; the operator formula_31 denotes the supremum, in that it takes the image of "f" to its join. This is, of course, identical to "join" when "N" is a finite number; the point of this definition is to define join as a relation, even when "N" is an infinite cardinal.
The axioms of the pre-ordering of the word problem may be adjoined by the two infinitary operators corresponding to meet and join. After doing so, one then extends the definition of formula_32 to an ordinally indexed formula_33 given by
formula_34
when formula_35 is a limit ordinal. Then, as before, one may show that formula_36 is strictly greater than formula_33. Thus, there are at least as many elements in the complete free lattice as there are ordinals, and thus, the complete free lattice cannot exist as a set, and must therefore be a proper class.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\wedge"
},
{
"math_id": 1,
"text": "\\vee"
},
{
"math_id": 2,
"text": "F"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "F(X)"
},
{
"math_id": 5,
"text": "\\eta\\colon X \\longrightarrow F(X)"
},
{
"math_id": 6,
"text": "x \\in X"
},
{
"math_id": 7,
"text": "\\eta(x) \\in F(X)"
},
{
"math_id": 8,
"text": "f\\colon X \\longrightarrow L"
},
{
"math_id": 9,
"text": "L"
},
{
"math_id": 10,
"text": "\\tilde{f}\\colon F(X) \\longrightarrow L"
},
{
"math_id": 11,
"text": " f = \\tilde{f} \\circ \\eta"
},
{
"math_id": 12,
"text": "\n \\begin{array}{cccc}\n X & \\stackrel{\\eta}{\\longrightarrow} & F(X) \\\\\n & \\!\\forall f \\searrow & \\Bigg\\downarrow\\exists_1 \\tilde{f} \\!\\!\\!\\!\\!\\!\\!\\!\\!\\! &\\quad\\\\\n & & L\n \\end{array}\n"
},
{
"math_id": 13,
"text": "F_\\vee(X)"
},
{
"math_id": 14,
"text": "\\eta\\colon X \\longrightarrow F_\\vee(X)"
},
{
"math_id": 15,
"text": "\\eta(x) = \\{x\\}"
},
{
"math_id": 16,
"text": "\\tilde{f}\\colon F_\\vee(X) \\longrightarrow L"
},
{
"math_id": 17,
"text": "\n \\tilde{f}(S) = \\bigvee_{x \\in S} f(x)\n \\qquad\\text{for all } S \\in F_\\vee(X)\n"
},
{
"math_id": 18,
"text": "\\bigvee"
},
{
"math_id": 19,
"text": "\\tilde{f}"
},
{
"math_id": 20,
"text": "S \\in F_\\vee(X)"
},
{
"math_id": 21,
"text": "\\eta(x)"
},
{
"math_id": 22,
"text": " \\tilde{f}\\bigl( \\eta(x) \\bigr) = f(x) "
},
{
"math_id": 23,
"text": "\\tilde{f}(S_1 \\cup S_2) = \\tilde{f}(S_1) \\vee \\tilde{f}(S_2)"
},
{
"math_id": 24,
"text": "S_1,S_2 \\in F_\\vee(X)"
},
{
"math_id": 25,
"text": "F_\\wedge"
},
{
"math_id": 26,
"text": "F_\\wedge(F_\\vee(X))"
},
{
"math_id": 27,
"text": "a"
},
{
"math_id": 28,
"text": " a \\vee b "
},
{
"math_id": 29,
"text": " a \\wedge (a \\vee b) = a "
},
{
"math_id": 30,
"text": "\\operatorname{sup}_N:(f:N\\to FX)"
},
{
"math_id": 31,
"text": "\\operatorname{sup}_N"
},
{
"math_id": 32,
"text": "p_n"
},
{
"math_id": 33,
"text": "p_\\alpha"
},
{
"math_id": 34,
"text": "p_\\alpha = \\operatorname{sup}\\{p_\\beta \\mid \\beta < \\alpha\\}"
},
{
"math_id": 35,
"text": "\\alpha"
},
{
"math_id": 36,
"text": "p_{\\alpha+1}"
}
] | https://en.wikipedia.org/wiki?curid=10361669 |
1036303 | Variation of parameters | Procedure for solving differential equationsIn mathematics, variation of parameters, also known as variation of constants, is a general method to solve inhomogeneous linear ordinary differential equations.
For first-order inhomogeneous linear differential equations it is usually possible to find solutions via integrating factors or undetermined coefficients with considerably less effort, although those methods leverage heuristics that involve guessing and do not work for all inhomogeneous linear differential equations.
Variation of parameters extends to linear partial differential equations as well, specifically to inhomogeneous problems for linear evolution equations like the heat equation, wave equation, and vibrating plate equation. In this setting, the method is more often known as Duhamel's principle, named after Jean-Marie Duhamel (1797–1872) who first applied the method to solve the inhomogeneous heat equation. Sometimes variation of parameters itself is called Duhamel's principle and vice versa.
History.
The method of variation of parameters was first sketched by the Swiss mathematician Leonhard Euler (1707–1783), and later completed by the Italian-French mathematician Joseph-Louis Lagrange (1736–1813).
A forerunner of the method of variation of a celestial body's orbital elements appeared in Euler's work in 1748, while he was studying the mutual perturbations of Jupiter and Saturn. In his 1749 study of the motions of the earth, Euler obtained differential equations for the orbital elements. In 1753, he applied the method to his study of the motions of the moon.
Lagrange first used the method in 1766. Between 1778 and 1783, he further developed the method in two series of memoirs: one on variations in the motions of the planets and another on determining the orbit of a comet from three observations. During 1808–1810, Lagrange gave the method of variation of parameters its final form in a third series of papers.
Description of method.
Given an ordinary non-homogeneous linear differential equation of order "n"
Let formula_0 be a basis of the vector space of solutions of the corresponding homogeneous equation
Then a particular solution to the non-homogeneous equation is given by
where the formula_1 are differentiable functions which are assumed to satisfy the conditions
Starting with (iii), repeated differentiation combined with repeated use of (iv) gives
One last differentiation gives
By substituting (iii) into (i) and applying (v) and (vi) it follows that
The linear system (iv and vii) of "n" equations can then be solved using Cramer's rule yielding
formula_2
where formula_3 is the Wronskian determinant of the basis formula_0 and formula_4 is the Wronskian determinant of the basis with the "i"-th column replaced by formula_5
The particular solution to the non-homogeneous equation can then be written as
formula_6
Intuitive explanation.
Consider the equation of the forced dispersionless spring, in suitable units:
formula_7
Here "x" is the displacement of the spring from the equilibrium "x"
0, and "F"("t") is an external applied force that depends on time. When the external force is zero, this is the homogeneous equation (whose solutions are linear combinations of sines and cosines, corresponding to the spring oscillating with constant total energy).
We can construct the solution physically, as follows. Between times formula_8 and formula_9, the momentum corresponding to the solution has a net change formula_10 (see: Impulse (physics)). A solution to the inhomogeneous equation, at the present time "t" > 0, is obtained by linearly superposing the solutions obtained in this manner, for "s" going between 0 and t.
The homogeneous initial-value problem, representing a small impulse formula_10 being added to the solution at time formula_8, is
formula_11
The unique solution to this problem is easily seen to be formula_12. The linear superposition of all of these solutions is given by the integral:
formula_13
To verify that this satisfies the required equation:
formula_14
formula_15
as required (see: Leibniz integral rule).
The general method of variation of parameters allows for solving an inhomogeneous linear equation
formula_16
by means of considering the second-order linear differential operator "L" to be the net force, thus the total impulse imparted to a solution between time "s" and "s"+"ds" is "F"("s")"ds". Denote by formula_17 the solution of the homogeneous initial value problem
formula_18
Then a particular solution of the inhomogeneous equation is
formula_19
the result of linearly superposing the infinitesimal homogeneous solutions. There are generalizations to higher order linear differential operators.
In practice, variation of parameters usually involves the fundamental solution of the homogeneous problem, the infinitesimal solutions formula_17 then being given in terms of explicit linear combinations of linearly independent fundamental solutions. In the case of the forced dispersionless spring, the kernel formula_20 is the associated decomposition into fundamental solutions.
formula_21
Examples.
First-order equation.
The complementary solution to our original (inhomogeneous) equation is the general solution of the corresponding homogeneous equation (written below):
formula_22
This homogeneous differential equation can be solved by different methods, for example separation of variables:
formula_23
formula_24
formula_25
formula_26
formula_27
formula_28
The complementary solution to our original equation is therefore:
formula_29
Now we return to solving the non-homogeneous equation:
formula_30
Using the method variation of parameters, the particular solution is formed by multiplying the complementary solution by an unknown function "C"("x"):
formula_31
By substituting the particular solution into the non-homogeneous equation, we can find "C"("x"):
formula_32
formula_33
formula_34
formula_35
We only need a single particular solution, so we arbitrarily select formula_36 for simplicity. Therefore the particular solution is:
formula_37
The final solution of the differential equation is:
formula_38
This recreates the method of integrating factors.
Specific second-order equation.
Let us solve
formula_39
We want to find the general solution to the differential equation, that is, we want to find solutions to the homogeneous differential equation
formula_40
The characteristic equation is:
formula_41
Since formula_42 is a repeated root, we have to introduce a factor of "x" for one solution to ensure linear independence: formula_43 and formula_44. The Wronskian of these two functions is
formula_45
Because the Wronskian is non-zero, the two functions are linearly independent, so this is in fact the general solution for the homogeneous differential equation (and not a mere subset of it).
We seek functions "A"("x") and "B"("x") so "A"("x")"u"1 + "B"("x")"u"2 is a particular solution of the non-homogeneous equation. We need only calculate the integrals
formula_46
Recall that for this example
formula_47
That is,
formula_48
formula_49
where formula_50 and formula_51 are constants of integration.
General second-order equation.
We have a differential equation of the form
formula_52
and we define the linear operator
formula_53
where "D" represents the differential operator. We therefore have to solve the equation formula_54 for formula_55, where formula_56 and formula_57 are known.
We must solve first the corresponding homogeneous equation:
formula_58
by the technique of our choice. Once we've obtained two linearly independent solutions to this homogeneous differential equation (because this ODE is second-order) — call them "u"1 and "u"2 — we can proceed with variation of parameters.
Now, we seek the general solution to the differential equation formula_59 which we assume to be of the form
formula_60
Here, formula_61 and formula_62 are unknown and formula_63 and formula_64 are the solutions to the homogeneous equation. (Observe that if formula_61 and formula_62 are constants, then formula_65.) Since the above is only one equation and we have two unknown functions, it is reasonable to impose a second condition. We choose the following:
formula_66
Now,
formula_67
Differentiating again (omitting intermediary steps)
formula_68
Now we can write the action of "L" upon "u""G" as
formula_69
Since "u"1 and "u"2 are solutions, then
formula_70
We have the system of equations
formula_71
Expanding,
formula_72
So the above system determines precisely the conditions
formula_66
formula_73
We seek "A"("x") and "B"("x") from these conditions, so, given
formula_74
we can solve for ("A"′("x"), "B"′("x"))T, so
formula_75
where "W" denotes the Wronskian of "u"1 and "u"2. (We know that "W" is nonzero, from the assumption that "u"1 and "u"2 are linearly independent.) So,
formula_76
While homogeneous equations are relatively easy to solve, this method allows the calculation of the coefficients of the general solution of the "in"homogeneous equation, and thus the complete general solution of the inhomogeneous equation can be determined.
Note that formula_61 and formula_77 are each determined only up to an arbitrary additive constant (the constant of integration). Adding a constant to formula_61 or formula_62 does not change the value of formula_78 because the extra term is just a linear combination of "u"1 and "u"2, which is a solution of formula_56 by definition.
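The general formulas formula_76 also translate directly into a numerical recipe once the homogeneous solutions are known. The sketch below uses illustrative, assumed coefficients (p = 0, q = 1, so that u1 = cos x, u2 = sin x and W = 1) and a forcing term whose particular solution has no simple closed form, then cross-checks the quadrature against a direct numerical solve.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# Numerical version of the formulas above for u'' + p(x)u' + q(x)u = f(x).
p  = lambda x: 0.0
q  = lambda x: 1.0
f  = lambda x: 1.0 / (1.0 + x**2)
u1, du1 = np.cos, lambda x: -np.sin(x)
u2, du2 = np.sin, np.cos
W  = lambda x: u1(x)*du2(x) - u2(x)*du1(x)      # Wronskian (== 1 here)

def particular(x):
    A = -quad(lambda s: u2(s)*f(s)/W(s), 0.0, x)[0]
    B =  quad(lambda s: u1(s)*f(s)/W(s), 0.0, x)[0]
    return A*u1(x) + B*u2(x)

# Cross-check against a direct numerical solve with matching initial data
# (the quadrature above gives u_p(0) = 0 and u_p'(0) = 0).
sol = solve_ivp(lambda x, y: [y[1], f(x) - p(x)*y[1] - q(x)*y[0]],
                (0.0, 5.0), [0.0, 0.0],
                dense_output=True, rtol=1e-9, atol=1e-9)
for x in (1.0, 3.0, 5.0):
    print(x, particular(x), sol.sol(x)[0])
```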
Notes.
| [
{
"math_id": 0,
"text": "y_1(x), \\ldots, y_n(x)"
},
{
"math_id": 1,
"text": "c_i(x)"
},
{
"math_id": 2,
"text": "c_i'(x) = \\frac{W_i(x)}{W(x)}, \\, \\quad i=1,\\ldots,n"
},
{
"math_id": 3,
"text": "W(x)"
},
{
"math_id": 4,
"text": "W_i(x)"
},
{
"math_id": 5,
"text": "(0, 0, \\ldots, b(x))."
},
{
"math_id": 6,
"text": "\\sum_{i=1}^n y_i(x) \\, \\int \\frac{W_i(x)}{W(x)}\\, \\mathrm dx."
},
{
"math_id": 7,
"text": "x''(t) + x(t) = F(t)."
},
{
"math_id": 8,
"text": "t=s"
},
{
"math_id": 9,
"text": "t=s+ds"
},
{
"math_id": 10,
"text": "F(s)\\,ds"
},
{
"math_id": 11,
"text": "x''(t)+x(t)=0,\\quad x(s)=0,\\ x'(s)=F(s)\\,ds."
},
{
"math_id": 12,
"text": "x(t) = F(s)\\sin(t-s)\\,ds"
},
{
"math_id": 13,
"text": "x(t) = \\int_0^t F(s)\\sin(t-s)\\,ds."
},
{
"math_id": 14,
"text": "x'(t)=\\int_0^t F(s)\\cos(t-s)\\,ds"
},
{
"math_id": 15,
"text": "x''(t) = F(t) - \\int_0^tF(s)\\sin(t-s)\\,ds = F(t)-x(t),"
},
{
"math_id": 16,
"text": "Lx(t)=F(t)"
},
{
"math_id": 17,
"text": "x_s "
},
{
"math_id": 18,
"text": "Lx(t)=0, \\quad x(s)=0,\\ x'(s)=F (s)\\,ds. "
},
{
"math_id": 19,
"text": "x (t)=\\int_0^t x_s (t)\\,ds,"
},
{
"math_id": 20,
"text": "\\sin(t-s)=\\sin t\\cos s - \\sin s\\cos t "
},
{
"math_id": 21,
"text": " y' + p(x)y = q(x) "
},
{
"math_id": 22,
"text": " y' + p(x)y = 0 "
},
{
"math_id": 23,
"text": "\\frac{d}{dx} y + p(x)y = 0 "
},
{
"math_id": 24,
"text": "\\frac{dy}{dx}=-p(x)y "
},
{
"math_id": 25,
"text": "{dy \\over y} = -{p(x)\\,dx},"
},
{
"math_id": 26,
"text": "\\int \\frac{1}{ y} \\, dy = -\\int p(x) \\, dx "
},
{
"math_id": 27,
"text": "\\ln |y| = -\\int p(x) \\, dx + C "
},
{
"math_id": 28,
"text": "y = \\pm e^{-\\int p(x) \\, dx +C } = C_0 e^{-\\int p(x) \\, dx}"
},
{
"math_id": 29,
"text": "y_c = C_0 e^{-\\int p(x) \\, dx}"
},
{
"math_id": 30,
"text": " y' + p(x)y = q(x)"
},
{
"math_id": 31,
"text": "y_p = C(x) e^{-\\int p(x) \\, dx}"
},
{
"math_id": 32,
"text": " C' (x) e^{-\\int p(x) \\, dx} - C(x) p(x) e^{-\\int p(x) \\, dx} + p(x) C(x) e^{-\\int p(x) \\, dx} = q(x)"
},
{
"math_id": 33,
"text": " C' (x) e^{-\\int p(x) \\, dx} = q(x)"
},
{
"math_id": 34,
"text": " C' (x) = q(x) e^{\\int p(x) \\, dx} "
},
{
"math_id": 35,
"text": " C(x) =\\int q(x) e^{\\int p(x) \\, dx} \\, dx + C_1 "
},
{
"math_id": 36,
"text": "C_1=0"
},
{
"math_id": 37,
"text": "y_p =e^{-\\int p(x) \\, dx} \\int q(x) e^{\\int p(x) \\, dx} \\, dx"
},
{
"math_id": 38,
"text": "\\begin{align}\ny &= y_c + y_p\\\\\n&=C_0 e^{-\\int p(x) \\, dx} + e^{-\\int p(x) \\, dx} \\int q(x) e^{\\int p(x) \\, dx} \\, dx\n\\end{align}"
},
{
"math_id": 39,
"text": " y''+4y'+4y = \\cosh x"
},
{
"math_id": 40,
"text": "y''+4y'+4y=0. "
},
{
"math_id": 41,
"text": "\\lambda^2+4\\lambda+4=(\\lambda+2)^2=0 "
},
{
"math_id": 42,
"text": "\\lambda=-2"
},
{
"math_id": 43,
"text": " u_1 = e^{-2x} "
},
{
"math_id": 44,
"text": " u_2 =x e^{-2x}"
},
{
"math_id": 45,
"text": "W=\\begin{vmatrix}\n e^{-2x} & xe^{-2x} \\\\\n-2e^{-2x} & -e^{-2x}(2x-1)\\\\\n\\end{vmatrix} = -e^{-2x}e^{-2x}(2x-1)+2xe^{-2x}e^{-2x} = e^{-4x}. "
},
{
"math_id": 46,
"text": "A(x) = - \\int {1\\over W} u_2(x) b(x)\\,\\mathrm dx,\\; B(x) = \\int {1 \\over W} u_1(x)b(x)\\,\\mathrm dx"
},
{
"math_id": 47,
"text": "b(x) = \\cosh x"
},
{
"math_id": 48,
"text": "A(x) = - \\int {1\\over e^{-4x}} xe^{-2x} \\cosh x \\,\\mathrm dx = - \\int xe^{2x}\\cosh x \\,\\mathrm dx = -{1\\over 18}e^x\\left(9(x-1)+e^{2x}(3x-1)\\right)+C_1"
},
{
"math_id": 49,
"text": "B(x) = \\int {1 \\over e^{-4x}} e^{-2x} \\cosh x \\,\\mathrm dx = \\int e^{2x}\\cosh x\\,\\mathrm dx ={1\\over 6}e^x\\left(3+e^{2x}\\right)+C_2 "
},
{
"math_id": 50,
"text": "C_1"
},
{
"math_id": 51,
"text": "C_2"
},
{
"math_id": 52,
"text": "u''+p(x)u'+q(x)u=f(x)"
},
{
"math_id": 53,
"text": "L=D^2+p(x)D+q(x)"
},
{
"math_id": 54,
"text": "L u(x)=f(x)"
},
{
"math_id": 55,
"text": "u(x)"
},
{
"math_id": 56,
"text": "L"
},
{
"math_id": 57,
"text": "f(x)"
},
{
"math_id": 58,
"text": "u''+p(x)u'+q(x)u=0"
},
{
"math_id": 59,
"text": " u_G(x)"
},
{
"math_id": 60,
"text": "u_G(x)=A(x)u_1(x)+B(x)u_2(x)."
},
{
"math_id": 61,
"text": "A(x)"
},
{
"math_id": 62,
"text": "B(x)"
},
{
"math_id": 63,
"text": "u_1(x)"
},
{
"math_id": 64,
"text": "u_2(x)"
},
{
"math_id": 65,
"text": "Lu_G(x)=0"
},
{
"math_id": 66,
"text": "A'(x)u_1(x)+B'(x)u_2(x)=0."
},
{
"math_id": 67,
"text": "\\begin{align}\nu_G'(x) &= \\left (A(x)u_1(x)+B(x)u_2(x) \\right )' \\\\\n&= \\left (A(x)u_1(x) \\right )'+ \\left (B(x)u_2(x) \\right )'\\\\\n&=A'(x)u_1(x)+A(x)u_1'(x)+B'(x)u_2(x)+B(x)u_2'(x)\\\\\n&=A'(x)u_1(x)+B'(x)u_2(x)+A(x)u_1'(x)+B(x)u_2'(x) \\\\\n&= A(x)u_1'(x)+B(x)u_2'(x)\n\\end{align}"
},
{
"math_id": 68,
"text": "u_G''(x)=A(x)u_1''(x)+B(x)u_2''(x)+A'(x)u_1'(x)+B'(x)u_2'(x)."
},
{
"math_id": 69,
"text": "Lu_G=A(x)Lu_1(x)+B(x)Lu_2(x)+A'(x)u_1'(x)+B'(x)u_2'(x)."
},
{
"math_id": 70,
"text": "Lu_G=A'(x)u_1'(x)+B'(x)u_2'(x)."
},
{
"math_id": 71,
"text": "\\begin{bmatrix}\nu_1(x) & u_2(x) \\\\\nu_1'(x) & u_2'(x) \\end{bmatrix}\n\\begin{bmatrix}\nA'(x) \\\\\nB'(x)\\end{bmatrix} =\n\\begin{bmatrix} 0 \\\\ f \\end{bmatrix}."
},
{
"math_id": 72,
"text": "\\begin{bmatrix}\nA'(x)u_1(x)+B'(x)u_2(x)\\\\\nA'(x)u_1'(x)+B'(x)u_2'(x) \\end{bmatrix}\n= \\begin{bmatrix} 0\\\\f\\end{bmatrix}."
},
{
"math_id": 73,
"text": "A'(x)u_1'(x)+B'(x)u_2'(x)=Lu_G=f."
},
{
"math_id": 74,
"text": "\\begin{bmatrix}\nu_1(x) & u_2(x) \\\\\nu_1'(x) & u_2'(x)\n\\end{bmatrix}\n\\begin{bmatrix}\nA'(x) \\\\\nB'(x)\\end{bmatrix} =\n\\begin{bmatrix}\n0\\\\\nf\\end{bmatrix}"
},
{
"math_id": 75,
"text": "\\begin{bmatrix} A'(x) \\\\ B'(x) \\end{bmatrix} =\n\\begin{bmatrix}\nu_1(x) & u_2(x) \\\\\nu_1'(x) & u_2'(x)\n\\end{bmatrix}^{-1}\n\\begin{bmatrix} 0\\\\ f \\end{bmatrix} =\\frac{1}{W} \\begin{bmatrix}\nu_2'(x) & -u_2(x) \\\\\n-u_1'(x) & u_1(x) \\end{bmatrix}\n\\begin{bmatrix} 0\\\\ f \\end{bmatrix},"
},
{
"math_id": 76,
"text": " \\begin{align}\nA'(x) &= - {1\\over W} u_2(x) f(x), & B'(x) &= {1 \\over W} u_1(x)f(x) \\\\\nA(x) &= - \\int {1\\over W} u_2(x) f(x)\\,\\mathrm dx, & B(x) &= \\int {1 \\over W} u_1(x)f(x)\\,\\mathrm dx\n\\end{align}"
},
{
"math_id": 77,
"text": " B(x)"
},
{
"math_id": 78,
"text": "Lu_G(x)"
}
] | https://en.wikipedia.org/wiki?curid=1036303 |
1036651 | Input–output model | Quantitative economic model
In economics, an input–output model is a quantitative economic model that represents the interdependencies between different sectors of a national economy or different regional economies. Wassily Leontief (1906–1999) is credited with developing this type of analysis and earned the Nobel Prize in Economics for his development of this model.
Origins.
Francois Quesnay had developed a cruder version of this technique called Tableau économique, and Léon Walras's work "Elements of Pure Economics" on general equilibrium theory also was a forerunner and made a generalization of Leontief's seminal concept.
Alexander Bogdanov has been credited with originating the concept in a report delivered to the All Russia Conference on the Scientific Organisation of Labour and Production Processes, in January 1921. This approach was also developed by Lev Kritzman. Thomas Remington has argued that their work provided a link between Quesnay's tableau économique and the subsequent contributions by Vladimir Groman and Vladimir Bazarov to Gosplan's method of material balance planning.
Wassily Leontief's work in the input–output model was influenced by the works of the classical economists Karl Marx and Jean Charles Léonard de Sismondi. Karl Marx's economics provided an early outline involving a set of tables where the economy consisted of two interlinked departments.
Leontief was the first to use a matrix representation of a national (or regional) economy.
Basic derivation.
The model depicts inter-industry relationships within an economy, showing how output from one industrial sector may become an input to another industrial sector. In the inter-industry matrix, column entries typically represent inputs to an industrial sector, while row entries represent outputs from a given sector. This format, therefore, shows how dependent each sector is on every other sector, both as a customer of outputs from other sectors and as a supplier of inputs. Sectors may also depend internally on a portion of their own production as delineated by the entries of the matrix diagonal. Each column of the input–output matrix shows the monetary value of inputs to each sector and each row represents the value of each sector's outputs.
Say that we have an economy with formula_0 sectors. Each sector produces formula_1 units of a single homogeneous good. Assume that the formula_2th sector, in order to produce 1 unit, must use formula_3 units from sector formula_4. Furthermore, assume that each sector sells some of its output to other sectors (intermediate output) and some of its output to consumers (final output, or final demand). Call final demand in the formula_4th sector formula_5. Then we might write
formula_6
or total output equals intermediate output plus final output. If we let formula_7 be the matrix of coefficients formula_3, formula_8 be the vector of total output, and formula_9 be the vector of final demand, then our expression for the economy becomes
x = Ax + y, which after re-writing becomes formula_10. If the matrix formula_11 is invertible then this is a linear system of equations with a unique solution, and so given some final demand vector the required output can be found. Furthermore, if the principal minors of the matrix formula_11 are all positive (known as the Hawkins–Simon condition), the required output vector formula_8 is non-negative.
Example.
Consider an economy with two goods, A and B. The matrix of coefficients and the final demand is given by
formula_12
Intuitively, this corresponds to finding the amount of output each sector should produce given that we want 7 units of good A and 4 units of good B. Then solving the system of linear equations derived above gives us
formula_13
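A minimal numerical sketch of this two-sector example; the numbers are exactly those given above, and the Hawkins–Simon check on the leading principal minors is included for completeness.

```python
import numpy as np

# Two-sector example from the text: coefficient matrix A and final demand y.
A = np.array([[0.5, 0.2],
              [0.4, 0.1]])
y = np.array([7.0, 4.0])

I = np.eye(2)
M = I - A

# Hawkins–Simon condition: all leading principal minors of (I - A) positive,
# which guarantees a non-negative output vector.
print(M[0, 0], np.linalg.det(M))   # 0.5 and 0.37, both positive

# Required gross output x solving (I - A) x = y.
x = np.linalg.solve(M, y)
print(x)                           # approximately [19.19, 12.97]
```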
Further research.
There is extensive literature on these models. The model has been extended to work with non-linear relationships between sectors. There is the Hawkins–Simon condition on producibility. There has been research on disaggregation to clustered inter-industry flows, and on the study of constellations of industries. A great deal of empirical work has been done to identify coefficients, and data has been published for the national economy as well as for regions. The Leontief system can be extended to a model of general equilibrium; it offers a method of decomposing work done at a macro level.
Regional multipliers.
While national input–output tables are commonly created by countries' statistics agencies, officially published regional input–output tables are rare. Therefore, economists often use location quotients to create regional multipliers starting from national data. This technique has been criticized because there are several location quotient regionalization techniques, and none are universally superior across all use-cases.
Introducing transportation.
Transportation is implicit in the notion of inter-industry flows. It is explicitly recognized when transportation is identified as an industry – how much is purchased from transportation in order to produce. But this is not very satisfactory because transportation requirements differ, depending on industry locations and capacity constraints on regional production. Also, the receiver of goods generally pays freight cost, and often transportation data are lost because transportation costs are treated as part of the cost of the goods.
Walter Isard and his student, Leon Moses, were quick to see the spatial economy and transportation implications of input–output, and began work in this area in the 1950s developing a concept of interregional input–output. Take a one-region-versus-the-world case: we wish to know something about inter-regional commodity flows, so we introduce a column into the table headed "exports" and an "imports" row.
A more satisfactory way to proceed would be to tie regions together at the industry level. That is, we could identify both intra-region inter-industry transactions and inter-region inter-industry transactions. The problem here is that the table grows quickly.
Input–output is conceptually simple. Its extension to a model of equilibrium in the national economy has been done successfully using high-quality data. One who wishes to work with input–output systems must deal with industry classification, data estimation, and inverting very large, often ill-conditioned matrices. The quality of the data and matrices of the input-output model can be improved by modelling activities with digital twins and solving the problem of optimizing management decisions. Moreover, changes in relative prices are not readily handled by this modelling approach alone. Input–output accounts are part and parcel to a more flexible form of modelling, computable general equilibrium models.
Two additional difficulties are of interest in transportation work. There is the question of substituting one input for another, and there is the question about the stability of coefficients as production increases or decreases. These are intertwined questions. They have to do with the nature of regional production functions.
Technology Assumptions.
To construct input-output tables from supply and use tables, four principal assumptions can be applied. The choice depends on whether product-by-product or industry-by-industry input-output tables are to be established.
Usefulness.
Because the input–output model is fundamentally linear in nature, it lends itself to rapid computation as well as flexibility in computing the effects of changes in demand. Input–output models for different regions can also be linked together to investigate the effects of inter-regional trade, and additional columns can be added to the table to perform environmentally extended input–output analysis (EEIOA). For example, information on fossil fuel inputs to each sector can be used to investigate flows of embodied carbon within and between different economies.
The structure of the input–output model has been incorporated into national accounting in many developed countries, and as such can be used to calculate important measures such as national GDP. Input–output economics has been used to study regional economies within a nation, and as a tool for national and regional economic planning. A main use of input–output analysis is to measure the economic impacts of events as well as public investments or programs as shown by IMPLAN and Regional Input–Output Modeling System. It is also used to identify economically related industry clusters and so-called "key" or "target" industries (industries that are most likely to enhance the internal coherence of a specified economy). By linking industrial output to satellite accounts articulating energy use, effluent production, space needs, and so on, input–output analysts have extended the approach's application to a wide variety of uses.
Input–output and socialist planning.
The input–output model is one of the major conceptual models for a socialist planned economy. This model involves the direct determination of physical quantities to be produced in each industry, which are used to formulate a consistent economic plan of resource allocation. This method of planning is contrasted with price-directed Lange-model socialism and Soviet-style material balance planning.
In the economy of the Soviet Union, planning was conducted using the method of material balances up until the country's dissolution. The method of material balances was first developed in the 1930s during the Soviet Union's rapid industrialization drive. Input–output planning was never adopted because the material balance system had become entrenched in the Soviet economy, and input–output planning was shunned for ideological reasons. As a result, the benefits of consistent and detailed planning through input–output analysis were never realized in the Soviet-type economies.
Measuring input–output tables.
The mathematics of input–output economics is straightforward, but the data requirements are enormous because the expenditures and revenues of each branch of economic activity have to be represented. As a result, not all countries collect the required data and data quality varies, even though a set of standards for the data's collection has been set out by the United Nations through its System of National Accounts (SNA): the most recent standard is the 2008 SNA. Because the data collection and preparation process for the input–output accounts is necessarily labor and computer intensive, input–output tables are often published long after the year in which the data were collected—typically as much as 5–7 years after. Moreover, the economic "snapshot" that the benchmark version of the tables provides of the economy's cross-section is typically taken only once every few years, at best.
However, many developed countries estimate input–output accounts annually and with much greater recency. This is because while most uses of the input–output analysis focus on the matrix set of inter-industry exchanges, the actual focus of the analysis from the perspective of most national statistical agencies is the benchmarking of gross domestic product. Input–output tables therefore are an instrumental part of national accounts. As suggested above, the core input–output table reports only intermediate goods and services that are exchanged among industries. But an array of row vectors, typically aligned at the bottom of this matrix, record non-industrial inputs by industry like payments for labor; indirect business taxes; dividends, interest, and rents; capital consumption allowances (depreciation); other property-type income (like profits); and purchases from foreign suppliers (imports). At a national level, although excluding the imports, when summed this is called "gross product originating" or "gross domestic product by industry." Another array of column vectors is called "final demand" or "gross product consumed." This displays columns of spending by households, governments, changes in industry stocks, and industries on investment, as well as net exports. (See also Gross domestic product.) In any case, by employing the results of an economic census which asks for the sales, payrolls, and material/equipment/service input of each establishment, statistical agencies back into estimates of industry-level profits and investments using the input–output matrix as a sort of double-accounting framework.
Dynamic Extensions.
The Leontief IO model with capital formation endogenized.
The IO model discussed above is static because it does not describe the evolution of the economy over time: it does not include different time periods.
Dynamic Leontief models are obtained by endogenizing the formation of capital stock over time.
Denote by formula_14 the vector of capital formation, with formula_15 its formula_16th element, and by formula_17 the amount of capital good formula_16 (for example, a blade) used in sector formula_18 (for example, wind power generation), for investment at time formula_19. We then have
formula_20
We assume that it takes one year for investment in plant and equipment to become productive capacity. Denoting by formula_21 the stock of formula_16 at the beginning of time formula_19, and by formula_22 the rate of depreciation, we then have:
K_{ij}(t+1) = K_{ij}(t) + I_{ij}(t) - \delta_{ij}K_{ij}(t). (2)
Here, formula_23 refers to the amount of capital stock that is used up in year formula_19.
Denote by formula_24 the productive capacity in formula_19, and assume the following proportionality between formula_21 and formula_24:
K_{ij}(t) = b_{ij}\bar{x}_j(t). (3)
The matrix formula_25 is called the capital coefficient matrix.
From (2) and (3), we obtain the following expression for formula_14:
formula_26
Assuming that the productive capacity is always fully utilized, we obtain the following counterpart of the static input–output balance, now with capital formation endogenized:
formula_27
where formula_28 stands for the items of final demand other than formula_14.
Rearranged, we have
formula_29
where formula_30.
If formula_31 is non-singular, this model could be solved for formula_32 for given formula_33 and formula_34:
formula_35
This is the Leontief dynamic forward-looking model.
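For illustration only, the sketch below iterates the forward-looking model formula_35 on a small two-sector economy. Every number in it (A, B, the depreciation rate, final demand, and the starting output) is an assumption invented for the example, and B is deliberately chosen to be non-singular; see the caveat below regarding singular capital coefficient matrices and the typically fluctuating paths this model produces.

```python
import numpy as np

# Illustrative two-sector data (assumptions, not from any published table).
A     = np.array([[0.3, 0.2],
                  [0.1, 0.4]])          # intermediate input coefficients
B     = np.array([[0.8, 0.3],
                  [0.2, 0.6]])          # capital coefficient matrix (assumed non-singular)
delta = 0.05                            # depreciation rate
y_o   = np.array([5.0, 3.0])            # final demand other than capital formation

I     = np.eye(2)
A_bar = A + delta * B
B_inv = np.linalg.inv(B)
G     = I + B_inv @ (I - A_bar)         # transition matrix of the forward-looking model

# x(t+1) = G x(t) - B^{-1} y_o(t), iterated from an assumed starting output vector.
x = np.array([20.0, 15.0])
for t in range(5):
    x = G @ x - B_inv @ y_o
    print(t + 1, x)
```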
A caveat to this model is that formula_31 will, in general, be singular, and the above formulation cannot be obtained.
This is because some products, such as energy items, are not used as capital goods, and the corresponding rows of the matrix formula_31 will be zeros.
This fact has prompted some researchers to consolidate the sectors until the non-singularity of formula_31 is achieved, at the cost of sector resolution. Apart from this feature, many studies have found that the outcomes obtained for this forward-looking model invariably lead to unrealistic and widely fluctuating results that lack economic interpretation. This has resulted in a gradual decline in interest in the model after the 1970s, although there is a recent increase in interest within the context of disaster analysis.
Input–output analysis versus consistency analysis.
Despite the clear ability of the input–output model to depict and analyze the dependence of one industry or sector on another, Leontief and others never managed to introduce the full spectrum of dependency relations in a market economy. In 2003, Mohammad Gani, a pupil of Leontief, introduced consistency analysis in his book "Foundations of Economic Science", which formally looks exactly like the input–output table but explores the dependency relations in terms of payments and intermediation relations. Consistency analysis explores the consistency of plans of buyers and sellers by decomposing the input–output table into four matrices, each for a different kind of means of payment. It integrates micro and macroeconomics into one model and deals with money in a value-free manner. It deals with the flow of funds via the movement of goods.
Notes.
See also.
References.
| [
{
"math_id": 0,
"text": " n "
},
{
"math_id": 1,
"text": " x_i "
},
{
"math_id": 2,
"text": " j "
},
{
"math_id": 3,
"text": " a_{ij} "
},
{
"math_id": 4,
"text": " i "
},
{
"math_id": 5,
"text": " y_i "
},
{
"math_id": 6,
"text": "\nx_i = a_{i1}x_1 + a_{i2}x_2 + \\cdots + a_{in}x_n + y_i,\n"
},
{
"math_id": 7,
"text": " A "
},
{
"math_id": 8,
"text": "\\mathbf x "
},
{
"math_id": 9,
"text": "\\mathbf y "
},
{
"math_id": 10,
"text": " \\left(I - A\\right)\\mathbf{x} = \\mathbf{y} "
},
{
"math_id": 11,
"text": " I - A "
},
{
"math_id": 12,
"text": "\nA = \\begin{bmatrix} 0.5 & 0.2 \\\\ 0.4 & 0.1 \\end{bmatrix} \\text{ and } \\mathbf{y} = \\begin{bmatrix} 7 \\\\ 4 \\end{bmatrix}.\n"
},
{
"math_id": 13,
"text": "\n\\mathbf{x} = \\left(I - A\\right)^{-1} \\mathbf{y} = \\begin{bmatrix} 19.19 \\\\ 12.97 \\end{bmatrix}.\n"
},
{
"math_id": 14,
"text": "y^I"
},
{
"math_id": 15,
"text": "y^I_i"
},
{
"math_id": 16,
"text": "i"
},
{
"math_id": 17,
"text": "I_{ij}(t)"
},
{
"math_id": 18,
"text": "j"
},
{
"math_id": 19,
"text": "t"
},
{
"math_id": 20,
"text": "\ny^I_i(t) = \\sum_j I_{ij}(t)\n"
},
{
"math_id": 21,
"text": "K_{ij}(t)"
},
{
"math_id": 22,
"text": "\\delta \\in (0,1]"
},
{
"math_id": 23,
"text": "\\delta_{ij}K_{ij}(t)"
},
{
"math_id": 24,
"text": "\\bar{x}_j(t)"
},
{
"math_id": 25,
"text": "B=[b_{ij}]"
},
{
"math_id": 26,
"text": "\ny^I(t) = B\\bar{x}(t+1) + (\\delta - I)B\\bar{x}(t)\n"
},
{
"math_id": 27,
"text": "\nx(t) = Ax(t)+Bx(t+1)+ (\\delta-I)Bx(t) + y^o(t), \n"
},
{
"math_id": 28,
"text": "y^o"
},
{
"math_id": 29,
"text": "\n\\begin{align}\nBx(t+1) &= (I-A + (I-\\delta)B)x(t) - y^o(t)\\\\\n&= (I - \\bar{A} + B)x(t) - y^o(t)\n\\end{align}\n"
},
{
"math_id": 30,
"text": "\\bar{A}=A + \\delta B"
},
{
"math_id": 31,
"text": "B"
},
{
"math_id": 32,
"text": "x(t+1)"
},
{
"math_id": 33,
"text": "x(t)"
},
{
"math_id": 34,
"text": "y^o(t)"
},
{
"math_id": 35,
"text": "\nx(t+1) = [I + B^{-1}(I- \\bar{A})]x(t) - B^{-1}y^o(t)\n"
}
] | https://en.wikipedia.org/wiki?curid=1036651 |
10367580 | Manual handling of loads | Use of the human body to lift, lower, carry or transfer loads
Manual handling of loads (MHL) or manual material handling (MMH) involves the use of the human body to lift, lower, carry or transfer loads. The average person is exposed to manual lifting of loads in the workplace, in recreational settings, and even in the home. To reduce the risk of injury, it helps to understand general body mechanics.
Hazards and injuries.
Manual handling of materials can be found in any workplace, from offices to heavy industrial and manufacturing facilities. Oftentimes, manual material handling entails tasks such as lifting, climbing, pushing, pulling, and pivoting, all of which pose a risk of injury to the back and other parts of the musculoskeletal system and can lead to musculoskeletal disorders (MSDs). Musculoskeletal disorders often involve strains and sprains to the lower back, shoulders, and upper limbs. According to a U.S. Department of Labor study published in 1990, back injuries accounted for approximately 20% of all injuries in the workplace and for almost 25% of total workers' compensation payouts.
To better understand the potential injuries of manual handling of materials, we must first understand the underlying conditions which can cause the injuries. When an injury occurs from manual handling of materials, it often is a result of one of the following underlying condition(s).
Although musculoskeletal disorders can develop over time when manually handling materials, they can also occur after only one activity. Some of the common injuries associated with manual handling of loads include but are not limited to:
In addition to the injuries listed above, the worker can be exposed to soreness, bruises, cuts, punctures, and crushing injuries.
Commonly affected industries and workforces.
Although employees can be exposed to manual handling of materials in any industry or workplace, some workplaces are more susceptible to the hazards of manual material handling. These industries include but are not limited to:
Evaluation or assessment tools.
There are multiple tools which can be used to assess the manual handling of material. Some of the most common methods are discussed below in no particular order.
NIOSH lifting equation.
The U.S. National Institute for Occupational Safety and Health (NIOSH) is a division of the Centers for Disease Control and Prevention (CDC) under the United States Department of Health and Human Services. NIOSH first published the NIOSH lifting equation in 1991, and it went into effect in July 1994. NIOSH revised the lifting equation manual in 2021; the revision included updated graphics and tables and corrected identified errors.
The NIOSH lifting equation is a tool (now also an application) that can be used by health and safety professionals to assess employees who are exposed to manual lifting or handling of materials. The NIOSH lifting equation is a mathematical calculation which calculates the Recommended Weight Limit (RWL) using a series of tables, variables, and constants. The equation is shown below:
formula_0
where LC is the load constant (23 kg, or 51 lb), HM is the horizontal multiplier, VM is the vertical multiplier, DM is the distance multiplier, AM is the asymmetric multiplier, FM is the frequency multiplier, and CM is the coupling multiplier.
Using the RWL, you can also find the lifting index (LI) using the following equation:
formula_1
The lifting index can be used to identify the stress to which each lift exposes the employee. The general understanding is that as the LI increases, so does the risk to the worker; as the LI decreases, the worker is less likely to develop back-related injuries. Ideally, any lifting task should have a lifting index of 1.0 or less.
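As a quick illustration of how the RWL and LI fit together, the following sketch computes both. The load constant of 23 kg (51 lb) is the standard NIOSH value, but the six multiplier values shown are illustrative assumptions rather than values taken from the NIOSH tables.

```python
# Minimal sketch of the revised NIOSH lifting equation.
LC = 23.0  # load constant in kg (51 lb)

def recommended_weight_limit(hm, vm, dm, am, fm, cm, lc=LC):
    """RWL = LC * HM * VM * DM * AM * FM * CM."""
    return lc * hm * vm * dm * am * fm * cm

def lifting_index(load_weight, rwl):
    """LI = load weight / RWL; values above 1.0 indicate increased risk."""
    return load_weight / rwl

# Example: assumed (illustrative) multipliers for a single lift of a 15 kg load.
rwl = recommended_weight_limit(hm=0.83, vm=0.93, dm=0.90, am=0.90, fm=0.88, cm=0.95)
print(round(rwl, 1))                       # about 12.0 kg
print(round(lifting_index(15.0, rwl), 2))  # about 1.25 -> above the ideal 1.0
```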
The NIOSH lifting equation does have some limitations.
The NIOSH Revised Lifting Equation Manual can be found on the CDC's website.
Liberty Mutual tables.
Liberty Mutual Insurance has studied tasks related to manual materials handling, resulting in a comprehensive set of tables which predicts the percentages of both the male and female population that can move the weight of the object. The Liberty Mutual Risk Control Team recommends that tasks should be designed so that 75% or more of the female work force population can safely complete the task.
Key components that first must be collected before using the Liberty Mutual tables are:
The complete Liberty Mutual tables and their guidelines can be found on the company's website. Liberty Mutual Insurance has also provided calculators that compute the percentage for both the male and female populations. If the percentage is less than 75% for the female population, the lifting plan should be redesigned so that 75% or more of the female population can conduct the materials handling.
Rapid Entire Body Assessment.
The Rapid Entire Body Assessment (REBA) is a tool developed by Dr. Sue Hignett and Dr. Lynn McAtamney which was published in July 1998 in the "Applied Ergonomics" journal. It was designed as a tool that health and safety professionals could use in the field to assess posture in the workplace. Rather than relying heavily on the weight of the object being lifted, Dr. Hignett and Dr. McAtamney developed this tool based on the posture of the employee lifting the weight. Using a series of mathematical calculations and a series of tables, each activity is assigned a REBA score. To calculate the REBA score, the tool separates the body parts into two groups, group A and group B. The body parts assigned to group A are:
The body parts assigned to group B are:
Using the score of each body part posture in group A, locate the score in table A to assign a group A posture score. This score is then added to the load/force score; this sum is score A.
Using the score of each body part posture in group B, locate the score in table B to assign a group B posture score. This score is then added to the coupling score; this sum is score B.
Using score A and score B, score C is assigned using table C. Score C is then added to the activity score; this sum is the REBA score. The REBA score corresponds to an action level between 0 and 4: action level 0 indicates a negligible risk level, while action level 4 indicates a very high risk level. The action level also indicates how quickly corrective action needs to be taken.
Rapid Upper Limb Assessment.
The Rapid Upper Limb Assessment (RULA) is a tool developed by Dr. Lynn McAtamney and Professor E. Nigel Corlett which was published in 1993 in the "Applied Ergonomics" journal. Very similarly to the REBA tool, this tool was designed so that health and safety professionals could assess lifting in the field. The tool is mainly focused on posture. Using a series of mathematical calculations and a series of tables, each activity is assigned a RULA score. To calculate the RULA score, the tool separates the body parts into two groups, group A and group B. The body parts assigned to group A are:
The body parts assigned to group B are:
Using the score of each body part posture in group A, locate the score in table A to assign a group A posture score. This score is then added to the muscle use score and the force/load score, which gives the wrist and arm score.
Using the score of each body part posture in group B, locate the score in table B to assign a group B posture score. This score is then added to the muscle use score and the force/load score, which gives the neck, trunk, and leg score.
Using table C, locate the wrist/arm score on the Y-axis and the neck, trunk, and leg score on the X-axis to determine the RULA score. The RULA score is a numerical value between 1 and 7. If the RULA score is 1 or 2, the posture is acceptable, but if the RULA score is 7, changes are needed.
Equipment to reduce risk of injury.
There are devices which can be used to help mitigate some of the risks of manual material handling.
Exoskeletons.
Exoskeletons are devices which can be used to supplement the human body when completing tasks that require repetitive motions or strength. Exoskeletons can be powered or passive. Powered exoskeletons use a battery to supplement the strength needed for lifting materials. Passive exoskeletons are non-powered devices that are focused on a specific muscle group.
Overhead cranes.
Overhead cranes or workstation cranes can drastically decrease the exposure to workers for lifting and moving heavy objects. These devices can assist the worker by mechanically lifting, lowering, turning, or moving a heavy object to a different location.
Handles and grip aids.
When lifting heavy materials, the grip matters. Using handles can drastically improve the posture of the lift and can also reduce pressure points while lifting.
Forklifts.
Forklifts are powered vehicles (gas, diesel, electric) that are often used in facilities to move heavy items. A forklift can pick up significantly heavier objects than a person can and move them to a new location. Since the forks can be adjusted, the height of the lift can also be adjusted so that employees can lift in a more neutral position. The use of forklifts can eliminate or reduce some exposures of manual material handling.
Pallet jacks.
Pallet jacks are a great tool that can assist in moving heavy objects. Pallet jacks can be both powered and manual, so it is important to understand the weight being moved. If the weight exceeds the amount a person can easily push/pull, alternative means should be considered such as a forklift or a powered pallet jack.
Adjustable workstations.
The workstation height is critical to posture and preferred ergonomic principles. If the workstation is properly adjusted, it can prevent bending over and exposing the worker to awkward posture. An adjustable workstation also allows employees to set the working height to match their own height so that they can perform their work using good ergonomic principles.
Ways to reduce risk of injury.
The safest approach to manual materials handling is to eliminate it entirely, following the hierarchy of controls. There will be times when elimination is not an option. Below are some ways to reduce the risk of injury if manual materials handling is present.
Stretch and flex programs.
Stretch and flex programs are designed to help reduce workplace injuries. Using a stretch and flex program allows the worker to properly warm up before exerting significant effort during the workday. Proper stretching and warming up increase the worker's heart rate, which in turn increases the flow of blood, nutrients, and oxygen to muscle groups. When the body is properly warmed up, muscle injuries are less likely to occur. Physical and occupational therapists can be contracted to help develop a stretch and flex program that is best suited for the work taking place.
Rest and recovery.
Just like any muscle use, it is critical to give the muscles proper rest to allow them to recover. One key mitigation effort is proper rest. Getting a good night's sleep is critical to help employees reduce workplace injuries from manual material handling. Throughout the day, employees should incorporate breaks to allow their muscles to rest and so that they can rehydrate and prevent fatigue.
Another principle that employers can use is job rotation. Rotated tasks expose the workers to fatigue in different muscle groups instead of repetitively working the same muscle group. Allowing employees to rotate jobs allows for longer rest and recovery and can potentially lessen the exposure to manual handling of materials.
Safe manual materials handling techniques.
Using proper ergonomic techniques while manually handling materials will help reduce the likelihood of injury. Below are a few good practices to follow.
Climbing.
When climbing with a load, safe material handling includes maintaining contact with the ladder or stairs at three points (two hands and a foot or both feet and a hand). Bulky loads would require a second person or a mechanical device to assist.
Pushing and pulling.
Manual material handling may require pushing or pulling. Pushing is generally easier on the back than pulling. It is important to use both the arms and legs to provide the leverage to start the push.
Pivoting.
When moving containers, handlers are safer when pivoting their shoulders, hips and feet with the load in front at all times rather than twisting only the torso.
Two people lifting.
When handling heavy materials that exceed an individual's lifting capacity, experts suggest working with a partner to minimize the risk of injury. Two people lifting or carrying the load not only distributes the weight evenly but also utilizes their natural lifting capacity, reducing the chances of strains or sprains. Proper communication between partners is necessary for coordination during the lift, ensuring the safety of both the participants, the goods being carried, and the surrounding environment.
Legislation.
In the UK, all organisations have a duty to protect employees from injury from manual handling activities and this duty is outlined in the Manual Handling Operations Regulations 1992 (MHOR) as amended.
The regulations define manual handling as "[...] any transporting or supporting of a load (including the lifting, putting down, pushing, pulling, carrying or moving thereof) by hand or bodily force". The load can be an object, person or animal.
In the United States, Occupational Safety and Health Administration (OSHA) is the governing body which regulates workplace safety. OSHA does not have a standard which sets a maximum allowable weight that employers must follow. However, manual materials handling may fall under Section 5(a) which is often referred to as the General Duty Clause. The OSHA general duty clause states “Each employer—shall furnish to each of his employees employment and a place of employment which are free from recognized hazards that are causing or are likely to cause death or serious physical harm to his employees…"
References.
| [
{
"math_id": 0,
"text": "RWL= LC\\times HM\\times VM\\times DM\\times AM\\times FM\\times CM "
},
{
"math_id": 1,
"text": "LI=(load\\,weight)/RWL "
}
] | https://en.wikipedia.org/wiki?curid=10367580 |
10367686 | ISO 217 | International standard for sizes of raw, untrimmed paper
The ISO 217:2013 standard defines the RA and SRA paper formats.
Overview.
These paper series consist of untrimmed raw paper. RA stands for "raw format A" and SRA stands for "supplementary raw format A". The RA and SRA formats are slightly larger than the corresponding A series formats. This allows bleed (ink to the edge) on printed material that will later be cut down to size. After printing and binding, these sheets are cut to match the A format.
Tolerances.
Paper in the RA and SRA series format is intended to have a formula_0 aspect ratio but the dimensions of the start format have been rounded to whole centimetres.
For example, the RA0 format has been rounded to 860 mm × 1220 mm from the theoretical dimensions formula_1.
The resulting real ratios are formula_2 for RA0 and RA2, formula_3 for RA1, formula_4 for SRA0 and SRA2, and formula_5 for SRA1.
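A short sketch reproducing these ratios by successive halving; the SRA0 dimensions (900 mm × 1280 mm) are taken from the standard series and are not stated elsewhere in this article.

```python
# Generate the first RA and SRA formats by successive halving of the rounded
# base sheets (RA0 = 860 mm x 1220 mm, SRA0 = 900 mm x 1280 mm) and report the
# resulting aspect ratios, which only approximate 2**0.5 (about 1.4142).
def halve(width, length):
    """Next format: the long edge is halved and becomes the new short edge."""
    return length // 2, width

for name, size in (("RA", (860, 1220)), ("SRA", (900, 1280))):
    w, l = size
    for n in range(3):
        print(f"{name}{n}: {w} mm x {l} mm, ratio 1:{l / w:.4f}")
        w, l = halve(w, l)
```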
The sizes of the RA series are also slightly larger than corresponding inch-based US sizes specified in ANSI/ASME Y14.1; e.g. RA4 measures 215 mm × 305 mm (about 8.5 in × 12 in), while ANSI A (alias "US Letter") is defined as 8.5 in × 11 in (216 mm × 279 mm). | [
{
"math_id": 0,
"text": "1:\\sqrt{2}"
},
{
"math_id": 1,
"text": "\\sqrt{1.05}\\cdot2^{-\\frac14} \\mathrm{m} \\times \\sqrt{1.05}\\cdot2^{\\frac14} \\mathrm{m} \\approx 861.7 \\mathrm{mm} \\times 1218.6 \\mathrm{mm}"
},
{
"math_id": 2,
"text": "43:61 \\approx 1:1.4186"
},
{
"math_id": 3,
"text": "61:86 \\approx 1:1.4098"
},
{
"math_id": 4,
"text": "45:64 \\approx 1:1.4222"
},
{
"math_id": 5,
"text": "32:45 = 1:1.40625"
}
] | https://en.wikipedia.org/wiki?curid=10367686 |
1036865 | Impact factor | Measure of relative importance of a journal
The impact factor (IF) or journal impact factor (JIF) of an academic journal is a scientometric index calculated by Clarivate that reflects the yearly mean number of citations of articles published in the last two years in a given journal, as indexed by Clarivate's Web of Science.
As a journal-level metric, it is frequently used as a proxy for the relative importance of a journal within its field; journals with higher impact factor values are given the status of being more important, or carry more prestige in their respective fields, than those with lower values.
While frequently used by universities and funding bodies to decide on promotion and research proposals, it has been criticised for distorting good scientific practices.
History.
The impact factor was devised by Eugene Garfield, the founder of the Institute for Scientific Information (ISI) in Philadelphia. Impact factors began to be calculated yearly starting from 1975 for journals listed in the "Journal Citation Reports" (JCR). ISI was acquired by Thomson Scientific & Healthcare in 1992, and became known as Thomson ISI. In 2018, Thomson-Reuters spun off and sold ISI to Onex Corporation and Baring Private Equity Asia. They founded a new corporation, Clarivate, which is now the publisher of the JCR.
Calculation.
In any given year, the two-year journal impact factor is the ratio between the number of citations received in that year for publications in that journal that were published in the two preceding years and the total number of "citable items" published in that journal during the two preceding years:
formula_0
For example, "Nature" had an impact factor of 41.577 in 2017:
formula_1
This means that, on average, its papers published in 2015 and 2016 received roughly 42 citations each in 2017. 2017 impact factors are reported in 2018; they cannot be calculated until all of the 2017 publications have been processed by the indexing agency.
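A minimal sketch of the calculation; the journal and its citation counts below are hypothetical, chosen only to illustrate the arithmetic rather than taken from any Journal Citation Reports data.

```python
def impact_factor(citations, citable_items):
    """Two-year JIF: citations received this year to items from the previous
    two years, divided by the number of citable items in those two years."""
    return citations / citable_items

# Hypothetical journal: 600 citations in 2017 to papers from 2015-2016,
# which comprised 150 + 140 citable items.
print(round(impact_factor(600, 150 + 140), 3))   # -> 2.069
```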
The value of impact factor depends on how to define "citations" and "publications"; the latter are often referred to as "citable items". In current practice, both "citations" and "publications" are defined exclusively by ISI as follows. "Publications" are items that are classed as "article", "review" or "proceedings paper" in the Web of Science (WoS) database; other items like editorials, corrections, notes, retractions and discussions are excluded. WoS is accessible to all registered users, who can independently verify the number of citable items for a given journal. In contrast, the number of citations is extracted not from the WoS database, but from a dedicated JCR database, which is not accessible to general readers. Hence, the commonly used "JCR Impact Factor" is a proprietary value, which is defined and calculated by ISI and can not be verified by external users.
New journals, which are indexed from their first published issue, will receive an impact factor after two years of indexing; in this case, the citations to the year prior to volume 1, and the number of articles published in the year prior to volume 1, are known zero values. Journals that are indexed starting with a volume other than the first volume will not get an impact factor until they have been indexed for three years. Occasionally, "Journal Citation Reports" assigns an impact factor to new journals with less than two years of indexing, based on partial citation data. The calculation always uses two complete and known years of item counts, but for new titles one of the known counts is zero. Annuals and other irregular publications sometimes publish no items in a particular year, affecting the count. The impact factor relates to a specific time period; it is possible to calculate it for any desired period. For example, the JCR also includes a five-year impact factor, which is calculated by dividing the number of citations to the journal in a given year by the number of articles published in that journal in the previous five years.
Use.
While originally invented as a tool to help university librarians to decide which journals to purchase, the impact factor soon became used as a measure for judging academic success. This use of impact factors was summarised by Hoeffel in 1998:
Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. Most of these journals existed long before the impact factor was devised. The use of impact factor as a measure of quality is widespread because it fits well with the opinion we have in each field of the best journals in our specialty...In conclusion, prestigious journals publish papers of high level. Therefore, their impact factor is high, and not the contrary.
As impact factors are a journal-level metric, rather than an article- or individual-level metric, this use is controversial. Eugene Garfield, the inventor of the JIF agreed with Hoeffel, but warned about the "misuse in evaluating individuals" because there is "a wide variation [of citations] from article to article within a single journal". Despite this warning, the use of the JIF has evolved, playing a key role in the process of assessing individual researchers, their job applications and their funding proposals. In 2005, "The Journal of Cell Biology" noted that:
Impact factor data ... have a strong influence on the scientific community, affecting decisions on where to publish, whom to promote or hire, the success of grant applications, and even salary bonuses.
More targeted research has begun to provide firm evidence of how deeply the impact factor is embedded within formal and informal research assessment processes. A review in 2019 studied how often the JIF featured in documents related to the review, promotion, and tenure of scientists in US and Canadian universities. It concluded that 40% of universities focused on academic research specifically mentioned the JIF as part of such review, promotion, and tenure processes. And a 2017 study of how researchers in the life sciences behave described "everyday decision-making practices as highly governed by pressures to publish in high-impact journals". The deeply embedded nature of such indicators affects not only research assessment but also the more fundamental question of what research is actually undertaken: "Given the current ways of evaluation and valuing research, risky, lengthy, and unorthodox project rarely take center stage."
Criticism.
Numerous critiques have been made regarding the use of impact factors, both in terms of its statistical validity and also of its implications for how science is carried out and assessed. A 2007 study noted that the most fundamental flaw is that impact factors present the mean of data that are not normally distributed, and suggested that it would be more appropriate to present the median of these data. There is also a more general debate on the validity of the impact factor as a measure of journal importance and the effect of policies that editors may adopt to boost their impact factor (perhaps to the detriment of readers and writers). Other criticism focuses on the effect of the impact factor on behavior of scholars, editors and other stakeholders. Further criticisms argue that emphasis on impact factor results from the negative influence of neoliberal politics on academia. Some of these arguments demand not just replacement of the impact factor with more sophisticated metrics but also discussion on the social value of research assessment and the growing precariousness of scientific careers in higher education.
Inapplicability of impact factor to individuals and between-discipline differences.
It has been stated that impact factors in particular and citation analysis in general are affected by field-dependent factors which invalidate comparisons not only across disciplines but even within different fields of research of one discipline. The percentage of total citations occurring in the first two years after publication also varies highly among disciplines from 1–3% in the mathematical and physical sciences to 5–8% in the biological sciences. Thus impact factors cannot be used to compare journals across disciplines.
Impact factors are sometimes used to evaluate not only the journals but the papers therein, thereby devaluing papers in certain subjects. In 2004, the Higher Education Funding Council for England was urged by the House of Commons Science and Technology Select Committee to remind Research Assessment Exercise panels that they are obliged to assess the quality of the content of individual articles, not the reputation of the journal in which they are published. Other studies have repeatedly stated that impact factor is a metric for journals and should not be used to assess individual researchers or institutions.
Questionable editorial policies that affect the impact factor.
Because impact factor is commonly accepted as a proxy for research quality, some journals adopt editorial policies and practices, some acceptable and some of dubious purpose, to increase its impact factor. For example, journals may publish a larger percentage of review articles which generally are cited more than research reports. Research undertaken in 2020 on dentistry journals concluded that the publication of "systematic reviews have significant effect on the Journal Impact Factor ... while papers publishing clinical trials bear no influence on this factor. Greater yearly average of published papers ... means a higher impact factor."
Journals may also attempt to limit the number of "citable items"—i.e., the denominator of the impact factor equation—either by declining to publish articles that are unlikely to be cited (such as case reports in medical journals) or by altering articles (e.g., by not allowing an abstract or bibliography in hopes that Journal Citation Reports will not deem it a "citable item"). As a result of negotiations over whether items are "citable", impact factor variations of more than 300% have been observed. Items considered to be uncitable—and thus are not incorporated in impact factor calculations—can, if cited, still enter into the numerator part of the equation despite the ease with which such citations could be excluded. This effect is hard to evaluate, for the distinction between editorial comment and short original articles is not always obvious. For example, letters to the editor may be part of either class.
Another less insidious tactic journals employ is to publish a large portion of its papers, or at least the papers expected to be highly cited, early in the calendar year. This gives those papers more time to gather citations. Several methods, not necessarily with nefarious intent, exist for a journal to cite articles in the same journal which will increase the journal's impact factor.
Beyond editorial policies that may skew the impact factor, journals can take overt steps to game the system. For example, in 2007, the specialist journal "Folia Phoniatrica et Logopaedica", with an impact factor of 0.66, published an editorial that cited all its articles from 2005 to 2006 in a protest against the "absurd scientific situation in some countries" related to use of the impact factor. The large number of citations meant that the impact factor for that journal increased to 1.44. As a result of the increase, the journal was not included in the 2008 and 2009 "Journal Citation Reports".
Coercive citation is a practice in which an editor forces an author to add extraneous citations to an article before the journal will agree to publish it, in order to inflate the journal's impact factor. A survey published in 2012 indicates that coercive citation has been experienced by one in five researchers working in economics, sociology, psychology, and multiple business disciplines, and it is more common in business and in journals with a lower impact factor. Editors of leading business journals banded together to disavow the practice. However, cases of coercive citation have occasionally been reported for other disciplines.
Assumed correlation between impact factor and quality.
The journal impact factor was originally designed by Eugene Garfield as a metric to help librarians make decisions about which journals were worth indexing, as the JIF aggregates the number of citations to articles published in each journal. Since then, the JIF has become associated as a mark of journal "quality", and gained widespread use for evaluation of research and researchers instead, even at the institutional level. It thus has significant impact on steering research practices and behaviours.
By 2010, national and international research funding institutions were already starting to point out that numerical indicators such as the JIF should not be considered as a measure of quality. In fact, research was indicating that the JIF is a highly manipulated metric, and the justification for its continued widespread use beyond its original narrow purpose seems due to its simplicity (easily calculable and comparable number), rather than any actual relationship to research quality.
Empirical evidence shows that the misuse of the JIF—and journal ranking metrics in general—has a number of negative consequences for the scholarly communication system. These include gaps between the reach of a journal and the quality of its individual papers and insufficient coverage of social sciences and humanities as well as research outputs from across Latin America, Africa, and South-East Asia. Additional drawbacks include the marginalization of research in vernacular languages and on locally relevant topics and inducement to unethical authorship and citation practices. More generally, the impact factor fosters a reputation economy, where scientific success is based on publishing in prestigious journals ahead of actual research qualities such as rigorous methods, replicability and social impact. Using journal prestige and the JIF to cultivate a competition regime in academia has been shown to have deleterious effects on research quality.
A number of regional and international initiatives are now providing and suggesting alternative research assessment systems, including key documents such as the Leiden Manifesto and the San Francisco Declaration on Research Assessment (DORA). Plan S calls for a broader adoption and implementation of such initiatives alongside fundamental changes in the scholarly communication system. As appropriate measures of quality for authors and research, concepts of research excellence should be remodelled around transparent workflows and accessible research results.
JIFs are still regularly used to evaluate research in many countries, which is a problem since a number of issues remain around the opacity of the metric and the fact that it is often negotiated by publishers.
Negotiated values.
Results of an impact factor can change dramatically depending on which items are considered as "citable" and therefore included in the denominator. One notorious example of this occurred in 1988 when it was decided that meeting abstracts published in "FASEB Journal" would no longer be included in the denominator. The journal's impact factor jumped from 0.24 in 1988 to 18.3 in 1989. Publishers routinely discuss with Clarivate how to improve the "accuracy" of their journals' impact factor and therefore get higher scores.
Such discussions routinely produce "negotiated values" that result in dramatic changes in the observed scores for dozens of journals, sometimes after events unrelated to editorial quality, such as a journal's purchase by one of the larger publishers.
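To see how strongly the choice of denominator drives the score, here is a small illustrative calculation in Python; the counts are invented for illustration and are not the actual "FASEB Journal" figures:
# A hypothetical journal receives 10,000 citations in the JCR year
# to items it published in the two preceding years.
citations = 10_000
articles = 500              # research articles counted as "citable"
meeting_abstracts = 40_000  # abstracts that may or may not be counted
jif_with_abstracts = citations / (articles + meeting_abstracts)
jif_articles_only = citations / articles
print(round(jif_with_abstracts, 2))  # 0.25: abstracts swell the denominator
print(round(jif_articles_only, 2))   # 20.0: same citations, reclassified denominator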
Distribution skewness.
Because citation counts have highly skewed distributions, the mean number of citations is potentially misleading if used to gauge the typical impact of articles in the journal rather than the overall impact of the journal itself. For example, about 90% of "Nature"'s 2004 impact factor was based on only a quarter of its publications. Thus the actual number of citations for a single article in the journal is in most cases much lower than the mean number of citations across articles. Furthermore, the strength of the relationship between impact factors of journals and the citation rates of the papers therein has been steadily decreasing since articles began to be available digitally.
The effect of outliers can be seen in the case of the article "A short history of SHELX", which included this sentence: "This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination". This article received more than 6,600 citations. As a consequence, the impact factor of the journal "Acta Crystallographica Section A" rose from 2.051 in 2008 to 49.926 in 2009, more than "Nature" (at 31.434) and "Science" (at 28.103). The second-most cited article in "Acta Crystallographica Section A" in 2008 had only 28 citations.
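The role of such outliers in a mean-based metric can be illustrated with a short Python sketch; the citation counts below are invented and are not the actual "Acta Crystallographica Section A" data:
import statistics
# 200 hypothetical articles: most receive a handful of citations,
# one receives thousands (an outlier like the SHELX paper).
citations = [2] * 150 + [10] * 45 + [30] * 4 + [6600]
print(round(statistics.mean(citations), 2))   # 37.35: the JIF-style average, dominated by the outlier
print(statistics.median(citations))           # 2.0: what a typical article actually receives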
Critics of the JIF state that the use of the arithmetic mean in its calculation is problematic because the citation distribution is skewed; citation distribution metrics have therefore been proposed as an alternative to impact factors.
However, there have also been pleas to take a more nuanced approach to judging the distribution skewness of the impact factor. Ludo Waltman and Vincent Antonio Traag, in their 2021 paper, ran numerous simulations and concluded that "statistical objections against the use of the IF at the level of individual articles are not convincing", and that "the IF may be a more accurate indicator of the value of an article than the number of citations of the article".
Reproducibility.
While the underlying mathematical model is publicly known, the dataset which is used to calculate the JIF is not publicly available. This prompted criticism: "Just as scientists would not accept the findings in a scientific paper without seeing the primary data, so should they not rely on Thomson Scientific's impact factor, which is based on hidden data". However, a 2019 article demonstrated that "with access to the data and careful cleaning, the JIF can be reproduced", although this required much labour to achieve. A 2020 research paper went further. It indicated that by querying open access or partly open-access databases, like Google Scholar, ResearchGate, and Scopus, it is possible to calculate approximate impact factors without the need to purchase Web of Science / JCR.
Broader negative impact on science.
Just as the impact factor has attracted criticism for various immediate problems associated with its application, so has there also been criticism that its application undermines the broader process of science. Research has indicated that bibliometric figures, particularly the impact factor, decrease the quality of peer review an article receives, cause a reluctance to share data, decrease the quality of articles, and reduce the scope of publishable research. "For many researchers the only research questions and projects that appear viable are those that can meet the demand of scoring well in terms of metric performance indicators – and chiefly the journal impact factor." Furthermore, the process of publication and science is slowed down, since authors automatically try to publish in the journals with the highest impact factor, "as editors and reviewers are tasked with reviewing papers that are not submitted to the most appropriate venues".
Institutional responses to criticism of the impact factor.
Given the growing criticism and its widespread usage as a means of research assessment, organisations and institutions have begun to take steps to move away from the journal impact factor. In November 2007 the European Association of Science Editors (EASE) issued an official statement recommending "that journal impact factors are used only—and cautiously—for measuring and comparing the influence of entire journals, but not for the assessment of single papers, and certainly not for the assessment of researchers or research programmes".
In July 2008, the International Council for Science Committee on Freedom and Responsibility in the Conduct of Science issued a "statement on publication practices and indices and the role of peer review in research assessment", suggesting many possible solutions, for example considering only a limited number of publications per year for each scientist, or even penalising scientists for an excessive number of publications per year (e.g., more than 20).
In February 2010, the Deutsche Forschungsgemeinschaft (German Research Foundation) published new guidelines to reduce the number of publications that could be submitted when applying for funding: "The focus has not been on what research someone has done but rather how many papers have been published and where." They noted that for decisions concerning "performance-based funding allocations, postdoctoral qualifications, appointments, or reviewing funding proposals, [where] increasing importance has been given to numerical indicators such as the h-index and the impact factor". The UK's Research Excellence Framework for 2014 also banned the use of the journal impact factor, although evidence suggested that this ban was often ignored.
In response to growing concerns over the inappropriate use of journal impact factors in evaluating scientific outputs and scientists themselves, the American Society for Cell Biology together with a group of editors and publishers of scholarly journals created the San Francisco Declaration on Research Assessment (DORA). Released in May 2013, DORA has garnered support from thousands of individuals and hundreds of institutions, including in March 2015 the League of European Research Universities (a consortium of 21 of the most renowned research universities in Europe), who have endorsed the document on the DORA website.
Publishers, even those with high impact factors, also recognised the flaws. "Nature" magazine criticised the over-reliance on JIF, pointing not just to its statistical flaws but to negative effects on science: "The resulting pressures and disappointments are nothing but demoralizing, and in badly run labs can encourage sloppy research that, for example, fails to test assumptions thoroughly or to take all the data into account before submitting big claims." Various publishers now use a mixture of metrics on their website; the PLOS series of journals does not display the impact factor. Microsoft Academic took a similar view, stating that h-index, EI/SCI and journal impact factors are not shown because "the research literature has provided abundant evidence that these metrics are at best a rough approximation of research impact and scholarly influence."
In 2021, Utrecht University promised to abandon all quantitative bibliometrics, including the impact factor. The university stated that "it has become a very sick model that goes beyond what is really relevant for science and putting science forward". This followed a 2018 decision by the main Dutch funding body for research, NWO, to remove all references to journal impact factors and the h-index in all call texts and application forms. Utrecht's decision met with some resistance. An open letter signed by over 150 Dutch academics argued that, while imperfect, the JIF is still useful, and that omitting it "will lead to randomness and a compromising of scientific quality".
Related indices.
Some related metrics, also calculated and published by the same organization, include:
A given journal may attain a different quartile or percentile in different categories.
As with the impact factor, there are some nuances to this: for example, Clarivate excludes certain article types (such as news items, correspondence, and errata) from the denominator.
Other measures of scientific impact.
Additional journal-level metrics are available from other organizations. For example, "CiteScore" is a metric for serial titles in Scopus launched in December 2016 by Elsevier. While these metrics apply only to journals, there are also author-level metrics, such as the h-index, that apply to individual researchers. In addition, article-level metrics measure impact at an article level instead of journal level.
Other more general alternative metrics, or "altmetrics", that include article views, downloads, or mentions in social media, offer a different perspective on research impact, concentrating more on immediate social impact in and outside academia.
Counterfeit impact factors.
Fake impact factors or bogus impact factors are produced by certain companies or individuals. According to an article published in the "Electronic Physician", these include Global Impact Factor, Citefactor, and Universal Impact Factor. Jeffrey Beall maintained a list of such misleading metrics. Another deceitful practice is reporting "alternative impact factors", calculated as the average number of citations per article using citation indices other than JCR such as Google Scholar (e.g., "Google-based Journal Impact Factor") or Microsoft Academic.
False impact factors are often used by predatory publishers. Consulting Journal Citation Reports' master journal list can confirm if a publication is indexed by the "Journal Citation Reports". The use of fake impact metrics is considered a red flag.
Notes on alternatives.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{IF}_y = \\frac{\\text{Citations}_y}{\\text{Publications}_{y-1} + \\text{Publications}_{y-2}}."
},
{
"math_id": 1,
"text": "\\text{IF}_{2017} = \\frac{\\text{Citations}_{2017}}{\\text{Publications}_{2016} + \\text{Publications}_{2015}} = \\frac{74090}{880 + 902} = 41.577."
}
] | https://en.wikipedia.org/wiki?curid=1036865 |
10372 | Entire function | Function that is holomorphic on the whole complex plane
In complex analysis, an entire function, also called an integral function, is a complex-valued function that is holomorphic on the whole complex plane. Typical examples of entire functions are polynomials and the exponential function, and any finite sums, products and compositions of these, such as the trigonometric functions sine and cosine and their hyperbolic counterparts sinh and cosh, as well as derivatives and integrals of entire functions such as the error function. If an entire function formula_0 has a
root at formula_1, then formula_2, taking the limit value at formula_1, is an entire function. On the other hand, the natural logarithm, the reciprocal function, and the square root are all not entire functions, nor can they be continued analytically to an entire function.
A transcendental entire function is an entire function that is not a polynomial.
Just as meromorphic functions can be viewed as a generalization of rational fractions, entire functions can be viewed as a generalization of polynomials. In particular, if for meromorphic functions one can generalize the factorization into simple fractions (the Mittag-Leffler theorem on the decomposition of a meromorphic function), then for entire functions there is a generalization of the factorization — the Weierstrass theorem on entire functions.
Properties.
Every entire function formula_3 can be represented as a single power series
formula_4
that converges everywhere in the complex plane, hence uniformly on compact sets. The radius of convergence is infinite, which implies that
formula_5
or, equivalently,
formula_6
Any power series satisfying this criterion will represent an entire function.
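As an illustrative numerical check (my own addition, not part of the original discussion), the following Python sketch evaluates |a_n|^(1/n) for the coefficients a_n = 1/n! of the exponential function, which tends to 0 as the criterion above requires, and contrasts it with the coefficients of 1/(1 − z), all equal to 1, for which the quantity stays at 1 (that series converges only in the unit disk):
import math
def nth_root_of_abs_coeff(log_abs_a_n, n):
    # computes |a_n|^(1/n) from log|a_n| to avoid overflow for large n
    return math.exp(log_abs_a_n / n)
for n in (10, 100, 1000):
    log_coeff_exp = -math.lgamma(n + 1)                # log|1/n!|, coefficients of exp(z)
    print(n, nth_root_of_abs_coeff(log_coeff_exp, n))  # tends to 0: exp is entire
    print(n, nth_root_of_abs_coeff(0.0, n))            # coefficients of 1/(1-z): stays at 1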
If (and only if) the coefficients of the power series are all real then the function evidently takes real values for real arguments, and the value of the function at the complex conjugate of formula_7 will be the complex conjugate of the value at formula_8 Such functions are sometimes called self-conjugate (the conjugate function, formula_9 being given by formula_10).
If the real part of an entire function is known in a neighborhood of a point then both the real and imaginary parts are known for the whole complex plane, up to an imaginary constant. For instance, if the real part is known in a neighborhood of zero, then we can find the coefficients for formula_11 from the following derivatives with respect to a real variable formula_12:
formula_13
Note however that an entire function is not determined by its real part on all curves. In particular, if the real part is given on any curve in the complex plane where the real part of some other entire function is zero, then any multiple of that function can be added to the function we are trying to determine. For example, if the curve where the real part is known is the real line, then we can add formula_14 times any self-conjugate function. If the curve forms a loop, then it is determined by the real part of the function on the loop since the only functions whose real part is zero on the curve are those that are everywhere equal to some imaginary number.
The Weierstrass factorization theorem asserts that any entire function can be represented by a product involving its zeroes (or "roots").
The entire functions on the complex plane form an integral domain (in fact a Prüfer domain). They also form a commutative unital associative algebra over the complex numbers.
Liouville's theorem states that any bounded entire function must be constant.
As a consequence of Liouville's theorem, any function that is entire on the whole Riemann sphere
is constant. Thus any non-constant entire function must have a singularity at the complex point at infinity, either a pole for a polynomial or an essential singularity for a transcendental entire function. Specifically, by the Casorati–Weierstrass theorem, for any transcendental entire function formula_15 and any complex formula_16 there is a sequence formula_17 such that
formula_18
Picard's little theorem is a much stronger result: Any non-constant entire function takes on every complex number as value, possibly with a single exception. When an exception exists, it is called a lacunary value of the function. The possibility of a lacunary value is illustrated by the exponential function, which never takes on the value 0. One can take a suitable branch of the logarithm of an entire function that never hits 0, so that this will also be an entire function (according to the Weierstrass factorization theorem). The logarithm hits every complex number except possibly one number, which implies that the first function will hit any value other than 0 an infinite number of times. Similarly, a non-constant, entire function that does not hit a particular value will hit every other value an infinite number of times.
Liouville's theorem is a special case of the following statement:
<templatestyles src="Math_theorem/styles.css" />
Theorem — Assume formula_19 formula_20 are positive constants and formula_21 is a non-negative integer. An entire function formula_22 satisfying the inequality formula_23 for all formula_7 with formula_24 is necessarily a polynomial, of degree at most formula_25
Similarly, an entire function formula_15 satisfying the inequality formula_26 for all formula_27 with formula_24 is necessarily a polynomial, of degree at least formula_28.
Growth.
Entire functions may grow as fast as any increasing function: for any increasing function
formula_29 there exists an entire function formula_22 such that
formula_30 for all real formula_31. Such a function formula_22 may be easily found of the form:
formula_32
for a constant formula_33 and a strictly increasing sequence of positive integers formula_34. Any such sequence defines an entire function formula_0, and if the powers are chosen appropriately we may satisfy the inequality formula_30 for all real formula_31. (For instance, it certainly holds if one chooses formula_35 and, for any integer formula_36 one chooses an even exponent formula_37 such that formula_38).
Order and type.
The order (at infinity) of an entire function formula_0 is defined using the limit superior as:
formula_39
where formula_40 is the disk of radius formula_41 and formula_42 denotes the supremum norm of formula_0 on formula_40. The order is a non-negative real number or infinity (except when formula_43 for all formula_27). In other words, the order of formula_0 is the infimum of all formula_44 such that:
formula_45
The example of formula_46 shows that this does not mean formula_47 if
formula_0 is of order formula_44.
If formula_48 one can also define the type:
formula_49
If the order is 1 and the type is formula_50, the function is said to be "of exponential type formula_50". If it is of order less than 1 it is said to be of exponential type 0.
If formula_51 then the order and type can be found by the formulas
formula_52
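As a numerical illustration of the coefficient formulas (my own check, not from the original text), the sketch below estimates the order and type of the exponential function from its Taylor coefficients a_n = 1/n!; the estimates slowly approach the true values, order ρ = 1 and type σ = 1, as n grows:
import math
def order_estimate(n):
    # n ln n / (-ln|a_n|) with a_n = 1/n!, so -ln|a_n| = ln(n!)
    return n * math.log(n) / math.lgamma(n + 1)
def type_estimate(n, rho=1.0):
    # from (e * rho * sigma)^(1/rho) ≈ n^(1/rho) * |a_n|^(1/n)
    root = n ** (1.0 / rho) * math.exp(-math.lgamma(n + 1) / n)
    return root ** rho / (math.e * rho)
for n in (10, 100, 10_000):
    print(n, order_estimate(n), type_estimate(n))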
Let formula_53 denote the formula_28-th derivative of formula_22, then we may restate these formulas in terms of the derivatives at any arbitrary point formula_54:
formula_55
The type may be infinite, as in the case of the reciprocal gamma function, or zero (see example below under ).
Another way to find out the order and type is Matsaev's theorem.
Examples.
Here are some examples of functions of various orders:
Order "ρ".
For arbitrary positive numbers formula_56 and formula_50 one can construct an example of an entire function of order formula_56 and type formula_50 using:
formula_57
Order 1/4.
formula_59
where formula_60
Order 1/3.
formula_61
where
formula_62
Order 1/2.
formula_63 with formula_64 (for which the type is given by formula_65)
Genus.
Entire functions of finite order have Hadamard's canonical representation (Hadamard factorization theorem):
formula_74
where formula_75 are those roots of formula_22 that are not zero (formula_76), formula_44 is the order of the zero of formula_22 at formula_77 (the case formula_78 being taken to mean formula_79), formula_80 a polynomial (whose degree we shall call formula_81), and formula_82 is the smallest non-negative integer such that the series
formula_83
converges. The non-negative integer formula_84 is called the genus of the entire function formula_22.
If the order formula_56 is not an integer, then formula_85 is the integer part of formula_56. If the order is a positive integer, then there are two possibilities: formula_86 or formula_87.
For example, formula_88, formula_89 and formula_90 are entire functions of genus formula_91.
Other examples.
According to J. E. Littlewood, the Weierstrass sigma function is a 'typical' entire function. This statement can be made precise in the theory of random entire functions: the asymptotic behavior of almost all entire functions is similar to that of the sigma function. Other examples include the Fresnel integrals, the Jacobi theta function, and the reciprocal Gamma function. The exponential function and the error function are special cases of the Mittag-Leffler function. According to the fundamental theorem of Paley and Wiener, Fourier transforms of functions (or distributions) with bounded support are entire functions of order formula_92 and finite type.
Other examples are solutions of linear differential equations with polynomial coefficients. If the coefficient at the highest derivative is constant, then all solutions of such equations are entire functions. For example, the exponential function, sine, cosine, Airy functions and Parabolic cylinder functions arise in this way. The class of entire functions is closed with respect to compositions. This makes it possible to study dynamics of entire functions.
The composition of an entire function with the square root of a complex number is itself entire if the original function is even, for example formula_93.
If a sequence of polynomials all of whose roots are real converges in a neighborhood of the origin to a limit which is not identically equal to zero, then this limit is an entire function. Such entire functions form the Laguerre–Pólya class, which can also be characterized in terms of the Hadamard product, namely, formula_22 belongs to this class if and only if in the Hadamard representation all formula_94 are real, formula_95, and
formula_96, where formula_97 and formula_33 are real, and formula_98. For example, the sequence of polynomials
formula_99
converges, as formula_28 increases, to formula_100. The polynomials
formula_101
have all real roots, and converge to formula_102. The polynomials
formula_103
also converge to formula_102, showing the buildup of the Hadamard product for cosine.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "f(z)"
},
{
"math_id": 1,
"text": "w"
},
{
"math_id": 2,
"text": "f(z)/(z-w)"
},
{
"math_id": 3,
"text": "\\ f(z)\\ "
},
{
"math_id": 4,
"text": "\\ f(z) = \\sum_{n=0}^\\infty a_n z^n\\ "
},
{
"math_id": 5,
"text": "\\ \\lim_{n\\to\\infty} |a_n|^{\\frac{1}{n}} = 0\\ "
},
{
"math_id": 6,
"text": "\\ \\lim_{n\\to\\infty} \\frac{\\ln|a_n|}n = -\\infty ~."
},
{
"math_id": 7,
"text": "\\ z\\ "
},
{
"math_id": 8,
"text": "\\ z ~."
},
{
"math_id": 9,
"text": "\\ F^*(z)\\ ,"
},
{
"math_id": 10,
"text": "\\ \\bar F(\\bar z)\\ "
},
{
"math_id": 11,
"text": "n>0"
},
{
"math_id": 12,
"text": "\\ r\\ "
},
{
"math_id": 13,
"text": "\\begin{align}\n\\operatorname\\mathcal{R_e} \\left\\{\\ a_n\\ \\right\\} &= \\frac{1}{n!} \\frac{d^n}{dr^n}\\ \\operatorname\\mathcal{R_e} \\left\\{\\ f(r)\\ \\right\\} && \\quad \\mathrm{ at } \\quad r = 0 \\\\\n\\operatorname\\mathcal{I_m}\\left\\{\\ a_n\\ \\right\\} &= \\frac{1}{n!} \\frac{d^n}{dr^n}\\ \\operatorname\\mathcal{R_e} \\left\\{\\ f\\left( r\\ e^{-\\frac{i\\pi}{2n}} \\right)\\ \\right\\} && \\quad \\mathrm{ at } \\quad r = 0\n\\end{align}"
},
{
"math_id": 14,
"text": "\\ i\\ "
},
{
"math_id": 15,
"text": "\\ f\\ "
},
{
"math_id": 16,
"text": "\\ w\\ "
},
{
"math_id": 17,
"text": "\\ (z_m)_{m\\in\\N}\\ "
},
{
"math_id": 18,
"text": "\\ \\lim_{m\\to\\infty} |z_m| = \\infty, \\qquad \\text{and} \\qquad \\lim_{m\\to\\infty} f(z_m) = w ~."
},
{
"math_id": 19,
"text": "\\ M\\ ,"
},
{
"math_id": 20,
"text": "\\ R\\ "
},
{
"math_id": 21,
"text": "\\ n\\ "
},
{
"math_id": 22,
"text": "f"
},
{
"math_id": 23,
"text": "\\ |f(z)| \\le M |z|^n\\ "
},
{
"math_id": 24,
"text": "\\ |z| \\ge R\\ ,"
},
{
"math_id": 25,
"text": "\\ n ~."
},
{
"math_id": 26,
"text": "\\ M |z|^n \\le |f(z)|\\ "
},
{
"math_id": 27,
"text": "z"
},
{
"math_id": 28,
"text": "n"
},
{
"math_id": 29,
"text": "g:[0,\\infty)\\to[0,\\infty)"
},
{
"math_id": 30,
"text": "f(x)>g(|x|)"
},
{
"math_id": 31,
"text": "x"
},
{
"math_id": 32,
"text": "f(z)=c+\\sum_{k=1}^{\\infty}\\left(\\frac{z}{k}\\right)^{n_k}"
},
{
"math_id": 33,
"text": "c"
},
{
"math_id": 34,
"text": "n_k"
},
{
"math_id": 35,
"text": "c:=g(2)"
},
{
"math_id": 36,
"text": "k \\ge 1"
},
{
"math_id": 37,
"text": " n_k "
},
{
"math_id": 38,
"text": "\\left(\\frac{k+1}{k}\\right)^{n_k} \\ge g(k+2)"
},
{
"math_id": 39,
"text": "\\rho = \\limsup_{r\\to\\infty}\\frac{\\ln \\left (\\ln\\| f \\|_{\\infty, B_r} \\right ) }{\\ln r},"
},
{
"math_id": 40,
"text": "B_r"
},
{
"math_id": 41,
"text": "r"
},
{
"math_id": 42,
"text": "\\|f \\|_{\\infty, B_r}"
},
{
"math_id": 43,
"text": "f(z) = 0"
},
{
"math_id": 44,
"text": "m"
},
{
"math_id": 45,
"text": "f(z) = O \\left (\\exp \\left (|z|^m \\right ) \\right ), \\quad \\text{as } z \\to \\infty."
},
{
"math_id": 46,
"text": "f(z) = \\exp(2z^2)"
},
{
"math_id": 47,
"text": "f(z)=O(\\exp(|z|^m))"
},
{
"math_id": 48,
"text": "0<\\rho < \\infty,"
},
{
"math_id": 49,
"text": "\\sigma=\\limsup_{r\\to\\infty}\\frac{\\ln \\| f\\|_{\\infty, B_r}} {r^\\rho}."
},
{
"math_id": 50,
"text": "\\sigma"
},
{
"math_id": 51,
"text": " f(z)=\\sum_{n=0}^\\infty a_n z^n,"
},
{
"math_id": 52,
"text": "\\begin{align}\n\\rho &=\\limsup_{n\\to\\infty} \\frac{n\\ln n}{-\\ln|a_n|} \\\\[6pt]\n(e\\rho\\sigma)^{\\frac{1}{\\rho}} &= \\limsup_{n\\to\\infty} n^{\\frac{1}{\\rho}} |a_n|^{\\frac{1}{n}}\n\\end{align}"
},
{
"math_id": 53,
"text": "f^{(n)}"
},
{
"math_id": 54,
"text": "z_0"
},
{
"math_id": 55,
"text": "\\begin{align}\n\\rho &=\\limsup_{n\\to\\infty}\\frac{n\\ln n}{n\\ln n-\\ln|f^{(n)}(z_0)|}=\\left(1-\\limsup_{n\\to\\infty}\\frac{\\ln|f^{(n)}(z_0)|}{n\\ln n}\\right)^{-1} \\\\[6pt]\n(\\rho\\sigma)^{\\frac{1}{\\rho}} &=e^{1-\\frac{1}{\\rho}} \\limsup_{n\\to\\infty}\\frac{|f^{(n)}(z_0)|^{\\frac{1}{n}}}{n^{1-\\frac{1}{\\rho}}}\n\\end{align}"
},
{
"math_id": 56,
"text": "\\rho"
},
{
"math_id": 57,
"text": "f(z)=\\sum_{n=1}^\\infty \\left (\\frac{e\\rho\\sigma}{n} \\right )^{\\frac{n}{\\rho}} z^n"
},
{
"math_id": 58,
"text": "\\sum_{n=0}^\\infty 2^{-n^2} z^n"
},
{
"math_id": 59,
"text": "f(\\sqrt[4]z)"
},
{
"math_id": 60,
"text": "f(u)=\\cos(u)+\\cosh(u)"
},
{
"math_id": 61,
"text": "f(\\sqrt[3]z)"
},
{
"math_id": 62,
"text": "f(u)=e^u+e^{\\omega u}+e^{\\omega^2 u} = e^u+2e^{-\\frac{u}{2}}\\cos \\left (\\frac{\\sqrt 3u}{2} \\right ), \\quad \\text{with } \\omega \\text{ a complex cube root of 1}."
},
{
"math_id": 63,
"text": "\\cos \\left (a\\sqrt z \\right )"
},
{
"math_id": 64,
"text": "a\\neq 0"
},
{
"math_id": 65,
"text": "\\sigma=|a|"
},
{
"math_id": 66,
"text": "\\exp(az)"
},
{
"math_id": 67,
"text": "\\sin(z)"
},
{
"math_id": 68,
"text": "\\cosh(z)"
},
{
"math_id": 69,
"text": "1/\\Gamma(z)"
},
{
"math_id": 70,
"text": "\\sum_{n=2}^\\infty \\frac{z^n}{(n\\ln n)^n}. \\quad (\\sigma=0)"
},
{
"math_id": 71,
"text": "Ai(z)"
},
{
"math_id": 72,
"text": "\\exp(az^2)"
},
{
"math_id": 73,
"text": "\\exp(\\exp(z))"
},
{
"math_id": 74,
"text": "f(z)=z^me^{P(z)}\\prod_{n=1}^\\infty\\left(1-\\frac{z}{z_n}\\right)\\exp\\left(\\frac{z}{z_n}+\\cdots+\\frac{1}{p} \\left(\\frac{z}{z_n}\\right)^p\\right),"
},
{
"math_id": 75,
"text": "z_k"
},
{
"math_id": 76,
"text": "z_k \\neq 0"
},
{
"math_id": 77,
"text": "z = 0"
},
{
"math_id": 78,
"text": "m = 0"
},
{
"math_id": 79,
"text": "f(0) \\neq 0"
},
{
"math_id": 80,
"text": "P"
},
{
"math_id": 81,
"text": "q"
},
{
"math_id": 82,
"text": "p"
},
{
"math_id": 83,
"text": "\\sum_{n=1}^\\infty\\frac{1}{|z_n|^{p+1}}"
},
{
"math_id": 84,
"text": "g=\\max\\{p, q\\}"
},
{
"math_id": 85,
"text": "g = [ \\rho ]"
},
{
"math_id": 86,
"text": "g = \\rho-1"
},
{
"math_id": 87,
"text": "g = \\rho "
},
{
"math_id": 88,
"text": "\\sin"
},
{
"math_id": 89,
"text": "\\cos"
},
{
"math_id": 90,
"text": "\\exp"
},
{
"math_id": 91,
"text": "g = \\rho = 1"
},
{
"math_id": 92,
"text": "1"
},
{
"math_id": 93,
"text": "\\cos(\\sqrt{z})"
},
{
"math_id": 94,
"text": "z_n"
},
{
"math_id": 95,
"text": "\\rho\\leq 1"
},
{
"math_id": 96,
"text": "P(z)=a+bz+cz^2"
},
{
"math_id": 97,
"text": "b"
},
{
"math_id": 98,
"text": "c\\leq 0"
},
{
"math_id": 99,
"text": "\\left (1-\\frac{(z-d)^2}{n} \\right )^n"
},
{
"math_id": 100,
"text": "\\exp(-(z-d)^2)"
},
{
"math_id": 101,
"text": " \\frac{1}{2}\\left ( \\left (1+\\frac{iz}{n} \\right )^n+ \\left (1-\\frac{iz}{n} \\right )^n \\right )"
},
{
"math_id": 102,
"text": "\\cos(z)"
},
{
"math_id": 103,
"text": " \\prod_{m=1}^n \\left(1-\\frac{z^2}{\\left ( \\left (m-\\frac{1}{2} \\right )\\pi \\right )^2}\\right)"
}
] | https://en.wikipedia.org/wiki?curid=10372 |
1037324 | Kappa curve | In geometry, the kappa curve or Gutschoven's curve is a two-dimensional algebraic curve resembling the Greek letter κ (kappa). The kappa curve was first studied by Gérard van Gutschoven around 1662. In the history of mathematics, it is remembered as one of the first examples of Isaac Barrow's application of rudimentary calculus methods to determine the tangent of a curve. Isaac Newton and Johann Bernoulli continued the studies of this curve subsequently.
Using the Cartesian coordinate system it can be expressed as
formula_0
or, using parametric equations,
formula_1
In polar coordinates its equation is even simpler:
formula_2
It has two vertical asymptotes at "x" = ±"a", shown as dashed blue lines in the figure at right.
The kappa curve's curvature:
formula_3
Tangential angle:
formula_4
Tangents via infinitesimals.
The tangent lines of the kappa curve can also be determined geometrically using differentials and the elementary rules of infinitesimal arithmetic. Suppose x and y are variables, while a is taken to be a constant. From the definition of the kappa curve,
formula_5
Now, an infinitesimal change in our location must also change the value of the left hand side, so
formula_6
Distributing the differential and applying appropriate rules,
formula_7
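This slope (re-derived via implicit differentiation in the next section) can also be checked symbolically. The following short sketch uses the SymPy library, which is an assumption of this example rather than anything used in the article:
import sympy as sp
x, y, a = sp.symbols('x y a')
F = x**2 * (x**2 + y**2) - a**2 * y**2       # defining equation of the kappa curve, F = 0
dydx = -sp.diff(F, x) / sp.diff(F, y)        # implicit derivative dy/dx = -F_x / F_y
expected = x * (2*x**2 + y**2) / (y * (a**2 - x**2))
print(sp.simplify(dydx - expected))          # prints 0, confirming the slope found above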
Derivative.
If we use the modern concept of a functional relationship "y"("x") and apply implicit differentiation, the slope of a tangent line to the kappa curve at a point ("x","y") is:
formula_8 | [
{
"math_id": 0,
"text": "x^2\\left(x^2 + y^2\\right) = a^2y^2"
},
{
"math_id": 1,
"text": "\\begin{align}\nx &= a\\sin t,\\\\\ny &= a\\sin t\\tan t.\n\\end{align}"
},
{
"math_id": 2,
"text": "r = a\\tan\\theta."
},
{
"math_id": 3,
"text": "\\kappa(\\theta) = \\frac{8\\left(3 - \\sin^2\\theta\\right)\\sin^4\\theta}{a \\left(\\sin^2(2\\theta) + 4\\right)^\\frac32}."
},
{
"math_id": 4,
"text": "\\phi(\\theta) = -\\arctan\\left(\\tfrac12 \\sin(2\\theta)\\right)."
},
{
"math_id": 5,
"text": " x^2\\left(x^2 + y^2\\right)-a^2y^2 = 0 "
},
{
"math_id": 6,
"text": "d \\left(x^2\\left(x^2 + y^2\\right)-a^2y^2\\right) = 0 "
},
{
"math_id": 7,
"text": "\\begin{align}\nd \\left(x^2\\left(x^2 + y^2\\right)\\right)-d \\left(a^2y^2\\right) &= 0 \\\\[6px]\n(2 x\\,dx ) \\left(x^2+y^2\\right) + x^2 (2x\\,dx + 2y\\,dy) - a^2 2y\\,dy &= 0 \\\\[6px]\n\\left( 4 x^3 + 2 x y^2\\right) dx + \\left( 2 y x^2 - 2 a^2 y \\right) dy &= 0 \\\\[6px]\nx \\left( 2 x^2 + y^2 \\right) dx + y \\left(x^2 - a^2\\right) dy &= 0 \\\\[6px]\n\\frac{ x \\left( 2 x^2 + y^2 \\right) }{ y \\left(a^2 - x^2\\right)} &= \\frac{dy}{dx} \n\\end{align}"
},
{
"math_id": 8,
"text": "\\begin{align}\n2 x \\left( x^2 + y^2 \\right) + x^2 \\left( 2x + 2 y \\frac{dy}{dx} \\right) &= 2 a^2 y \\frac{dy}{dx} \\\\[6px]\n2 x^3 + 2 x y^2 + 2 x^3 &= 2 a^2 y \\frac{dy}{dx} - 2 x^2 y \\frac{dy}{dx} \\\\[6px]\n4 x^3 + 2 x y^2 &= \\left(2 a^2 y - 2 x^2 y \\right) \\frac{dy}{dx} \\\\[6px]\n\\frac{2 x^3 + x y^2}{a^2 y - x^2 y} &= \\frac{dy}{dx} \n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=1037324 |
1037551 | Disjoint-set data structure | Data structure for storing non-overlapping sets
In computer science, a disjoint-set data structure, also called a union–find data structure or merge–find set, is a data structure that stores a collection of disjoint (non-overlapping) sets. Equivalently, it stores a partition of a set into disjoint subsets. It provides operations for adding new sets, merging sets (replacing them by their union), and finding a representative member of a set. The last operation makes it possible to find out efficiently if any two elements are in the same or different sets.
While there are several ways of implementing disjoint-set data structures, in practice they are often identified with a particular implementation called a disjoint-set forest. This is a specialized type of forest which performs unions and finds in near-constant amortized time. To perform a sequence of m addition, union, or find operations on a disjoint-set forest with n nodes requires total time "O"("m"α("n")), where α("n") is the extremely slow-growing inverse Ackermann function. Disjoint-set forests do not guarantee this performance on a per-operation basis. Individual union and find operations can take longer than a constant times α("n") time, but each operation causes the disjoint-set forest to adjust itself so that successive operations are faster. Disjoint-set forests are both asymptotically optimal and practically efficient.
Disjoint-set data structures play a key role in Kruskal's algorithm for finding the minimum spanning tree of a graph. The importance of minimum spanning trees means that disjoint-set data structures underlie a wide variety of algorithms. In addition, disjoint-set data structures also have applications to symbolic computation, as well as in compilers, especially for register allocation problems.
History.
Disjoint-set forests were first described by Bernard A. Galler and Michael J. Fischer in 1964. In 1973, their time complexity was bounded by formula_0, the iterated logarithm of formula_1, by Hopcroft and Ullman. In 1975, Robert Tarjan was the first to prove the formula_2 (inverse Ackermann function) upper bound on the algorithm's time complexity. He also proved it to be tight. In 1979, he showed that this was the lower bound for a certain class of algorithms, which includes the Galler–Fischer structure. In 1989, Fredman and Saks showed that formula_3 (amortized) words of formula_4 bits must be accessed by "any" disjoint-set data structure per operation, thereby proving the optimality of the data structure in this model.
In 1991, Galil and Italiano published a survey of data structures for disjoint-sets.
In 1994, Richard J. Anderson and Heather Woll described a parallelized version of Union–Find that never needs to block.
In 2007, Sylvain Conchon and Jean-Christophe Filliâtre developed a semi-persistent version of the disjoint-set forest data structure and formalized its correctness using the proof assistant Coq. "Semi-persistent" means that previous versions of the structure are efficiently retained, but accessing previous versions of the data structure invalidates later ones. Their fastest implementation achieves performance almost as efficient as the non-persistent algorithm. They do not perform a complexity analysis.
Variants of disjoint-set data structures with better performance on a restricted class of problems have also been considered. Gabow and Tarjan showed that if the possible unions are restricted in certain ways, then a truly linear time algorithm is possible.
Representation.
Each node in a disjoint-set forest consists of a pointer and some auxiliary information, either a size or a rank (but not both). The pointers are used to make parent pointer trees, where each node that is not the root of a tree points to its parent. To distinguish root nodes from others, their parent pointers have invalid values, such as a circular reference to the node or a sentinel value. Each tree represents a set stored in the forest, with the members of the set being the nodes in the tree. Root nodes provide set representatives: Two nodes are in the same set if and only if the roots of the trees containing the nodes are equal.
Nodes in the forest can be stored in any way convenient to the application, but a common technique is to store them in an array. In this case, parents can be indicated by their array index. Every array entry requires Θ(log "n") bits of storage for the parent pointer. A comparable or lesser amount of storage is required for the rest of the entry, so the number of bits required to store the forest is Θ("n" log "n"). If an implementation uses fixed size nodes (thereby limiting the maximum size of the forest that can be stored), then the necessary storage is linear in n.
Operations.
Disjoint-set data structures support three operations: Making a new set containing a new element; Finding the representative of the set containing a given element; and Merging two sets.
Making new sets.
The codice_0 operation adds a new element into a new set containing only the new element, and the new set is added to the data structure. If the data structure is instead viewed as a partition of a set, then the codice_0 operation enlarges the set by adding the new element, and it extends the existing partition by putting the new element into a new subset containing only the new element.
In a disjoint-set forest, codice_0 initializes the node's parent pointer and the node's size or rank. If a root is represented by a node that points to itself, then adding an element can be described using the following pseudocode:
function MakeSet("x") is
if "x" is not already in the forest then
"x".parent := "x"
"x".size := 1 "// if nodes store size"
"x".rank := 0 "// if nodes store rank"
end if
end function
This operation has constant time complexity. In particular, initializing a
disjoint-set forest with n nodes requires "O"("n")
time.
Lack of a parent assigned to the node implies that the node is not present in the forest.
In practice, codice_0 must be preceded by an operation that allocates memory to hold x. As long as memory allocation is an amortized constant-time operation, as it is for a good dynamic array implementation, it does not change the asymptotic performance of the disjoint-set forest.
Finding set representatives.
The codice_4 operation follows the chain of parent pointers from a specified query node x until it reaches a root element. This root element represents the set to which x belongs and may be x itself. codice_4 returns the root element it reaches.
Performing a codice_4 operation presents an important opportunity for improving the forest. The time in a codice_4 operation is spent chasing parent pointers, so a flatter tree leads to faster codice_4 operations. When a codice_4 is executed, there is no faster way to reach the root than by following each parent pointer in succession. However, the parent pointers visited during this search can be updated to point closer to the root. Because every element visited on the way to a root is part of the same set, this does not change the sets stored in the forest. But it makes future codice_4 operations faster, not only for the nodes between the query node and the root, but also for their descendants. This updating is an important part of the disjoint-set forest's amortized performance guarantee.
There are several algorithms for codice_4 that achieve the asymptotically optimal time complexity. One family of algorithms, known as path compression, makes every node between the query node and the root point to the root. Path compression can be implemented using a simple recursion as follows:
function Find("x") is
if "x".parent ≠ "x" then
"x".parent := Find("x".parent)
return "x".parent
else
return "x"
end if
end function
This implementation makes two passes, one up the tree and one back down. It requires enough scratch memory to store the path from the query node to the root (in the above pseudocode, the path is implicitly represented using the call stack). This can be decreased to a constant amount of memory by performing both passes in the same direction. The constant memory implementation walks from the query node to the root twice, once to find the root and once to update pointers:
function Find("x") is
"root" := "x"
while "root".parent ≠ "root" do
"root" := "root".parent
end while
while "x".parent ≠ "root" do
"parent" := "x".parent
"x".parent := "root"
"x" := "parent"
end while
return "root"
end function
Tarjan and Van Leeuwen also developed one-pass codice_4 algorithms that retain the same worst-case complexity but are more efficient in practice. These are called path splitting and path halving. Both of these update the parent pointers of nodes on the path between the query node and the root. Path splitting replaces every parent pointer on that path by a pointer to the node's grandparent:
function Find("x") is
while "x".parent ≠ "x" do
("x", "x".parent) := ("x".parent, "x".parent.parent)
end while
return "x"
end function
Path halving works similarly but replaces only every other parent pointer:
function Find("x") is
while "x".parent ≠ "x" do
"x".parent := "x".parent.parent
"x" := "x".parent
end while
return "x"
end function
Merging two sets.
The operation codice_13 replaces the set containing x and the set containing y with their union. codice_14 first uses codice_4 to determine the roots of the trees containing x and y. If the roots are the same, there is nothing more to do. Otherwise, the two trees must be merged. This is done by either setting the parent pointer of x's root to y's, or setting the parent pointer of y's root to x's.
The choice of which node becomes the parent has consequences for the complexity of future operations on the tree. If it is done carelessly, trees can become excessively tall. For example, suppose that codice_14 always made the tree containing x a subtree of the tree containing y. Begin with a forest that has just been initialized with elements formula_5 and execute codice_17, codice_18, ..., codice_19. The resulting forest contains a single tree whose root is n, and the path from 1 to n passes through every node in the tree. For this forest, the time to run codice_20 is "O"("n").
In an efficient implementation, tree height is controlled using union by size or union by rank. Both of these require a node to store information besides just its parent pointer. This information is used to decide which root becomes the new parent. Both strategies ensure that trees do not become too deep.
Union by size.
In the case of union by size, a node stores its size, which is simply its number of descendants (including the node itself). When the trees with roots x and y are merged, the node with more descendants becomes the parent. If the two nodes have the same number of descendants, then either one can become the parent. In both cases, the size of the new parent node is set to its new total number of descendants.
function Union("x", "y") is
"// Replace nodes by roots"
"x" := Find("x")
"y" := Find("y")
if "x" = "y" then
return "// x and y are already in the same set"
end if
"// If necessary, swap variables to ensure that"
"// x has at least as many descendants as y"
if "x".size < "y".size then
("x", "y") := ("y", "x")
end if
"// Make x the new root"
"y".parent := "x"
"// Update the size of x"
"x".size := "x".size + "y".size
end function
The number of bits necessary to store the size is clearly the number of bits necessary to store n. This adds a constant factor to the forest's required storage.
Union by rank.
For union by rank, a node stores its rank, which is an upper bound for its height. When a node is initialized, its rank is set to zero. To merge trees with roots x and y, first compare their ranks. If the ranks are different, then the larger rank tree becomes the parent, and the ranks of x and y do not change. If the ranks are the same, then either one can become the parent, but the new parent's rank is incremented by one. While the rank of a node is clearly related to its height, storing ranks is more efficient than storing heights. The height of a node can change during a codice_4 operation, so storing ranks avoids the extra effort of keeping the height correct. In pseudocode, union by rank is:
function Union("x", "y") is
"// Replace nodes by roots"
"x" := Find("x")
"y" := Find("y")
if "x" = "y" then
return "// x and y are already in the same set"
end if
"// If necessary, rename variables to ensure that"
"// x has rank at least as large as that of y"
if "x".rank < "y".rank then
("x", "y") := ("y", "x")
end if
"// Make x the new root"
"y".parent := "x"
"// If necessary, increment the rank of x"
if "x".rank = "y".rank then
"x".rank := "x".rank + 1
end if
end function
It can be shown that every node has rank formula_6 or less. Consequently each rank can be stored in "O"(log log "n") bits and all the ranks can be stored in "O"("n" log log "n") bits. This makes the ranks an asymptotically negligible portion of the forest's size.
It is clear from the above implementations that the size and rank of a node do not matter unless a node is the root of a tree. Once a node becomes a child, its size and rank are never accessed again.
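The pseudocode above can be combined into a short, array-based implementation. The following Python sketch (its names and structure are choices of this example, not part of any standard library) uses path compression in the find operation and union by size:
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))   # each element starts as the root of its own tree
        self.size = [1] * n
    def find(self, x):
        # first pass: locate the root; second pass: path compression
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root
    def union(self, x, y):
        # union by size: attach the smaller tree below the larger root
        x, y = self.find(x), self.find(y)
        if x == y:
            return False               # already in the same set
        if self.size[x] < self.size[y]:
            x, y = y, x
        self.parent[y] = x
        self.size[x] += self.size[y]
        return True
For instance, after ds = DisjointSet(5) and ds.union(0, 1), the call ds.find(0) == ds.find(1) returns True while ds.find(2) still differs; union returns False when its arguments were already in the same set.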
Time complexity.
A disjoint-set forest implementation in which codice_4 does not update parent pointers, and in which codice_14 does not attempt to control tree heights, can have trees with height "O"("n"). In such a situation, the codice_4 and codice_14 operations require "O"("n") time.
If an implementation uses path compression alone, then a sequence of n codice_0 operations, followed by up to "n" − 1 codice_14 operations and "f" codice_4 operations, has a worst-case running time of formula_7.
Using union by rank, but without updating parent pointers during codice_4, gives a running time of formula_8 for m operations of any type, up to n of which are codice_0 operations.
The combination of path compression, splitting, or halving, with union by size or by rank, reduces the running time for m operations of any type, up to n of which are codice_0 operations, to formula_9. This makes the amortized running time of each operation formula_10. This is asymptotically optimal, meaning that every disjoint set data structure must use formula_3 amortized time per operation. Here, the function formula_11 is the inverse Ackermann function. The inverse Ackermann function grows extraordinarily slowly, so this factor is 4 or less for any n that can actually be written in the physical universe. This makes disjoint-set operations practically amortized constant time.
Proof of O(m log* n) time complexity of Union-Find.
The precise analysis of the performance of a disjoint-set forest is somewhat intricate. However, there is a much simpler analysis that proves that the amortized time for any m codice_4 or codice_14 operations on a disjoint-set forest containing n objects is "O"("m" log* "n"), where log* denotes the iterated logarithm.
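For intuition about how slowly log* grows, the following quick sketch (an illustration added here, not part of the proof) computes the iterated logarithm for a few values; it never exceeds 5 for any input of practical size:
import math
def log_star(n):
    # iterated logarithm: how many times log2 must be applied until the value drops to 1 or below
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count
print([log_star(n) for n in (2, 16, 65536, 10**80)])   # -> [1, 3, 4, 5]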
Lemma 1: As the find function follows the path to the root, the ranks of the nodes it encounters are increasing.
<templatestyles src="Math_proof/styles.css" />Proof
We claim that as Find and Union operations are applied to the data set, this fact remains true over time. Initially, when each node is the root of its own tree, it is trivially true. The only case when the rank of a node might be changed is when the Union by Rank operation is applied. In this case, a tree with smaller rank will be attached to a tree with greater rank, rather than vice versa. And during the find operation, all nodes visited along the path will be attached to the root, which has larger rank than its children, so this operation won't change this fact either.
Lemma 2: A node u which is root of a subtree with rank r has at least formula_12 nodes.
<templatestyles src="Math_proof/styles.css" />Proof
Initially when each node is the root of its own tree, it's trivially true. Assume that a node u with rank r has at least 2"r" nodes. Then when two trees with rank r are merged using the operation Union by Rank, a tree with rank "r" + 1 results, the root of which has at least formula_13 nodes.
Lemma 3: The maximum number of nodes of rank r is at most formula_14
<templatestyles src="Math_proof/styles.css" />Proof
For convenience, we define "bucket" here: a bucket is a set that contains vertices with particular ranks.
We create some buckets and put vertices into the buckets according to their ranks inductively. That is, vertices with rank 0 go into the zeroth bucket, vertices with rank 1 go into the first bucket, vertices with ranks 2 and 3 go into the second bucket. If the B-th bucket contains vertices with ranks from interval formula_15 then the (B+1)st bucket will contain vertices with ranks from interval formula_16
We can make two observations about the buckets.
Observation 1: The total number of buckets is at most log* "n". This is because each bucket's range of ranks is the exponential of the previous one: a bucket covering ranks formula_17 is followed by a bucket covering ranks formula_18.
Observation 2: The maximum number of elements in the bucket covering ranks formula_17 is at most formula_19. This follows from Lemma 3, since formula_20
Let F represent the list of "find" operations performed, and let
formula_21
formula_22
formula_23
Then the total cost of m finds is formula_24
Since each find operation makes exactly one traversal that leads to a root, we have "T"1 = "O"("m").
Also, from the bound above on the number of buckets, we have "T"2 = "O"("m"log*"n").
For T3, suppose we are traversing an edge from u to v, where u and v have rank in the bucket ["B", 2"B" − 1] and v is not the root (at the time of this traversal, otherwise the traversal would be accounted for in T1). Fix u and consider the sequence formula_25 of nodes that take the role of v in different find operations. Because of path compression and not accounting for the edge to a root, this sequence contains only distinct nodes, and because of Lemma 1 we know that the ranks of the nodes in this sequence are strictly increasing. Because both nodes are in the bucket, we can conclude that the length k of the sequence (the number of times node u is attached to a different root in the same bucket) is at most the number of ranks in the bucket B, that is, at most formula_26
Therefore, formula_27
From Observations 1 and 2, we can conclude that formula_28
Therefore, formula_29
Other structures.
Better worst-case time per operation.
The worst-case time of the codice_4 operation in trees with Union by rank or Union by weight is formula_30 (i.e., it is formula_4 and this bound is tight).
In 1985, N. Blum gave an implementation of the operations that does not use path compression, but compresses trees during formula_31. His implementation runs in formula_32 time per operation, and thus in comparison with Galler and Fischer's structure it has a better worst-case time per operation, but inferior amortized time. In 1999, Alstrup et al. gave a structure that has optimal worst-case
time formula_32 together with inverse-Ackermann amortized time.
Deletion.
The regular implementation as disjoint-set forests does not react favorably to the deletion of elements,
in the sense that the time for codice_4 will not improve as a result of the decrease in the number of elements. However, there exist modern implementations that allow for constant-time deletion and in which the time bound for codice_4 depends on the "current" number of elements.
Applications.
Disjoint-set data structures model the partitioning of a set, for example to keep track of the connected components of an undirected graph. This model can then be used to determine whether two vertices belong to the same component, or whether adding an edge between them would result in a cycle. The Union–Find algorithm is used in high-performance implementations of unification.
This data structure is used by the Boost Graph Library to implement its Incremental Connected Components functionality. It is also a key component in implementing Kruskal's algorithm to find the minimum spanning tree of a graph.
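As a sketch of how the structure fits into Kruskal's algorithm, the snippet below reuses the hypothetical DisjointSet class from the Operations section above: edges are scanned in order of increasing weight and kept only when their endpoints still lie in different sets.
def kruskal(num_vertices, edges):
    # edges: iterable of (weight, u, v) tuples; returns the edges of a minimum spanning tree
    ds = DisjointSet(num_vertices)
    mst = []
    for weight, u, v in sorted(edges):
        if ds.union(u, v):             # union succeeds only across different components
            mst.append((weight, u, v))
    return mst
# example: the weight-4 edge would close a cycle and is therefore skipped
print(kruskal(4, [(1, 0, 1), (2, 1, 2), (4, 0, 2), (3, 2, 3)]))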
The Hoshen–Kopelman algorithm for cluster labeling also relies on a union–find data structure.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(\\log^{*}(n))"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "O(m\\alpha(n))"
},
{
"math_id": 3,
"text": "\\Omega(\\alpha(n))"
},
{
"math_id": 4,
"text": "O(\\log n)"
},
{
"math_id": 5,
"text": "1, 2, 3, \\ldots, n,"
},
{
"math_id": 6,
"text": "\\lfloor \\log n \\rfloor"
},
{
"math_id": 7,
"text": "\\Theta(n+f \\cdot \\left(1 + \\log_{2+f/n} n\\right))"
},
{
"math_id": 8,
"text": "\\Theta(m \\log n)"
},
{
"math_id": 9,
"text": "\\Theta(m\\alpha(n))"
},
{
"math_id": 10,
"text": "\\Theta(\\alpha(n))"
},
{
"math_id": 11,
"text": "\\alpha(n)"
},
{
"math_id": 12,
"text": "2^r"
},
{
"math_id": 13,
"text": "2^r + 2^r = 2^{r + 1}"
},
{
"math_id": 14,
"text": "\\frac{n}{2^r}."
},
{
"math_id": 15,
"text": "\\left[r, 2^r - 1\\right] = [r, R - 1]"
},
{
"math_id": 16,
"text": "\\left[R, 2^R - 1\\right]."
},
{
"math_id": 17,
"text": "\\left[B, 2^B - 1\\right]"
},
{
"math_id": 18,
"text": "\\left[2^B, 2^{2^B} - 1\\right]"
},
{
"math_id": 19,
"text": "\\frac{2n}{2^B}"
},
{
"math_id": 20,
"text": "\\frac{n}{2^B} + \\frac{n}{2^{B+1}} + \\frac{n}{2^{B+2}} + \\cdots + \\frac{n}{2^{2^B - 1}} \\leq \\frac{2n}{2^B}."
},
{
"math_id": 21,
"text": "T_1 = \\sum_F\\text{(link to the root)}"
},
{
"math_id": 22,
"text": "T_2 = \\sum_F\\text{(number of links traversed where the buckets are different)}"
},
{
"math_id": 23,
"text": "T_3 = \\sum_F\\text{(number of links traversed where the buckets are the same).}"
},
{
"math_id": 24,
"text": "T = T_1 + T_2 + T_3."
},
{
"math_id": 25,
"text": "v_1, v_2, \\ldots, v_k"
},
{
"math_id": 26,
"text": "2^B - 1 - B < 2^B."
},
{
"math_id": 27,
"text": "T_3 \\leq \\sum_{[B, 2^B - 1]} \\sum_u 2^B."
},
{
"math_id": 28,
"text": "T_3 \\leq \\sum_{B} 2^B \\frac{2n}{2^B} \\leq 2 n \\log^* n."
},
{
"math_id": 29,
"text": "T = T_1 + T_2 + T_3 = O(m \\log^*n)."
},
{
"math_id": 30,
"text": "\\Theta(\\log n)"
},
{
"math_id": 31,
"text": "union"
},
{
"math_id": 32,
"text": "O(\\log n / \\log\\log n)"
}
] | https://en.wikipedia.org/wiki?curid=1037551 |
10375913 | International dollar | Hypothetical unit of currency
The international dollar (int'l dollar or intl dollar, symbols Int'l$., Intl$., Int$), also known as Geary–Khamis dollar (symbols G–K$ or GK$), is a hypothetical unit of currency that has the same purchasing power parity that the U.S. dollar had in the United States at a given point in time. It is mainly used in economics and financial statistics for various purposes, most notably to determine and compare the purchasing power parity and gross domestic product of various countries and markets. The year 1990 or 2000 is often used as a benchmark year for comparisons that run through time. The unit is often abbreviated, e.g. 2000 US dollars or 2000 International$ (if the benchmark year is 2000).
It is based on the twin concepts of purchasing power parities (PPP) of currencies and the international average prices of commodities. It shows how much a local currency unit is worth within the country's borders. It is used to make comparisons both between countries and over time. For example, comparing per capita gross domestic product (GDP) of various countries in international dollars, rather than based simply on exchange rates, provides a more valid measure to compare standards of living. It was proposed by Roy C. Geary in 1958 and developed by Salem Hanna Khamis between 1970 and 1982.
Figures expressed in international dollars cannot be converted to another country's currency using current market exchange rates; instead they must be converted using the country's PPP exchange rate used in the study.
Exchange rate by country.
According to IMF, below is the implied PPP rate of International dollar to local currency of respective countries in 2022:
Short description of Geary-Khamis system.
The system values the matrix of quantities using the vector of international prices. The vector is obtained by averaging the national prices of the participating countries, after their conversion into a common currency with PPPs, weighted by quantities. PPPs are obtained by averaging the shares of national and international prices in the participating countries, weighted by expenditure. International prices and PPPs are defined by a system of interrelated linear equations that need to be solved simultaneously. The GK method produces PPPs that are transitive and actual final expenditures that are additive.
Inflation adjusting.
When comparing between countries and between years, the international dollar figures may be adjusted to compensate for inflation. In that case, the base year is chosen, and all figures will be expressed in constant international dollars for that specified base year. Researchers must understand which adjustments are reflected in the data (Marty Schmidt):
Description of Geary-Khamis system.
Suppose PPPj is the parity of the j-th currency with a currency called the international dollar; in principle any currency could serve as the base, but the US dollar is the most commonly used. Then the international price Pi is defined as an international average of the prices of the i-th commodity in the various countries. Prices in these countries are expressed in their national currencies. The Geary–Khamis method handles this by using national prices after conversion into a common currency using the purchasing power parities (PPP). Hence, the international price Pi of the i-th commodity is defined as:
formula_0
This equation states that the international price of the i-th commodity is calculated by dividing the total output of the i-th commodity in all selected countries, converted into international dollars using purchasing power parities, by the total quantity of the i-th commodity produced. The previous equation can be rewritten as follows:
formula_1
This equation expresses Pi as a weighted average of the national prices pij after their conversion into international dollars using PPPj. PPPj is in turn defined by the Geary–Khamis system through this equation:
formula_2
The numerator of the equation represents the total value of output in the j-th country expressed in its national currency, and the denominator is the value of the j-th country's output evaluated by repricing at the international prices Pi in international dollars. PPPj therefore gives the number of national currency units per international dollar.
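Since the international prices and the PPPs are defined by interrelated equations, in practice they are computed iteratively. The following Python sketch uses NumPy and entirely made-up prices and quantities for two countries and two commodities; it is an illustration of the fixed-point computation under these assumptions, not an official implementation:
import numpy as np
# p[i, j]: price of commodity i in country j (national currency)
# q[i, j]: quantity of commodity i produced in country j
p = np.array([[1.0, 8.0],
              [2.0, 4.0]])
q = np.array([[30.0, 10.0],
              [10.0, 40.0]])
ppp = np.ones(2)                                   # initial guess: all parities equal to 1
for _ in range(200):
    # international price of commodity i: quantity-weighted average of p/PPP over countries
    P = ((p / ppp) * q).sum(axis=1) / q.sum(axis=1)
    # PPP of country j: national value of output / value of output at international prices
    ppp = (p * q).sum(axis=0) / (P[:, np.newaxis] * q).sum(axis=0)
    ppp = ppp / ppp[0]                             # country 0's currency is taken as the numeraire
P = ((p / ppp) * q).sum(axis=1) / q.sum(axis=1)
print("international prices:", P)
print("PPPs (national currency units per international dollar):", ppp)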
Advantages of Geary-Khamis method.
The Geary-Khamis international dollar is widely used by foreign investors and by institutions such as the IMF, FAO and World Bank. It has become so widely used because it makes it possible to compare living standards between countries. Thanks to the international dollar, these institutions obtain a more trustworthy picture of a country's economic situation and can decide whether to provide additional loans (or other investments) to that country. It also allows purchasing power parities to be compared around the world (developing countries tend to have higher PPPs). Some traders even use the Geary-Khamis method to judge whether a country's currency is undervalued or overvalued. Exchange rates are frequently used for comparing currencies, but this approach does not reflect the real value of a currency within its own country; it is better to also take into account PPP or the prices of goods in that country. The international dollar addresses this by taking into account exchange rates, PPP and average commodity prices. The Geary-Khamis method is considered the best method for comparisons of agricultural output.
Criticism of using 1990 US dollars for long run comparisons.
Economists and historians use many methods when they study economic development in the past. For example, when comparing the United States and the United Kingdom (a pair that has been compared many times in various studies), one researcher may use nominal exchange rates, Lindert and Williamson (2016) used PPP exchange rates, and Broadberry (2003) used growth rates based on own-country price indices. However, none of these is clearly better than the others (or more theoretically justifiable). There is a high probability that the three methods will give three different answers; indeed, Brunt and Fidalgo (2018) showed in their paper that "these three approaches do give three different answers when estimating output levels and growth rates in the US and UK – and they are not only different to one another, but also different to a comparison using the (more theoretically justifiable) chained GK prices." Even though the GK approach is more theoretically justifiable, it should not be used without considering every aspect of the method.
For example, Maddison (2001) used the 1990 international dollar when he examined prices during the time of Christ. Ideally, we would use a price benchmark significantly closer to the time of Christ, but no such benchmarks exist. Another problem is that there is no set of international prices that could be used for valid cross-country comparisons. Comparing GDP levels across countries using their own prices converted at the nominal exchange rate has no value whatsoever; the approach is quite arbitrary because the exchange rate is determined simply by the supply of and demand for currency, and these are greatly dependent on the volumes of trade balances. It makes little (or no) sense to value all goods (both traded and non-traded) at the nominal exchange rate, especially since the absolute volumes of trade may be small compared to total output in both countries.
Economists therefore construct PPP exchange rates, deriving the exchange rate by valuing a basket of goods in the two countries at the two sets of prices and expressing the results as a ratio. This allows us to see how much it actually costs to live in each country. This approach, however, raises another problem: what should go into the basket? Brunt and Fidalgo (2018) use the examples of an English basket and a Chinese basket in 1775. The English basket would contain a lot of wheat and the Chinese basket a lot of rice. Wheat was quite affordable in England and rice was quite affordable in China, but if the goods are switched, both become relatively expensive. This nicely illustrates how the choice of the contents of the basket influences the comparison: simply by using the English basket, China would seem like an expensive place to live, and vice versa.
Geary-Khamis tries to solve this by estimating a weighted average price of each commodity, using the shares of countries in world production to weight the country prices. A further problem emerges when researchers compare countries whose price structures differ from the international price structure. Brunt and Fidalgo (2018) give the examples of Ireland (whose price structure is very similar to the international one) and South Africa (whose price structure is very different from it). When domestic and international price indices are applied to Ireland, its growth rates move in very similar directions, but when they are applied to South Africa, the rates in fact move in opposite directions. It is worth noting that bigger countries tend to have a price index that moves more similarly to the international price index, simply because bigger countries have a bigger weight in the construction of that index.
References.
[
{
"math_id": 0,
"text": "p_i=\\frac{\\left(p_{i1}q_{i1}/{\\rm PPP}_1\\right)+\\left(p_{i2}q_{i2}/{\\rm PPP}_2\\right)+\\cdots+\\left(p_{iM}q_{iM}/{\\rm PPP}_M\\right)}{q_{i1}+q_{i2}+\\cdots+q_{iM}}"
},
{
"math_id": 1,
"text": "p_i=\\sum_{j=1}^{M}{\\left(p_{ij}/{\\rm PPP}_j\\right)\\frac{q_{ij}}{\\sum_{j=1}^{M}q_{ij}}}"
},
{
"math_id": 2,
"text": "{\\rm PPP}_j=\\frac{\\sum_{i=1}^{N}{p_{ij}q_{ij}}}{\\sum_{i=1}^{N}{p_iq_{ij}}}"
}
] | https://en.wikipedia.org/wiki?curid=10375913 |
10376 | Euclidean domain | Commutative ring with a Euclidean division
In mathematics, more specifically in ring theory, a Euclidean domain (also called a Euclidean ring) is an integral domain that can be endowed with a Euclidean function which allows a suitable generalization of the Euclidean division of integers. This generalized Euclidean algorithm can be put to many of the same uses as Euclid's original algorithm in the ring of integers: in any Euclidean domain, one can apply the Euclidean algorithm to compute the greatest common divisor of any two elements. In particular, the greatest common divisor of any two elements exists and can be written as a linear combination
of them (Bézout's identity). Also every ideal in a Euclidean domain is principal, which implies a suitable generalization of the fundamental theorem of arithmetic: every Euclidean domain is a unique factorization domain.
It is important to compare the class of Euclidean domains with the larger class of principal ideal domains (PIDs). An arbitrary PID has much the same "structural properties" of a Euclidean domain (or, indeed, even of the ring of integers), but when an explicit algorithm for Euclidean division is known, one may use the Euclidean algorithm and extended Euclidean algorithm to compute greatest common divisors and Bézout's identity. In particular, the existence of efficient algorithms for Euclidean division of integers and of polynomials in one variable over a field is of basic importance in computer algebra.
So, given an integral domain R, it is often very useful to know that R has a Euclidean function: in particular, this implies that R is a PID. However, if there is no "obvious" Euclidean function, then determining whether R is a PID is generally a much easier problem than determining whether it is a Euclidean domain.
Euclidean domains appear in the following chain of class inclusions:
rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields ⊃ algebraically closed fields
Definition.
Let R be an integral domain. A Euclidean function on R is a function f from "R" \ {0} to the non-negative integers satisfying the following fundamental division-with-remainder property:
(EF1) If a and b are in R and b is nonzero, then there exist q and r in R such that "a" = "bq" + "r" and either "r" = 0 or "f" ("r") < "f" ("b").
A Euclidean domain is an integral domain which can be endowed with at least one Euclidean function. A particular Euclidean function f is "not" part of the definition of a Euclidean domain, as, in general, a Euclidean domain may admit many different Euclidean functions.
In this context, q and r are called respectively a "quotient" and a "remainder" of the "division" (or "Euclidean division") of a by b. In contrast with the case of integers and polynomials, the quotient is generally not uniquely defined, but when a quotient has been chosen, the remainder is uniquely defined.
Most algebra texts require a Euclidean function to have the following additional property:
(EF2) For all nonzero a and b in R, "f" ("a") ≤ "f" ("ab").
However, one can show that (EF1) alone suffices to define a Euclidean domain; if an integral domain R is endowed with a function g satisfying (EF1), then R can also be endowed with a function satisfying both (EF1) and (EF2) simultaneously. Indeed, for a in "R" \ {0}, one can define "f" ("a") as follows:
formula_0
In words, one may define "f" ("a") to be the minimum value attained by g on the set of all non-zero elements of the principal ideal generated by a.
A Euclidean function f is multiplicative if "f" ("ab") = "f" ("a") "f" ("b") and "f" ("a") is never zero. It follows that "f" (1) = 1. More generally, "f" ("a") = 1 if and only if a is a unit.
Notes on the definition.
Many authors use other terms in place of "Euclidean function", such as "degree function", "valuation function", "gauge function" or "norm function". Some authors also require the domain of the Euclidean function to be the entire ring R; however, this does not essentially affect the definition, since (EF1) does not involve the value of "f" (0). The definition is sometimes generalized by allowing the Euclidean function to take its values in any well-ordered set; this weakening does not affect the most important implications of the Euclidean property.
The property (EF1) can be restated as follows: for any principal ideal I of R with nonzero generator b, all nonzero classes of the quotient ring "R"/"I" have a representative r with "f" ("r") < "f" ("b"). Since the possible values of f are well-ordered, this property can be established by showing that "f" ("r") < "f" ("b") for any "r" ∉ "I" with minimal value of "f" ("r") in its class. Note that, for a Euclidean function that is so established, there need not exist an effective method to determine q and r in (EF1).
Examples.
Examples of Euclidean domains include:
Any field. Define "f" ("x") = 1 for all nonzero x.
Z, the ring of integers. Define "f" ("n") = |"n"|, the absolute value of n.
Z["i"], the ring of Gaussian integers. Define "f" ("a" + "bi") = "a"2 + "b"2, the norm of the Gaussian integer "a" + "bi" (a worked division sketch follows this list).
Z[ω], the ring of Eisenstein integers, where ω is a primitive cube root of unity. Define "f" ("a" + "b"ω) = "a"2 − "ab" + "b"2, the norm of the Eisenstein integer "a" + "b"ω.
"K"["X"], the ring of polynomials over a field K. For each nonzero polynomial P, define "f" ("P") to be the degree of P.
"K"[["X"]], the ring of formal power series over the field K. For each nonzero power series P, define "f" ("P") to be the order of P, that is, the degree of the smallest power of X occurring in P.
Any discrete valuation ring. Define "f" ("x") to be the exponent of the highest power of the maximal ideal "M" containing x; equivalently, if g is a generator of M and v is the unique integer such that "g""v" is an associate of x, define "f" ("x") = "v". The previous example "K"[["X"]] is a special case of this.
A Dedekind domain with finitely many nonzero prime ideals "P"1, ..., "P""n". Define formula_1, where "v""i" denotes the valuation associated with the prime ideal "P""i".
Examples of domains that are "not" Euclidean domains include every domain that is not a principal ideal domain, such as the ring of polynomials in at least two indeterminates over a field or the ring of univariate polynomials with integer coefficients, as well as certain principal ideal domains, such as the ring of integers of formula_7 (see below).
Properties.
Let "R" be a domain and "f" a Euclidean function on "R". Then:
However, in many finite extensions of Q with trivial class group, the ring of integers is Euclidean (not necessarily with respect to the absolute value of the field norm; see below).
Assuming the extended Riemann hypothesis, if "K" is a finite extension of Q and the ring of integers of "K" is a PID with an infinite number of units, then the ring of integers is Euclidean.
In particular this applies to the case of totally real quadratic number fields with trivial class group.
In addition (and without assuming ERH), if the field "K" is a Galois extension of Q, has trivial class group and unit rank strictly greater than three, then the ring of integers is Euclidean.
An immediate corollary of this is that if the number field is Galois over Q, its class group is trivial and the extension has degree greater than 8 then the ring of integers is necessarily Euclidean.
Norm-Euclidean fields.
Algebraic number fields "K" come with a canonical norm function on them: the absolute value of the field norm "N" that takes an algebraic element "α" to the product of all the conjugates of "α". This norm maps the ring of integers of a number field "K", say "O""K", to the nonnegative rational integers, so it is a candidate to be a Euclidean norm on this ring. If this norm satisfies the axioms of a Euclidean function then the number field "K" is called "norm-Euclidean" or simply "Euclidean". Strictly speaking it is the ring of integers that is Euclidean since fields are trivially Euclidean domains, but the terminology is standard.
If a field is not norm-Euclidean then that does not mean the ring of integers is not Euclidean, just that the field norm does not satisfy the axioms of a Euclidean function. In fact, the rings of integers of number fields may be divided in several classes:
Those that are not principal and therefore not Euclidean, such as the integers of formula_6
Those that are principal and not Euclidean, such as the integers of formula_7
Those that are Euclidean and not norm-Euclidean, such as the integers of formula_8
Those that are norm-Euclidean, such as the Gaussian integers (integers of formula_9)
The norm-Euclidean quadratic fields have been fully classified; they are formula_5 where formula_10 takes the values
−11, −7, −3, −2, −1, 2, 3, 5, 6, 7, 11, 13, 17, 19, 21, 29, 33, 37, 41, 57, 73 (sequence in the OEIS).
Every Euclidean imaginary quadratic field is norm-Euclidean and is one of the five first fields in the preceding list. | [
{
"math_id": 0,
"text": "f(a) = \\min_{x \\in R \\setminus \\{0\\}} g(xa)"
},
{
"math_id": 1,
"text": "f(x) = \\sum_{i=1}^n v_i(x)"
},
{
"math_id": 2,
"text": "p\\in A"
},
{
"math_id": 3,
"text": "A^\\times\\to(A/p)^\\times"
},
{
"math_id": 4,
"text": "A\\to A/p"
},
{
"math_id": 5,
"text": "\\mathbf{Q}(\\sqrt{d}\\,)"
},
{
"math_id": 6,
"text": "\\mathbf{Q}(\\sqrt{-5}\\,)"
},
{
"math_id": 7,
"text": "\\mathbf{Q}(\\sqrt{-19}\\,)"
},
{
"math_id": 8,
"text": "\\mathbf{Q}(\\sqrt{69}\\,)"
},
{
"math_id": 9,
"text": "\\mathbf{Q}(\\sqrt{-1}\\,)"
},
{
"math_id": 10,
"text": "d"
}
] | https://en.wikipedia.org/wiki?curid=10376 |
10377 | Euclidean algorithm | Algorithm for computing greatest common divisors
In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers (numbers), the largest number that divides them both without a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his "Elements" (c. 300 BC).
It is an example of an "algorithm", a step-by-step procedure for performing a calculation according to well-defined rules,
and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations.
The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, 21 is the GCD of 252 and 105 (as 252 = 21 × 12 and 105 = 21 × 5), and the same number 21 is also the GCD of 105 and 252 − 105 = 147. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, that number is the GCD of the original two numbers. By reversing the steps or using the extended Euclidean algorithm, the GCD can be expressed as a linear combination of the two original numbers, that is the sum of the two numbers, each multiplied by an integer (for example, 21 = 5 × 105 + (−2) × 252). The fact that the GCD can always be expressed in this way is known as Bézout's identity.
The version of the Euclidean algorithm described above—which follows Euclid's original presentation—can take many subtraction steps to find the GCD when one of the given numbers is much bigger than the other. A more efficient version of the algorithm shortcuts these steps, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two (with this version, the algorithm stops when reaching a zero remainder). With this improvement, the algorithm never requires more steps than five times the number of digits (base 10) of the smaller integer. This was proven by Gabriel Lamé in 1844 (Lamé's Theorem), and marks the beginning of computational complexity theory. Additional methods for improving the algorithm's efficiency were developed in the 20th century.
The Euclidean algorithm has many theoretical and practical applications. It is used for reducing fractions to their simplest form and for performing division in modular arithmetic. Computations using this algorithm form part of the cryptographic protocols that are used to secure internet communications, and in methods for breaking these cryptosystems by factoring large composite numbers. The Euclidean algorithm may be used to solve Diophantine equations, such as finding numbers that satisfy multiple congruences according to the Chinese remainder theorem, to construct continued fractions, and to find accurate rational approximations to real numbers. Finally, it can be used as a basic tool for proving theorems in number theory such as Lagrange's four-square theorem and the uniqueness of prime factorizations.
The original algorithm was described only for natural numbers and geometric lengths (real numbers), but the algorithm was generalized in the 19th century to other types of numbers, such as Gaussian integers and polynomials of one variable. This led to modern abstract algebraic notions such as Euclidean domains.
Background: greatest common divisor.
The Euclidean algorithm calculates the greatest common divisor (GCD) of two natural numbers a and b. The greatest common divisor g is the largest natural number that divides both a and b without leaving a remainder. Synonyms for GCD include "greatest common factor" (GCF), "highest common factor" (HCF), "highest common divisor" (HCD), and "greatest common measure" (GCM). The greatest common divisor is often written as gcd("a", "b") or, more simply, as ("a", "b"), although the latter notation is ambiguous, also used for concepts such as an ideal in the ring of integers, which is closely related to GCD.
If gcd("a", "b")
1, then a and b are said to be coprime (or relatively prime). This property does not imply that a or b are themselves prime numbers. For example, 6 and 35 factor as 6 = 2 × 3 and 35 = 5 × 7, so they are not prime, but their prime factors are different, so 6 and 35 are coprime, with no common factors other than 1.
Let "g"
gcd("a", "b"). Since a and b are both multiples of g, they can be written "a"
"mg" and "b"
"ng", and there is no larger number "G" > "g" for which this is true. The natural numbers m and n must be coprime, since any common factor could be factored out of m and n to make g greater. Thus, any other number c that divides both a and b must also divide g. The greatest common divisor g of a and b is the unique (positive) common divisor of a and b that is divisible by any other common divisor c.
The greatest common divisor can be visualized as follows. Consider a rectangular area a by b, and any common divisor c that divides both a and b exactly. The sides of the rectangle can be divided into segments of length c, which divides the rectangle into a grid of squares of side length c. The GCD g is the largest value of c for which this is possible. For illustration, a 24×60 rectangular area can be divided into a grid of: 1×1 squares, 2×2 squares, 3×3 squares, 4×4 squares, 6×6 squares or 12×12 squares. Therefore, 12 is the GCD of 24 and 60. A 24×60 rectangular area can be divided into a grid of 12×12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5).
The greatest common divisor of two numbers a and b is the product of the prime factors shared by the two numbers, where each prime factor can be repeated as many times as it divides both a and b. For example, since 1386 can be factored into 2 × 3 × 3 × 7 × 11, and 3213 can be factored into 3 × 3 × 3 × 7 × 17, the GCD of 1386 and 3213 equals 63 = 3 × 3 × 7, the product of their shared prime factors (with 3 repeated since 3 × 3 divides both). If two numbers have no common prime factors, their GCD is 1 (obtained here as an instance of the empty product); in other words, they are coprime. A key advantage of the Euclidean algorithm is that it can find the GCD efficiently without having to compute the prime factors. Factorization of large integers is believed to be a computationally very difficult problem, and the security of many widely used cryptographic protocols is based upon its infeasibility.
Another definition of the GCD is helpful in advanced mathematics, particularly ring theory. The greatest common divisor g of two nonzero numbers a and b is also their smallest positive integral linear combination, that is, the smallest positive number of the form "ua" + "vb" where u and v are integers. The set of all integral linear combinations of a and b is actually the same as the set of all multiples of g (mg, where m is an integer). In modern mathematical language, the ideal generated by a and b is the ideal generated by g alone (an ideal generated by a single element is called a principal ideal, and all ideals of the integers are principal ideals). Some properties of the GCD are in fact easier to see with this description, for instance the fact that any common divisor of a and b also divides the GCD (it divides both terms of "ua" + "vb"). The equivalence of this GCD definition with the other definitions is described below.
The GCD of three or more numbers equals the product of the prime factors common to all the numbers, but it can also be calculated by repeatedly taking the GCDs of pairs of numbers. For example,
gcd("a", "b", "c") = gcd("a", gcd("b", "c")) = gcd(gcd("a", "b"), "c") = gcd(gcd("a", "c"), "b").
Thus, Euclid's algorithm, which computes the GCD of two integers, suffices to calculate the GCD of arbitrarily many integers.
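In code this amounts to folding the two-argument GCD over the list, as in the short Python sketch below (the helper name is illustrative); in recent Python versions (3.9 and later), math.gcd also accepts more than two arguments directly.

from functools import reduce
from math import gcd

def gcd_of_list(numbers):
    # gcd(a, b, c, ...) computed as gcd(gcd(a, b), c), and so on, left to right.
    return reduce(gcd, numbers)

print(gcd_of_list([1386, 3213, 63]))   # 63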
Description.
Procedure.
The Euclidean algorithm can be thought of as constructing a sequence of non-negative integers that begins with the two given integers formula_0 and formula_1 and will eventually terminate with the integer zero: formula_2 with formula_3. The integer formula_4 will then be the GCD and we can state formula_5. The algorithm indicates how to construct the intermediate remainders formula_6 via division-with-remainder on the preceding pair formula_7 by finding an integer quotient formula_8 so that:
formula_9
Because the sequence of non-negative integers formula_10 is strictly decreasing, it eventually must terminate. In other words, since formula_11 for every formula_12, and each formula_6 is an integer that is strictly smaller than the preceding formula_13, there eventually cannot be a non-negative integer smaller than zero, and hence the algorithm must terminate. In fact, the algorithm will always terminate at the n-th step with formula_14 equal to zero.
To illustrate, suppose the GCD of 1071 and 462 is requested. The sequence is initially formula_15 and in order to find formula_16, we need to find integers formula_17 and formula_18 such that:
formula_19.
This is the quotient formula_20 since formula_21. This determines formula_22 and so the sequence is now formula_23. The next step is to continue the sequence to find formula_24 by finding integers formula_25 and formula_26 such that:
formula_27.
This is the quotient formula_28 since formula_29. This determines formula_30 and so the sequence is now formula_31. The next step is to continue the sequence to find formula_32 by finding integers formula_33 and formula_34 such that:
formula_35.
This is the quotient formula_36 since formula_37. This determines formula_38 and so the sequence is completed as formula_39 as no further non-negative integer smaller than formula_40 can be found. The penultimate remainder formula_41 is therefore the requested GCD:
formula_42
We can generalize slightly by dropping any ordering requirement on the initial two values formula_43 and formula_44. If formula_45, the algorithm may continue and trivially find that formula_46 as the sequence of remainders will be formula_47. If formula_48, then we can also continue since formula_49, suggesting the next remainder should be formula_43 itself, and the sequence is formula_50. Normally, this would be invalid because it breaks the requirement formula_18 but now we have formula_48 by construction, so the requirement is automatically satisfied and the Euclidean algorithm can continue as normal. Therefore, dropping any ordering between the first two integers does not affect the conclusion that the sequence must eventually terminate because the next remainder will always satisfy formula_51 and everything continues as above. The only modifications that need to be made are that formula_52 only for formula_53, and that the sub-sequence of non-negative integers formula_54 for formula_53 is strictly decreasing, therefore excluding formula_55 from both statements.
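The construction of this remainder sequence is easy to trace in code. The following Python sketch (function name and output format chosen for illustration) prints each quotient and remainder for the 1071 and 462 example above:

def euclid_trace(a, b):
    # Print each division step a = q*b + r and return the last nonzero remainder.
    step = 0
    while b != 0:
        q, r = divmod(a, b)            # quotient and remainder of a divided by b
        print(f"step {step}: {a} = {q} * {b} + {r}")
        a, b = b, r
        step += 1
    return a                           # the last nonzero remainder is the GCD

print("gcd =", euclid_trace(1071, 462))
# step 0: 1071 = 2 * 462 + 147
# step 1: 462 = 3 * 147 + 21
# step 2: 147 = 7 * 21 + 0
# gcd = 21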
Proof of validity.
The validity of the Euclidean algorithm can be proven by a two-step argument. In the first step, the final nonzero remainder "r""N"−1 is shown to divide both "a" and "b". Since it is a common divisor, it must be less than or equal to the greatest common divisor "g". In the second step, it is shown that any common divisor of "a" and "b", including "g", must divide "r""N"−1; therefore, "g" must be less than or equal to "r""N"−1. These two opposite inequalities imply "r""N"−1 = "g".
To demonstrate that "r""N"−1 divides both "a" and "b" (the first step), "r""N"−1 divides its predecessor "r""N"−2
"r""N"−2 = "q""N" "r""N"−1
since the final remainder "r""N" is zero. "r""N"−1 also divides its next predecessor "r""N"−3
"r""N"−3 = "q""N"−1 "r""N"−2 + "r""N"−1
because it divides both terms on the right-hand side of the equation. Iterating the same argument, "r""N"−1 divides all the preceding remainders, including "a" and "b". None of the preceding remainders "r""N"−2, "r""N"−3, etc. divide "a" and "b", since they leave a remainder. Since "r""N"−1 is a common divisor of "a" and "b", "r""N"−1 ≤ "g".
In the second step, any natural number "c" that divides both "a" and "b" (in other words, any common divisor of "a" and "b") divides the remainders "r""k". By definition, "a" and "b" can be written as multiples of "c" : "a" = "mc" and "b" = "nc", where "m" and "n" are natural numbers. Therefore, "c" divides the initial remainder "r"0, since "r"0 = "a" − "q"0"b" = "mc" − "q"0"nc" = ("m" − "q"0"n")"c". An analogous argument shows that "c" also divides the subsequent remainders "r"1, "r"2, etc. Therefore, the greatest common divisor "g" must divide "r""N"−1, which implies that "g" ≤ "r""N"−1. Since the first part of the argument showed the reverse ("r""N"−1 ≤ "g"), it follows that "g" = "r""N"−1. Thus, "g" is the greatest common divisor of all the succeeding pairs:
"g" = gcd("a", "b") = gcd("b", "r"0) = gcd("r"0, "r"1) = … = gcd("r""N"−2, "r""N"−1) = "r""N"−1.
Worked example.
For illustration, the Euclidean algorithm can be used to find the greatest common divisor of "a" = 1071 and "b" = 462. To begin, multiples of 462 are subtracted from 1071 until the remainder is less than 462. Two such multiples can be subtracted ("q"0 = 2), leaving a remainder of 147:
1071 = 2 × 462 + 147.
Then multiples of 147 are subtracted from 462 until the remainder is less than 147. Three multiples can be subtracted ("q"1 = 3), leaving a remainder of 21:
462 = 3 × 147 + 21.
Then multiples of 21 are subtracted from 147 until the remainder is less than 21. Seven multiples can be subtracted ("q"2 = 7), leaving no remainder:
147 = 7 × 21 + 0.
Since the last remainder is zero, the algorithm ends with 21 as the greatest common divisor of 1071 and 462. This agrees with the gcd(1071, 462) found by prime factorization (1071 = 3 × 3 × 7 × 17 and 462 = 2 × 3 × 7 × 11 share the factors 3 and 7, whose product is 21). In tabular form, the steps are:
Step 0: 1071 = 2 × 462 + 147 (quotient "q"0 = 2, remainder "r"0 = 147)
Step 1: 462 = 3 × 147 + 21 (quotient "q"1 = 3, remainder "r"1 = 21)
Step 2: 147 = 7 × 21 + 0 (quotient "q"2 = 7, remainder "r"2 = 0; the algorithm ends)
Visualization.
The Euclidean algorithm can be visualized in terms of the tiling analogy given above for the greatest common divisor. Assume that we wish to cover an "a"×"b" rectangle with square tiles exactly, where "a" is the larger of the two numbers. We first attempt to tile the rectangle using "b"×"b" square tiles; however, this leaves an "r"0×"b" residual rectangle untiled, where "r"0 < "b". We then attempt to tile the residual rectangle with "r"0×"r"0 square tiles. This leaves a second residual rectangle "r"1×"r"0, which we attempt to tile using "r"1×"r"1 square tiles, and so on. The sequence ends when there is no residual rectangle, i.e., when the square tiles cover the previous residual rectangle exactly. The length of the sides of the smallest square tile is the GCD of the dimensions of the original rectangle. For example, the smallest square tile in the adjacent figure is 21×21 (shown in red), and 21 is the GCD of 1071 and 462, the dimensions of the original rectangle (shown in green).
Euclidean division.
At every step "k", the Euclidean algorithm computes a quotient "q""k" and remainder "r""k" from two numbers "r""k"−1 and "r""k"−2
"r""k"−2 = "q""k" "r""k"−1 + "r""k"
where the "r""k" is non-negative and is strictly less than the absolute value of "r""k"−1. The theorem which underlies the definition of the Euclidean division ensures that such a quotient and remainder always exist and are unique.
In Euclid's original version of the algorithm, the quotient and remainder are found by repeated subtraction; that is, "r""k"−1 is subtracted from "r""k"−2 repeatedly until the remainder "r""k" is smaller than "r""k"−1. After that "r""k" and "r""k"−1 are exchanged and the process is iterated. Euclidean division reduces all the steps between two exchanges into a single step, which is thus more efficient. Moreover, the quotients are not needed, thus one may replace Euclidean division by the modulo operation, which gives only the remainder. Thus the iteration of the Euclidean algorithm becomes simply
"r""k" = "r""k"−2 mod "r""k"−1.
Implementations.
Implementations of the algorithm may be expressed in pseudocode. For example, the division-based version may be programmed as
function gcd(a, b)
while b ≠ 0
t := b
b := a mod b
a := t
return a
At the beginning of the "k"th iteration, the variable "b" holds the latest remainder "r""k"−1, whereas the variable "a" holds its predecessor, "r""k"−2. The step "b" := "a" mod "b" is equivalent to the above recursion formula "r""k" ≡ "r""k"−2 mod "r""k"−1. The temporary variable "t" holds the value of "r""k"−1 while the next remainder "r""k" is being calculated. At the end of the loop iteration, the variable "b" holds the remainder "r""k", whereas the variable "a" holds its predecessor, "r""k"−1.
In the subtraction-based version, which was Euclid's original version, the remainder calculation (b := a mod b) is replaced by repeated subtraction. Contrary to the division-based version, which works with arbitrary integers as input, the subtraction-based version supposes that the input consists of positive integers and stops when "a" = "b":
function gcd(a, b)
while a ≠ b
if a > b
a := a − b
else
b := b − a
return a
The variables "a" and "b" alternate holding the previous remainders "r""k"−1 and "r""k"−2. Assume that "a" is larger than "b" at the beginning of an iteration; then "a" equals "r""k"−2, since "r""k"−2 > "r""k"−1. During the loop iteration, "a" is reduced by multiples of the previous remainder "b" until "a" is smaller than "b". Then "a" is the next remainder "r""k". Then "b" is reduced by multiples of "a" until it is again smaller than "a", giving the next remainder "r""k"+1, and so on.
The recursive version is based on the equality of the GCDs of successive remainders and the stopping condition gcd("r""N"−1, 0) = "r""N"−1.
function gcd(a, b)
if b = 0
return a
else
return gcd(b, a mod b)
For illustration, the gcd(1071, 462) is calculated from the equivalent gcd(462, 1071 mod 462) = gcd(462, 147). The latter GCD is calculated from the gcd(147, 462 mod 147) = gcd(147, 21), which in turn is calculated from the gcd(21, 147 mod 21) = gcd(21, 0) = 21.
Method of least absolute remainders.
In another version of Euclid's algorithm, the quotient at each step is increased by one if the resulting negative remainder is smaller in magnitude than the typical positive remainder. Previously, the equation
"r""k"−2 = "q""k" "r""k"−1 + "r""k"
assumed that |"r""k"−1| > "r""k" > 0. However, an alternative negative remainder "e""k" can be computed:
"r""k"−2 = ("q""k" + 1) "r""k"−1 + "e""k"
if "r""k"−1 > 0 or
"r""k"−2 = ("q""k" − 1) "r""k"−1 + "e""k"
if "r""k"−1 < 0.
If "r""k" is replaced by "e""k". when |"e""k"| < |"r""k"|, then one gets a variant of Euclidean algorithm such that
|"r""k"| ≤ |"r""k"−1| / 2
at each step.
Leopold Kronecker has shown that this version requires the fewest steps of any version of Euclid's algorithm. More generally, it has been proven that, for any input numbers "a" and "b", the number of steps is minimal if and only if "q""k" is chosen in order that formula_56 where formula_57 is the golden ratio.
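A Python sketch of this least-absolute-remainder variant follows (function name illustrative). It replaces the ordinary remainder by the negative one whenever that is smaller in magnitude, so the remainders at least halve in absolute value at every step:

def gcd_least_absolute_remainder(a, b):
    # Euclid's algorithm with remainders of least absolute value,
    # so that |r_k| <= |r_(k-1)| / 2 at each step.
    while b != 0:
        m = abs(b)
        r = a % m                      # ordinary remainder, 0 <= r < |b|
        if 2 * r > m:                  # the negative remainder r - |b| is smaller in magnitude
            r -= m
        a, b = b, r
    return abs(a)

print(gcd_least_absolute_remainder(1071, 462))   # 21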
Historical development.
The Euclidean algorithm is one of the oldest algorithms in common use. It appears in Euclid's "Elements" (c. 300 BC), specifically in Book 7 (Propositions 1–2) and Book 10 (Propositions 2–3). In Book 7, the algorithm is formulated for integers, whereas in Book 10, it is formulated for lengths of line segments. (In modern usage, one would say it was formulated there for real numbers. But lengths, areas, and volumes, represented as real numbers in modern usage, are not measured in the same units and there is no natural unit of length, area, or volume; the concept of real numbers was unknown at that time.) The latter algorithm is geometrical. The GCD of two lengths "a" and "b" corresponds to the greatest length "g" that measures "a" and "b" evenly; in other words, the lengths "a" and "b" are both integer multiples of the length "g".
The algorithm was probably not discovered by Euclid, who compiled results from earlier mathematicians in his "Elements". The mathematician and historian B. L. van der Waerden suggests that Book VII derives from a textbook on number theory written by mathematicians in the school of Pythagoras. The algorithm was probably known by Eudoxus of Cnidus (about 375 BC). The algorithm may even pre-date Eudoxus, judging from the use of the technical term ἀνθυφαίρεσις ("anthyphairesis", reciprocal subtraction) in works by Euclid and Aristotle.
Centuries later, Euclid's algorithm was discovered independently both in India and in China, primarily to solve Diophantine equations that arose in astronomy and making accurate calendars. In the late 5th century, the Indian mathematician and astronomer Aryabhata described the algorithm as the "pulverizer", perhaps because of its effectiveness in solving Diophantine equations. Although a special case of the Chinese remainder theorem had already been described in the Chinese book "Sunzi Suanjing", the general solution was published by Qin Jiushao in his 1247 book "Shushu Jiuzhang" (數書九章 "Mathematical Treatise in Nine Sections"). The Euclidean algorithm was first described "numerically" and popularized in Europe in the second edition of Bachet's "Problèmes plaisants et délectables" ("Pleasant and enjoyable problems", 1624). In Europe, it was likewise used to solve Diophantine equations and in developing continued fractions. The extended Euclidean algorithm was published by the English mathematician Nicholas Saunderson, who attributed it to Roger Cotes as a method for computing continued fractions efficiently.
In the 19th century, the Euclidean algorithm led to the development of new number systems, such as Gaussian integers and Eisenstein integers. In 1815, Carl Gauss used the Euclidean algorithm to demonstrate unique factorization of Gaussian integers, although his work was first published in 1832. Gauss mentioned the algorithm in his "Disquisitiones Arithmeticae" (published 1801), but only as a method for continued fractions. Peter Gustav Lejeune Dirichlet seems to have been the first to describe the Euclidean algorithm as the basis for much of number theory. Lejeune Dirichlet noted that many results of number theory, such as unique factorization, would hold true for any other system of numbers to which the Euclidean algorithm could be applied. Lejeune Dirichlet's lectures on number theory were edited and extended by Richard Dedekind, who used Euclid's algorithm to study algebraic integers, a new general type of number. For example, Dedekind was the first to prove Fermat's two-square theorem using the unique factorization of Gaussian integers. Dedekind also defined the concept of a Euclidean domain, a number system in which a generalized version of the Euclidean algorithm can be defined (as described below). In the closing decades of the 19th century, the Euclidean algorithm gradually became eclipsed by Dedekind's more general theory of ideals.
Other applications of Euclid's algorithm were developed in the 19th century. In 1829, Charles Sturm showed that the algorithm was useful in the Sturm chain method for counting the real roots of polynomials in any given interval.
The Euclidean algorithm was the first integer relation algorithm, which is a method for finding integer relations between commensurate real numbers. Several novel integer relation algorithms have been developed, such as the algorithm of Helaman Ferguson and R.W. Forcade (1979) and the LLL algorithm.
In 1969, Cole and Davie developed a two-player game based on the Euclidean algorithm, called "The Game of Euclid", which has an optimal strategy. The players begin with two piles of "a" and "b" stones. The players take turns removing "m" multiples of the smaller pile from the larger. Thus, if the two piles consist of "x" and "y" stones, where "x" is larger than "y", the next player can reduce the larger pile from "x" stones to "x" − "my" stones, as long as the latter is a nonnegative integer. The winner is the first player to reduce one pile to zero stones.
Mathematical applications.
Bézout's identity.
Bézout's identity states that the greatest common divisor "g" of two integers "a" and "b" can be represented as a linear sum of the original two numbers "a" and "b". In other words, it is always possible to find integers "s" and "t" such that "g" = "sa" + "tb".
The integers "s" and "t" can be calculated from the quotients "q"0, "q"1, etc. by reversing the order of equations in Euclid's algorithm. Beginning with the next-to-last equation, "g" can be expressed in terms of the quotient "q""N"−1 and the two preceding remainders, "r""N"−2 and "r""N"−3:
"g" = "r""N"−1 = "r""N"−3 − "q""N"−1 "r""N"−2 .
Those two remainders can be likewise expressed in terms of their quotients and preceding remainders,
"r""N"−2 = "r""N"−4 − "q""N"−2 "r""N"−3 and
"r""N"−3 = "r""N"−5 − "q""N"−3 "r""N"−4 .
Substituting these formulae for "r""N"−2 and "r""N"−3 into the first equation yields "g" as a linear sum of the remainders "r""N"−4 and "r""N"−5. The process of substituting remainders by formulae involving their predecessors can be continued until the original numbers "a" and "b" are reached:
"r"2 = "r"0 − "q"2 "r"1
"r"1 = "b" − "q"1 "r"0
"r"0 = "a" − "q"0 "b".
After all the remainders "r"0, "r"1, etc. have been substituted, the final equation expresses "g" as a linear sum of "a" and "b", so that "g" = "sa" + "tb".
The Euclidean algorithm, and thus Bézout's identity, can be generalized to the context of Euclidean domains.
Principal ideals and related problems.
Bézout's identity provides yet another definition of the greatest common divisor "g" of two numbers "a" and "b". Consider the set of all numbers "ua" + "vb", where "u" and "v" are any two integers. Since "a" and "b" are both divisible by "g", every number in the set is divisible by "g". In other words, every number of the set is an integer multiple of "g". This is true for every common divisor of "a" and "b". However, unlike other common divisors, the greatest common divisor is a member of the set; by Bézout's identity, choosing "u" = "s" and "v" = "t" gives "g". A smaller common divisor cannot be a member of the set, since every member of the set must be divisible by "g". Conversely, any multiple "m" of "g" can be obtained by choosing "u" = "ms" and "v" = "mt", where "s" and "t" are the integers of Bézout's identity. This may be seen by multiplying Bézout's identity by "m",
"mg" = "msa" + "mtb".
Therefore, the set of all numbers "ua" + "vb" is equivalent to the set of multiples "m" of "g". In other words, the set of all possible sums of integer multiples of two numbers ("a" and "b") is equivalent to the set of multiples of gcd("a", "b"). The GCD is said to be the generator of the ideal of "a" and "b". This GCD definition led to the modern abstract algebraic concepts of a principal ideal (an ideal generated by a single element) and a principal ideal domain (a domain in which every ideal is a principal ideal).
Certain problems can be solved using this result. For example, consider two measuring cups of volume "a" and "b". By adding/subtracting "u" multiples of the first cup and "v" multiples of the second cup, any volume "ua" + "vb" can be measured out. These volumes are all multiples of "g" = gcd("a", "b").
Extended Euclidean algorithm.
The integers "s" and "t" of Bézout's identity can be computed efficiently using the extended Euclidean algorithm. This extension adds two recursive equations to Euclid's algorithm
"s""k" = "s""k"−2 − "q""k""s""k"−1
"t""k" = "t""k"−2 − "q""k""t""k"−1
with the starting values
"s"−2 = 1, "t"−2 = 0
"s"−1 = 0, "t"−1 = 1.
Using this recursion, Bézout's integers "s" and "t" are given by "s" = "s""N"−1 and "t" = "t""N"−1, where "N" is the step on which the algorithm terminates with "r""N" = 0.
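The recursion translates directly into code. The following Python sketch (function name illustrative) carries the coefficient pairs along with the remainders and returns g, s and t with g = sa + tb:

def extended_gcd(a, b):
    # Maintain r = s*a + t*b for every remainder r in the Euclidean algorithm,
    # starting from a = 1*a + 0*b and b = 0*a + 1*b.
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r    # next remainder
        old_s, s = s, old_s - q * s    # s_k = s_(k-2) - q_k * s_(k-1)
        old_t, t = t, old_t - q * t    # t_k = t_(k-2) - q_k * t_(k-1)
    return old_r, old_s, old_t         # gcd(a, b) and the Bezout coefficients

g, s, t = extended_gcd(1071, 462)
print(g, s, t)                         # 21 -3 7, and indeed -3*1071 + 7*462 = 21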
The validity of this approach can be shown by induction. Assume that the recursion formula is correct up to step "k" − 1 of the algorithm; in other words, assume that
"r""j" = "s""j" "a" + "t""j" "b"
for all "j" less than "k". The "k"th step of the algorithm gives the equation
"r""k" = "r""k"−2 − "q""k""r""k"−1.
Since the recursion formula has been assumed to be correct for "r""k"−2 and "r""k"−1, they may be expressed in terms of the corresponding "s" and "t" variables
"r""k" = ("s""k"−2 "a" + "t""k"−2 "b") − "q""k"("s""k"−1 "a" + "t""k"−1 "b").
Rearranging this equation yields the recursion formula for step "k", as required
"r""k" = "s""k" "a" + "t""k" "b" = ("s""k"−2 − "q""k""s""k"−1) "a" + ("t""k"−2 − "q""k""t""k"−1) "b".
Matrix method.
The integers "s" and "t" can also be found using an equivalent matrix method. The sequence of equations of Euclid's algorithm
formula_58
can be written as a product of 2×2 quotient matrices multiplying a two-dimensional remainder vector
formula_59
Let M represent the product of all the quotient matrices
formula_60
This simplifies the Euclidean algorithm to the form
formula_61
To express "g" as a linear sum of "a" and "b", both sides of this equation can be multiplied by the inverse of the matrix M. The determinant of M equals (−1)"N"+1, since it equals the product of the determinants of the quotient matrices, each of which is negative one. Since the determinant of M is never zero, the vector of the final remainders can be solved using the inverse of M
formula_62
Since the top equation gives
"g" = (−1)"N"+1 ( "m"22 "a" − "m"12 "b"),
the two integers of Bézout's identity are "s" = (−1)"N"+1"m"22 and "t" = (−1)"N""m"12. The matrix method is as efficient as the equivalent recursion, with two multiplications and two additions per step of the Euclidean algorithm.
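A Python sketch of the matrix formulation follows (names illustrative). It accumulates the product of the 2×2 quotient matrices and then reads the Bézout coefficients off its inverse, using the fact that each quotient matrix has determinant −1:

def gcd_matrix_method(a, b):
    # Accumulate M = Q0 Q1 ... QN with Qk = [[q_k, 1], [1, 0]], so that the
    # original pair (a, b) equals M times (g, 0); det(M) = (-1)^(number of steps).
    m11, m12, m21, m22 = 1, 0, 0, 1    # start from the identity matrix
    steps = 0
    while b != 0:
        q, r = divmod(a, b)
        # Multiply M on the right by the quotient matrix [[q, 1], [1, 0]].
        m11, m12 = m11 * q + m12, m11
        m21, m22 = m21 * q + m22, m21
        a, b = b, r
        steps += 1
    g = a
    sign = (-1) ** steps               # the determinant of M
    s = sign * m22                     # read off the first row of the inverse of M
    t = -sign * m12
    return g, s, t

print(gcd_matrix_method(1071, 462))    # (21, -3, 7), matching the extended algorithm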
Euclid's lemma and unique factorization.
Bézout's identity is essential to many applications of Euclid's algorithm, such as demonstrating the unique factorization of numbers into prime factors. To illustrate this, suppose that a number "L" can be written as a product of two factors "u" and "v", that is, "L" = "uv". If another number "w" also divides "L" but is coprime with "u", then "w" must divide "v", by the following argument: If the greatest common divisor of "u" and "w" is 1, then integers "s" and "t" can be found such that
1 = "su" + "tw"
by Bézout's identity. Multiplying both sides by "v" gives the relation:
"v" = "suv" + "twv" = "sL" + "twv"
Since "w" divides both terms on the right-hand side, it must also divide the left-hand side, "v". This result is known as Euclid's lemma. Specifically, if a prime number divides "L", then it must divide at least one factor of "L". Conversely, if a number "w" is coprime to each of a series of numbers "a"1, "a"2, ..., "a""n", then "w" is also coprime to their product, "a"1 × "a"2 × ... × "a""n".
Euclid's lemma suffices to prove that every number has a unique factorization into prime numbers. To see this, assume the contrary, that there are two independent factorizations of "L" into "m" and "n" prime factors, respectively
"L" = "p"1"p"2…"p""m" = "q"1"q"2…"q""n" .
Since each prime "p" divides "L" by assumption, it must also divide one of the "q" factors; since each "q" is prime as well, it must be that "p" = "q". Iteratively dividing by the "p" factors shows that each "p" has an equal counterpart "q"; the two prime factorizations are identical except for their order. The unique factorization of numbers into primes has many applications in mathematical proofs, as shown below.
Linear Diophantine equations.
Diophantine equations are equations in which the solutions are restricted to integers; they are named after the 3rd-century Alexandrian mathematician Diophantus. A typical "linear" Diophantine equation seeks integers "x" and "y" such that
"ax" + "by" = "c"
where "a", "b" and "c" are given integers. This can be written as an equation for "x" in modular arithmetic:
"ax" ≡ "c" mod "b".
Let "g" be the greatest common divisor of "a" and "b". Both terms in "ax" + "by" are divisible by "g"; therefore, "c" must also be divisible by "g", or the equation has no solutions. By dividing both sides by "c"/"g", the equation can be reduced to Bezout's identity
"sa" + "tb" = "g"
where "s" and "t" can be found by the extended Euclidean algorithm. This provides one solution to the Diophantine equation, "x"1 = "s" ("c"/"g") and "y"1 = "t" ("c"/"g").
In general, a linear Diophantine equation has no solutions, or an infinite number of solutions. To find the latter, consider two solutions, ("x"1, "y"1) and ("x"2, "y"2), where
"ax"1 + "by"1 = "c" = "ax"2 + "by"2
or equivalently
"a"("x"1 − "x"2) = "b"("y"2 − "y"1).
Dividing this equation by "g" and using the fact that "a"/"g" and "b"/"g" are coprime shows that "b"/"g" must divide "x"1 − "x"2. Therefore, the smallest difference between two "x" solutions is "b"/"g", whereas the smallest difference between two "y" solutions is "a"/"g". Thus, the solutions may be expressed as
"x" = "x"1 − "bu"/"g"
"y" = "y"1 + "au"/"g".
By allowing "u" to vary over all possible integers, an infinite family of solutions can be generated from a single solution ("x"1, "y"1). If the solutions are required to be "positive" integers ("x" > 0, "y" > 0), only a finite number of solutions may be possible. This restriction on the acceptable solutions allows some systems of Diophantine equations with more unknowns than equations to have a finite number of solutions; this is impossible for a system of linear equations when the solutions can be any real number (see Underdetermined system).
Multiplicative inverses and the RSA algorithm.
A finite field is a set of numbers with four generalized operations. The operations are called addition, subtraction, multiplication and division and have their usual properties, such as commutativity, associativity and distributivity. An example of a finite field is the set of 13 numbers {0, 1, 2, ..., 12} using modular arithmetic. In this field, the result of any mathematical operation (addition, subtraction, multiplication, or division) is reduced modulo 13; that is, multiples of 13 are added or subtracted until the result is brought within the range 0–12. For example, the result of 5 × 7 = 35 mod 13 = 9. Such finite fields can be defined for any prime "p"; using more sophisticated definitions, they can also be defined for any power of a prime, "p""m". Finite fields are often called Galois fields, and are abbreviated as GF("p") or GF("p""m").
In such a field with "m" numbers, every nonzero element "a" has a unique modular multiplicative inverse, "a"−1 such that "aa"−1 = "a"−1"a" ≡ 1 mod "m". This inverse can be found by solving the congruence equation "ax" ≡ 1 mod "m", or the equivalent linear Diophantine equation
"ax" + "my" = 1.
This equation can be solved by the Euclidean algorithm, as described above. Finding multiplicative inverses is an essential step in the RSA algorithm, which is widely used in electronic commerce; specifically, the equation determines the integer used to decrypt the message. Although the RSA algorithm uses rings rather than fields, the Euclidean algorithm can still be used to find a multiplicative inverse where one exists. The Euclidean algorithm also has other applications in error-correcting codes; for example, it can be used as an alternative to the Berlekamp–Massey algorithm for decoding BCH and Reed–Solomon codes, which are based on Galois fields.
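In code, the modular inverse is a direct corollary of the extended Euclidean algorithm; the Python sketch below (name illustrative) reuses the extended_gcd sketch given earlier, and Python 3.8 and later also expose the same computation as pow(a, -1, m).

def modular_inverse(a, m):
    # Solve a*x ≡ 1 (mod m); an inverse exists exactly when gcd(a, m) = 1.
    g, x, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError("a is not invertible modulo m")
    return x % m

print(modular_inverse(5, 13))          # 8, since 5 * 8 = 40 ≡ 1 (mod 13)
print(pow(5, -1, 13))                  # built-in equivalent in Python 3.8 and later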
Chinese remainder theorem.
Euclid's algorithm can also be used to solve multiple linear Diophantine equations. Such equations arise in the Chinese remainder theorem, which describes a novel method to represent an integer "x". Instead of representing an integer by its digits, it may be represented by its remainders "x""i" modulo a set of "N" coprime numbers "m""i":
formula_63
The goal is to determine "x" from its "N" remainders "x""i". The solution is to combine the multiple equations into a single linear Diophantine equation with a much larger modulus "M" that is the product of all the individual moduli "m""i", and define "M""i" as
formula_64
Thus, each "M""i" is the product of all the moduli "except" "m""i". The solution depends on finding "N" new numbers "h""i" such that
formula_65
With these numbers "h""i", any integer "x" can be reconstructed from its remainders "x""i" by the equation
formula_66
Since these numbers "h""i" are the multiplicative inverses of the "M""i", they may be found using Euclid's algorithm as described in the previous subsection.
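The reconstruction can be written as a short Python sketch (names illustrative); the moduli must be pairwise coprime, and pow(., -1, .), available in Python 3.8 and later, plays the role of the Euclidean computation of the inverses h_i:

def chinese_remainder(remainders, moduli):
    # Reconstruct x modulo the product M from x ≡ x_i (mod m_i),
    # assuming the moduli m_i are pairwise coprime.
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for x_i, m_i in zip(remainders, moduli):
        M_i = M // m_i                 # product of all moduli except m_i
        h_i = pow(M_i, -1, m_i)        # multiplicative inverse of M_i modulo m_i
        x += x_i * M_i * h_i
    return x % M

print(chinese_remainder([2, 3, 2], [3, 5, 7]))   # 23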
Stern–Brocot tree.
The Euclidean algorithm can be used to arrange the set of all positive rational numbers into an infinite binary search tree, called the Stern–Brocot tree.
The number 1 (expressed as a fraction 1/1) is placed at the root of the tree, and the location of any other number "a"/"b" can be found by computing gcd("a","b") using the original form of the Euclidean algorithm, in which each step replaces the larger of the two given numbers by its difference with the smaller number (not its remainder), stopping when two equal numbers are reached. A step of the Euclidean algorithm that replaces the first of the two numbers corresponds to a step in the tree from a node to its right child, and a step that replaces the second of the two numbers corresponds to a step in the tree from a node to its left child. The sequence of steps constructed in this way does not depend on whether "a"/"b" is given in lowest terms, and forms a path from the root to a node containing the number "a"/"b". This fact can be used to prove that each positive rational number appears exactly once in this tree.
For example, 3/4 can be found by starting at the root, going to the left once, then to the right twice:
formula_67
The Euclidean algorithm has almost the same relationship to another binary tree on the rational numbers called the Calkin–Wilf tree. The difference is that the path is reversed: instead of producing a path from the root of the tree to a target, it produces a path from the target to the root.
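The path described above can be computed with the subtractive form of the algorithm, as in the following Python sketch (the output letters L and R, for left and right children, are an illustrative convention):

def stern_brocot_path(a, b):
    # Path from the root 1/1 to the fraction a/b in the Stern-Brocot tree,
    # read off from the subtractive Euclidean algorithm on the pair (a, b).
    path = []
    while a != b:
        if a > b:
            a -= b                     # replacing the first number: step to the right child
            path.append("R")
        else:
            b -= a                     # replacing the second number: step to the left child
            path.append("L")
    return "".join(path)

print(stern_brocot_path(3, 4))         # "LRR": left once, then right twice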
Continued fractions.
The Euclidean algorithm has a close relationship with continued fractions. The sequence of equations can be written in the form
formula_68
The last term on the right-hand side always equals the inverse of the left-hand side of the next equation. Thus, the first two equations may be combined to form
formula_69
The third equation may be used to substitute the denominator term "r"1/"r"0, yielding
formula_70
The final ratio of remainders "r""k"/"r""k"−1 can always be replaced using the next equation in the series, up to the final equation. The result is a continued fraction
formula_71
In the worked example above, the gcd(1071, 462) was calculated, and the quotients "q""k" were 2, 3 and 7, respectively. Therefore, the fraction 1071/462 may be written
formula_72
as can be confirmed by calculation.
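In code, the quotients produced by the algorithm are exactly the continued-fraction coefficients, as the following Python sketch shows (names illustrative); the second function evaluates the continued fraction back into a fraction as a check:

from fractions import Fraction

def continued_fraction(a, b):
    # The successive quotients of the Euclidean algorithm applied to (a, b)
    # are the coefficients of the continued fraction of a/b.
    coefficients = []
    while b != 0:
        q, r = divmod(a, b)
        coefficients.append(q)
        a, b = b, r
    return coefficients

def evaluate(coefficients):
    # Evaluate [q0; q1, q2, ...] from the innermost term outward.
    value = Fraction(coefficients[-1])
    for q in reversed(coefficients[:-1]):
        value = q + 1 / value
    return value

print(continued_fraction(1071, 462))   # [2, 3, 7]
print(evaluate([2, 3, 7]))             # 51/22, the reduced form of 1071/462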
Factorization algorithms.
Calculating a greatest common divisor is an essential step in several integer factorization algorithms, such as Pollard's rho algorithm, Shor's algorithm, Dixon's factorization method and the Lenstra elliptic curve factorization. The Euclidean algorithm may be used to find this GCD efficiently. Continued fraction factorization uses continued fractions, which are determined using Euclid's algorithm.
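As an illustration of the role of the GCD, the following Python sketch implements Pollard's rho method with the commonly used polynomial x² + 1; the gcd call is the Euclidean step that extracts a factor. The starting values and the example number are textbook choices rather than anything prescribed, and a production implementation would retry with different parameters when the method fails:

from math import gcd

def pollard_rho(n, c=1, x0=2):
    # Pollard's rho factorization: iterate x -> (x^2 + c) mod n at two speeds
    # and look for a nontrivial gcd with n.
    x, y, d = x0, x0, 1
    while d == 1:
        x = (x * x + c) % n            # "tortoise" advances one step
        y = (y * y + c) % n            # "hare" advances two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    return d                           # a nontrivial factor of n, or n if the attempt failed

print(pollard_rho(8051))               # 97, and 8051 = 83 * 97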
Algorithmic efficiency.
The computational efficiency of Euclid's algorithm has been studied thoroughly. This efficiency can be described by the number of division steps the algorithm requires, multiplied by the computational expense of each step. The first known analysis of Euclid's algorithm is due to A. A. L. Reynaud in 1811, who showed that the number of division steps on input ("u", "v") is bounded by "v"; later he improved this to "v"/2 + 2. Later, in 1841, P. J. E. Finck showed that the number of division steps is at most 2 log2 "v" + 1, and hence Euclid's algorithm runs in time polynomial in the size of the input. Émile Léger, in 1837, studied the worst case, which is when the inputs are consecutive Fibonacci numbers. Finck's analysis was refined by Gabriel Lamé in 1844, who showed that the number of steps required for completion is never more than five times the number "h" of base-10 digits of the smaller number "b".
In the uniform cost model (suitable for analyzing the complexity of gcd calculation on numbers that fit into a single machine word), each step of the algorithm takes constant time, and Lamé's analysis implies that the total running time is also "O"("h"). However, in a model of computation suitable for computation with larger numbers, the computational expense of a single remainder computation in the algorithm can be as large as "O"("h"2). In this case the total time for all of the steps of the algorithm can be analyzed using a telescoping series, showing that it is also "O"("h"2). Modern algorithmic techniques based on the Schönhage–Strassen algorithm for fast integer multiplication can be used to speed this up, leading to quasilinear algorithms for the GCD.
Number of steps.
The number of steps to calculate the GCD of two natural numbers, "a" and "b", may be denoted by "T"("a", "b"). If "g" is the GCD of "a" and "b", then "a" = "mg" and "b" = "ng" for two coprime numbers "m" and "n". Then
"T"("a", "b") = "T"("m", "n")
as may be seen by dividing all the steps in the Euclidean algorithm by "g". By the same argument, the number of steps remains the same if "a" and "b" are multiplied by a common factor "w": "T"("a", "b") = "T"("wa", "wb"). Therefore, the number of steps "T" may vary dramatically between neighboring pairs of numbers, such as T("a", "b") and T("a", "b" + 1), depending on the size of the two GCDs.
The recursive nature of the Euclidean algorithm gives another equation
"T"("a", "b") = 1 + "T"("b", "r"0) = 2 + "T"("r"0, "r"1) = … = "N" + "T"("r""N"−2, "r""N"−1) = "N" + 1
where "T"("x", 0) = 0 by assumption.
Worst-case.
If the Euclidean algorithm requires "N" steps for a pair of natural numbers "a" > "b" > 0, the smallest values of "a" and "b" for which this is true are the Fibonacci numbers "F""N"+2 and "F""N"+1, respectively. More precisely, if the Euclidean algorithm requires "N" steps for the pair "a" > "b", then one has "a" ≥ "F""N"+2 and "b" ≥ "F""N"+1. This can be shown by induction. If "N" = 1, "b" divides "a" with no remainder; the smallest natural numbers for which this is true is "b" = 1 and "a" = 2, which are "F"2 and "F"3, respectively. Now assume that the result holds for all values of "N" up to "M" − 1. The first step of the "M"-step algorithm is "a" = "q"0"b" + "r"0, and the Euclidean algorithm requires "M" − 1 steps for the pair "b" > "r"0. By induction hypothesis, one has "b" ≥ "F""M"+1 and "r"0 ≥ "F""M". Therefore, "a" = "q"0"b" + "r"0 ≥ "b" + "r"0 ≥ "F""M"+1 + "F""M" = "F""M"+2,
which is the desired inequality.
This proof, published by Gabriel Lamé in 1844, represents the beginning of computational complexity theory, and also the first practical application of the Fibonacci numbers.
This result suffices to show that the number of steps in Euclid's algorithm can never be more than five times the number of its digits (base 10). For if the algorithm requires "N" steps, then "b" is greater than or equal to "F""N"+1 which in turn is greater than or equal to "φ""N"−1, where "φ" is the golden ratio. Since "b" ≥ "φ""N"−1, then "N" − 1 ≤ log"φ""b". Since log10"φ" > 1/5, ("N" − 1)/5 < log10"φ" log"φ""b" = log10"b". Thus, "N" ≤ 5 log10"b". Thus, the Euclidean algorithm always needs less than "O"("h") divisions, where "h" is the number of digits in the smaller number "b".
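This worst case is easy to check numerically. The sketch below (names illustrative) counts division steps and confirms that the consecutive Fibonacci numbers 144 and 89 require ten steps, exactly the Lamé bound of five times the two digits of 89:

def division_steps(a, b):
    # Number of division steps taken by the Euclidean algorithm on (a, b).
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# F_12 = 144 and F_11 = 89 form the smallest pair needing 10 steps.
print(division_steps(144, 89), 5 * len(str(89)))   # 10 10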
Average.
The average number of steps taken by the Euclidean algorithm has been defined in three different ways. The first definition is the average time "T"("a") required to calculate the GCD of a given number "a" and a smaller natural number "b" chosen with equal probability from the integers 0 to "a" − 1
formula_73
However, since "T"("a", "b") fluctuates dramatically with the GCD of the two numbers, the averaged function "T"("a") is likewise "noisy".
To reduce this noise, a second average "τ"("a") is taken over all numbers coprime with "a"
formula_74
There are "φ"("a") coprime integers less than "a", where "φ" is Euler's totient function. This tau average grows smoothly with "a"
formula_75
with the residual error being of order "a"−(1/6) + "ε", where "ε" is an arbitrarily small positive number. The constant "C" in this formula is called Porter's constant and equals
formula_76
where "γ" is the Euler–Mascheroni constant and ζ' is the derivative of the Riemann zeta function. The leading coefficient (12/π2) ln 2 was determined by two independent methods.
Since the first average can be calculated from the tau average by summing over the divisors "d" of "a"
formula_77
it can be approximated by the formula
formula_78
where Λ("d") is the Mangoldt function.
A third average "Y"("n") is defined as the mean number of steps required when both "a" and "b" are chosen randomly (with uniform distribution) from 1 to "n"
formula_79
Substituting the approximate formula for "T"("a") into this equation yields an estimate for "Y"("n")
formula_80
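The estimate for "Y"("n") can be checked empirically. The short Python sketch below (an added illustration, with the arbitrary helper count_steps and a modest cutoff "n" = 300 chosen only to keep the run fast) compares the brute-force average with the leading-order formula quoted above:

```python
import math

def count_steps(a, b):
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

n = 300
empirical = sum(count_steps(a, b)
                for a in range(1, n + 1)
                for b in range(1, n + 1)) / n**2

# Leading-order estimate quoted above: (12 ln 2 / pi^2) ln n + 0.06
estimate = (12 * math.log(2) / math.pi**2) * math.log(n) + 0.06
print(f"Y({n}) ~ {empirical:.3f}, estimate ~ {estimate:.3f}")
```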
Computational expense per step.
In each step "k" of the Euclidean algorithm, the quotient "q""k" and remainder "r""k" are computed for a given pair of integers "r""k"−2 and "r""k"−1
"r""k"−2 = "q""k" "r""k"−1 + "r""k".
The computational expense per step is associated chiefly with finding "q""k", since the remainder "r""k" can be calculated quickly from "r""k"−2, "r""k"−1, and "q""k"
"r""k" = "r""k"−2 − "q""k" "r""k"−1.
The computational expense of dividing "h"-bit numbers scales as "O"("h"("ℓ"+1)), where ℓ is the length of the quotient.
For comparison, Euclid's original subtraction-based algorithm can be much slower. A single integer division is equivalent to "q" subtractions, where "q" is the quotient. If the ratio of "a" and "b" is very large, the quotient is large and many subtractions will be required. On the other hand, it has been shown that the quotients are very likely to be small integers. The probability of a given quotient "q" is approximately log2("u"/("u" − 1)) where "u" = ("q" + 1)2. For illustration, the probability of a quotient of 1, 2, 3, or 4 is roughly 41.5%, 17.0%, 9.3%, and 5.9%, respectively. Since the operation of subtraction is faster than division, particularly for large numbers, the subtraction-based Euclid's algorithm is competitive with the division-based version. This is exploited in the binary version of Euclid's algorithm.
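The quotient statistics can be sampled directly. The Python sketch below (an added illustration; the pair range up to 500 is an arbitrary choice) tallies the quotients produced over many pairs and compares the frequencies with the stated probabilities:

```python
import math
from collections import Counter

def quotients(a, b):
    """Yield the quotients q_k produced by the Euclidean algorithm on (a, b)."""
    while b != 0:
        q, r = divmod(a, b)
        yield q
        a, b = b, r

counts = Counter()
for a in range(2, 500):
    for b in range(1, a):
        counts.update(quotients(a, b))

total = sum(counts.values())
for q in (1, 2, 3, 4):
    u = (q + 1) ** 2
    predicted = math.log2(u / (u - 1))        # about 41.5%, 17.0%, 9.3%, 5.9%
    print(q, round(counts[q] / total, 3), round(predicted, 3))
```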
Combining the estimated number of steps with the estimated computational expense per step shows that Euclid's algorithm grows quadratically ("h"2) with the average number of digits "h" in the initial two numbers "a" and "b". Let "h"0, "h"1, ..., "h""N"−1 represent the number of digits in the successive remainders "r"0, "r"1, ..., "r""N"−1. Since the number of steps "N" grows linearly with "h", the running time is bounded by
formula_81
Alternative methods.
Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity. For comparison, the efficiency of alternatives to Euclid's algorithm may be determined.
One inefficient approach to finding the GCD of two natural numbers "a" and "b" is to calculate all their common divisors; the GCD is then the largest common divisor. The common divisors can be found by dividing both numbers by successive integers from 2 to the smaller number "b". The number of steps of this approach grows linearly with "b", or exponentially in the number of digits. Another inefficient approach is to find the prime factors of one or both numbers. As noted above, the GCD equals the product of the prime factors shared by the two numbers "a" and "b". Present methods for prime factorization are also inefficient; many modern cryptography systems even rely on that inefficiency.
The binary GCD algorithm is an efficient alternative that substitutes division with faster operations by exploiting the binary representation used by computers. Although it also scales as "O"("h"2), it is generally faster than the Euclidean algorithm on real computers. Additional efficiency can be gleaned by examining only the leading digits of the two numbers "a" and "b". The binary algorithm can be extended to other bases ("k"-ary algorithms), with up to fivefold increases in speed. Lehmer's GCD algorithm uses the same general principle as the binary algorithm to speed up GCD computations in arbitrary bases.
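A minimal Python sketch of the binary (Stein's) algorithm, using only shifts and subtraction, reproduces the earlier worked example:

```python
import math

def binary_gcd(a, b):
    """Binary GCD (Stein's algorithm): only halving, doubling and subtraction, for a, b >= 0."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:        # factor out powers of two common to a and b
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:
        a >>= 1
    while b != 0:
        while b & 1 == 0:
            b >>= 1
        if a > b:
            a, b = b, a
        b -= a
    return a << shift

assert binary_gcd(1071, 462) == math.gcd(1071, 462) == 21
```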
A recursive approach for very large integers (with more than 25,000 digits) leads to quasilinear integer GCD algorithms, such as those of Schönhage, and Stehlé and Zimmermann. These algorithms exploit the 2×2 matrix form of the Euclidean algorithm given above. These quasilinear methods generally scale as "O"("h" (log "h")2 log log "h").
Generalizations.
Although the Euclidean algorithm is used to find the greatest common divisor of two natural numbers (positive integers), it may be generalized to the real numbers, and to other mathematical objects, such as polynomials, quadratic integers and Hurwitz quaternions. In the latter cases, the Euclidean algorithm is used to demonstrate the crucial property of unique factorization, i.e., that such numbers can be factored uniquely into irreducible elements, the counterparts of prime numbers. Unique factorization is essential to many proofs of number theory.
Rational and real numbers.
Euclid's algorithm can be applied to real numbers, as described by Euclid in Book 10 of his "Elements". The goal of the algorithm is to identify a real number g such that two given real numbers, a and b, are integer multiples of it: "a" = "mg" and "b" = "ng", where m and n are integers. This identification is equivalent to finding an integer relation among the real numbers a and b; that is, it determines integers s and t such that "sa" + "tb" = 0. If such an equation is possible, "a" and "b" are called commensurable lengths, otherwise they are incommensurable lengths.
The real-number Euclidean algorithm differs from its integer counterpart in two respects. First, the remainders "r""k" are real numbers, although the quotients "q""k" are integers as before. Second, the algorithm is not guaranteed to end in a finite number N of steps. If it does, the fraction "a"/"b" is a rational number, i.e., the ratio of two integers
formula_82
and can be written as a finite continued fraction ["q"0; "q"1, "q"2, ..., "q""N"]. If the algorithm does not stop, the fraction "a"/"b" is an irrational number and can be described by an infinite continued fraction ["q"0; "q"1, "q"2, …]. Examples of infinite continued fractions are the golden ratio "φ" = [1; 1, 1, ...] and the square root of two, √2 = [1; 2, 2, ...]. The algorithm is unlikely to stop, since almost all ratios "a"/"b" of two real numbers are irrational.
An infinite continued fraction may be truncated at a step "k" ["q"0; "q"1, "q"2, ..., "q""k"] to yield an approximation to "a"/"b" that improves as "k" is increased. The approximation is described by convergents "m""k"/"n""k"; the numerators and denominators are coprime and obey the recurrence relation
formula_83
where "m"−1 = "n"−2 = 1 and "m"−2 = "n"−1 = 0 are the initial values of the recursion. The convergent "m""k"/"n""k" is the best rational number approximation to "a"/"b" with denominator "n""k":
formula_84
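A short Python sketch of this recurrence (an added illustration; floating-point arithmetic is adequate for the few terms shown, though exact arithmetic would be needed for many more) produces the quotients and convergents of √2:

```python
import math
from fractions import Fraction

def convergents(x, n_terms=8):
    """Quotients q_k of the continued fraction of x and the convergents m_k / n_k."""
    m_prev, m = 0, 1          # m_{-2} = 0, m_{-1} = 1
    n_prev, n = 1, 0          # n_{-2} = 1, n_{-1} = 0
    out = []
    for _ in range(n_terms):
        q = math.floor(x)
        m_prev, m = m, q * m + m_prev     # m_k = q_k m_{k-1} + m_{k-2}
        n_prev, n = n, q * n + n_prev     # n_k = q_k n_{k-1} + n_{k-2}
        out.append((q, Fraction(m, n)))
        if x == q:                        # rational input: the expansion terminates
            break
        x = 1 / (x - q)
    return out

for q, approx in convergents(math.sqrt(2)):
    print(q, approx, float(approx))
# quotients [1; 2, 2, 2, ...]; convergents 1, 3/2, 7/5, 17/12, 41/29, ...
```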
Polynomials.
Polynomials in a single variable "x" can be added, multiplied and factored into irreducible polynomials, which are the analogs of the prime numbers for integers. The greatest common divisor polynomial "g"("x") of two polynomials "a"("x") and "b"("x") is defined as the product of their shared irreducible polynomials, which can be identified using the Euclidean algorithm. The basic procedure is similar to that for integers. At each step k, a quotient polynomial "q""k"("x") and a remainder polynomial "r""k"("x") are identified to satisfy the recursive equation
formula_85
where "r"−2("x") = "a"("x") and "r"−1("x") = "b"("x"). Each quotient polynomial is chosen such that each remainder is either zero or has a degree that is smaller than the degree of its predecessor: deg["r""k"("x")] < deg["r""k"−1("x")]. Since the degree is a nonnegative integer, and since it decreases with every step, the Euclidean algorithm concludes in a finite number of steps. The last nonzero remainder is the greatest common divisor of the original two polynomials, "a"("x") and "b"("x").
For example, consider the following two quartic polynomials, which each factor into two quadratic polynomials
formula_86
Dividing "a"("x") by "b"("x") yields a remainder "r"0("x") = "x"3 + (2/3)"x"2 + (5/3)"x" − (2/3). In the next step, "b"("x") is divided by "r"0("x") yielding a remainder "r"1("x") = "x"2 + "x" + 2. Finally, dividing "r"0("x") by "r"1("x") yields a zero remainder, indicating that "r"1("x") is the greatest common divisor polynomial of "a"("x") and "b"("x"), consistent with their factorization.
Many of the applications described above for integers carry over to polynomials. The Euclidean algorithm can be used to solve linear Diophantine equations and Chinese remainder problems for polynomials; continued fractions of polynomials can also be defined.
The polynomial Euclidean algorithm has other applications, such as Sturm chains, a method for counting the zeros of a polynomial that lie inside a given real interval. This in turn has applications in several areas, such as the Routh–Hurwitz stability criterion in control theory.
Finally, the coefficients of the polynomials need not be drawn from integers, real numbers or even the complex numbers. For example, the coefficients may be drawn from a general field, such as the finite fields GF("p") described above. The corresponding conclusions about the Euclidean algorithm and its applications hold even for such polynomials.
Gaussian integers.
The Gaussian integers are complex numbers of the form "α" = "u" + "vi", where u and v are ordinary integers and i is the square root of negative one. By defining an analog of the Euclidean algorithm, Gaussian integers can be shown to be uniquely factorizable, by the argument above. This unique factorization is helpful in many applications, such as deriving all Pythagorean triples or proving Fermat's theorem on sums of two squares. In general, the Euclidean algorithm is convenient in such applications, but not essential; for example, the theorems can often be proven by other arguments.
The Euclidean algorithm developed for two Gaussian integers α and β is nearly the same as that for ordinary integers, but differs in two respects. As before, we set "r"−2 = "α" and "r"−1 = "β", and the task at each step k is to identify a quotient "q""k" and a remainder "r""k" such that
formula_87
where every remainder is strictly smaller than its predecessor, in the sense of the norm defined below. The first difference is that the quotients and remainders are themselves Gaussian integers, and thus are complex numbers. The quotients "q""k" are generally found by rounding the real and imaginary parts of the exact ratio (such as the complex number "α"/"β") to the nearest integers. The second difference lies in the necessity of defining how one complex remainder can be "smaller" than another. To do this, a norm function "f"("u" + "vi") = "u"2 + "v"2 is defined, which converts every Gaussian integer "u" + "vi" into an ordinary integer. After each step "k" of the Euclidean algorithm, the norm of the remainder "f"("r""k") is smaller than the norm of the preceding remainder, "f"("r""k"−1). Since the norm is a nonnegative integer and decreases with every step, the Euclidean algorithm for Gaussian integers ends in a finite number of steps. The final nonzero remainder is gcd("α", "β"), the Gaussian integer of largest norm that divides both "α" and "β"; it is unique up to multiplication by a unit, ±1 or ±"i".
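A minimal Python sketch of this procedure (an added illustration; the pair 5 + 3"i" and 2 + 8"i" is an assumed example, and floating-point division is adequate only for small inputs, exact arithmetic would be needed in general):

```python
def gaussian_gcd(alpha, beta):
    """Euclidean algorithm for Gaussian integers, stored as Python complex numbers with
    integer real and imaginary parts. The result is unique only up to a unit 1, -1, i, -i."""
    while beta != 0:
        ratio = alpha / beta
        q = complex(round(ratio.real), round(ratio.imag))   # round both parts to nearest integers
        alpha, beta = beta, alpha - q * beta                 # the norm of the remainder strictly drops
    return alpha

g = gaussian_gcd(complex(5, 3), complex(2, 8))
print(g, g.real**2 + g.imag**2)    # a greatest common divisor (here 5+3i, norm 34), up to a unit
```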
Many of the other applications of the Euclidean algorithm carry over to Gaussian integers. For example, it can be used to solve linear Diophantine equations and Chinese remainder problems for Gaussian integers; continued fractions of Gaussian integers can also be defined.
Euclidean domains.
A set of elements under two binary operations, denoted as addition and multiplication, is called a Euclidean domain if it forms a commutative ring R and, roughly speaking, if a generalized Euclidean algorithm can be performed on them. The two operations of such a ring need not be the addition and multiplication of ordinary arithmetic; rather, they can be more general, such as the operations of a mathematical group or monoid. Nevertheless, these general operations should respect many of the laws governing ordinary arithmetic, such as commutativity, associativity and distributivity.
The generalized Euclidean algorithm requires a "Euclidean function", i.e., a mapping f from R into the set of nonnegative integers such that, for any two nonzero elements a and b in R, there exist q and r in R such that "a" = "qb" + "r" and "f"("r") < "f"("b"). Examples of such mappings are the absolute value for integers, the degree for univariate polynomials, and the norm for Gaussian integers above. The basic principle is that each step of the algorithm reduces "f" inexorably; hence, if f can be reduced only a finite number of times, the algorithm must stop in a finite number of steps. This principle relies on the well-ordering property of the non-negative integers, which asserts that every non-empty set of non-negative integers has a smallest member.
The fundamental theorem of arithmetic applies to any Euclidean domain: Any number from a Euclidean domain can be factored uniquely into irreducible elements. Any Euclidean domain is a unique factorization domain (UFD), although the converse is not true. The Euclidean domains and the UFD's are subclasses of the GCD domains, domains in which a greatest common divisor of two numbers always exists. In other words, a greatest common divisor may exist (for all pairs of elements in a domain), although it may not be possible to find it using a Euclidean algorithm. A Euclidean domain is always a principal ideal domain (PID), an integral domain in which every ideal is a principal ideal. Again, the converse is not true: not every PID is a Euclidean domain.
The unique factorization of Euclidean domains is useful in many applications. For example, the unique factorization of the Gaussian integers is convenient in deriving formulae for all Pythagorean triples and in proving Fermat's theorem on sums of two squares. Unique factorization was also a key element in an attempted proof of Fermat's Last Theorem published in 1847 by Gabriel Lamé, the same mathematician who analyzed the efficiency of Euclid's algorithm, based on a suggestion of Joseph Liouville. Lamé's approach required the unique factorization of numbers of the form "x" + "ωy", where x and y are integers, and "ω" = "e"2"iπ"/"n" is an nth root of 1, that is, "ω""n" = 1. Although this approach succeeds for some values of n (such as "n" = 3, the Eisenstein integers), in general such numbers do not factor uniquely. This failure of unique factorization in some cyclotomic fields led Ernst Kummer to the concept of ideal numbers and, later, Richard Dedekind to ideals.
Unique factorization of quadratic integers.
The quadratic integer rings are helpful to illustrate Euclidean domains. Quadratic integers are generalizations of the Gaussian integers in which the imaginary unit "i" is replaced by a number ω. Thus, they have the form "u" + "vω", where u and v are integers and ω has one of two forms, depending on a parameter D. If D does not equal a multiple of four plus one, then
formula_88
If, however, "D" does equal a multiple of four plus one, then
formula_89
If the function f corresponds to a norm function, such as that used to order the Gaussian integers above, then the domain is known as "norm-Euclidean". The norm-Euclidean rings of quadratic integers are exactly those where D is one of the values −11, −7, −3, −2, −1, 2, 3, 5, 6, 7, 11, 13, 17, 19, 21, 29, 33, 37, 41, 57, or 73. The cases "D" = −1 and "D" = −3 yield the Gaussian integers and Eisenstein integers, respectively.
If f is allowed to be any Euclidean function, then the list of possible values of D for which the domain is Euclidean is not yet known. The first example of a Euclidean domain that was not norm-Euclidean (with "D" = 69) was published in 1994. In 1973, Weinberger proved that a quadratic integer ring with "D" > 0 is Euclidean if, and only if, it is a principal ideal domain, provided that the generalized Riemann hypothesis holds.
Noncommutative rings.
The Euclidean algorithm may be applied to some noncommutative rings such as the set of Hurwitz quaternions. Let α and β represent two elements from such a ring. They have a common right divisor δ if "α" = "ξδ" and "β" = "ηδ" for some choice of ξ and η in the ring. Similarly, they have a common left divisor if "α" = "dξ" and "β" = "dη" for some choice of ξ and η in the ring. Since multiplication is not commutative, there are two versions of the Euclidean algorithm, one for right divisors and one for left divisors. Choosing the right divisors, the first step in finding the gcd("α", "β") by the Euclidean algorithm can be written
formula_90
where "ψ"0 represents the quotient and "ρ"0 the remainder. Here the quotent and remainder are chosen so that (if nonzero) the remainder has "N"("ρ"0) < "N"("β") for a "Euclidean function" "N" defined analogously to the Euclidean functions of Euclidean domains in the non-commutative case. This equation shows that any common right divisor of α and β is likewise a common divisor of the remainder "ρ"0. The analogous equation for the left divisors would be
formula_91
With either choice, the process is repeated as above until the greatest common right or left divisor is identified. As in the Euclidean domain, the "size" of the remainder "ρ"0 (formally, its Euclidean function or "norm") must be strictly smaller than that of "β", and there must be only a finite number of possible sizes for "ρ"0, so that the algorithm is guaranteed to terminate.
Many results for the GCD carry over to noncommutative numbers. For example, Bézout's identity states that the right gcd("α", "β") can be expressed as a linear combination of α and β. In other words, there are numbers σ and τ such that
formula_92
The analogous identity for the left GCD is nearly the same:
formula_93
Bézout's identity can be used to solve Diophantine equations. For instance, one of the standard proofs of Lagrange's four-square theorem, that every positive integer can be represented as a sum of four squares, is based on quaternion GCDs in this way.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r_{-2} = a"
},
{
"math_id": 1,
"text": "r_{-1} = b"
},
{
"math_id": 2,
"text": "\\{ r_{-2} = a,\\ r_{-1} = b,\\ r_0,\\ r_1,\\ \\cdots,\\ r_{n-1},\\ r_n = 0 \\}"
},
{
"math_id": 3,
"text": "r_{k+1} < r_k"
},
{
"math_id": 4,
"text": "r_{n-1}"
},
{
"math_id": 5,
"text": "\\text{gcd}(a,b) = r_{n-1}"
},
{
"math_id": 6,
"text": "r_k"
},
{
"math_id": 7,
"text": "(r_{k-2},\\ r_{k-1})"
},
{
"math_id": 8,
"text": "q_k"
},
{
"math_id": 9,
"text": "r_{k-2} = q_k \\cdot r_{k-1} + r_k \\text{, with } \\ r_{k-1} > r_k \\geq 0."
},
{
"math_id": 10,
"text": "\\{ r_k \\}"
},
{
"math_id": 11,
"text": "r_k \\ge 0"
},
{
"math_id": 12,
"text": "k"
},
{
"math_id": 13,
"text": "r_{k-1}"
},
{
"math_id": 14,
"text": "r_n"
},
{
"math_id": 15,
"text": "\\{r_{-2} = 1071,\\ r_{-1} = 462 \\}"
},
{
"math_id": 16,
"text": "r_0"
},
{
"math_id": 17,
"text": "q_0"
},
{
"math_id": 18,
"text": "r_0 < r_{-1}"
},
{
"math_id": 19,
"text": "1071 = q_0 \\cdot 462 + r_0"
},
{
"math_id": 20,
"text": "q_0 = 2"
},
{
"math_id": 21,
"text": "1071 = 2 \\cdot 462 + 147"
},
{
"math_id": 22,
"text": "r_0 = 147"
},
{
"math_id": 23,
"text": "\\{1071,\\ 462,\\ r_0 = 147 \\}"
},
{
"math_id": 24,
"text": "r_1"
},
{
"math_id": 25,
"text": "q_1"
},
{
"math_id": 26,
"text": "r_1 < r_0"
},
{
"math_id": 27,
"text": "462 = q_1 \\cdot 147 + r_1"
},
{
"math_id": 28,
"text": "q_1 = 3"
},
{
"math_id": 29,
"text": "462 = 3 \\cdot 147 + 21"
},
{
"math_id": 30,
"text": "r_1 = 21"
},
{
"math_id": 31,
"text": "\\{1071,\\ 462,\\ 147,\\ r_1 = 21 \\}"
},
{
"math_id": 32,
"text": "r_2"
},
{
"math_id": 33,
"text": "q_2"
},
{
"math_id": 34,
"text": "r_2 < r_1"
},
{
"math_id": 35,
"text": "147 = q_2 \\cdot 21 + r_2"
},
{
"math_id": 36,
"text": "q_2 = 7"
},
{
"math_id": 37,
"text": "147 = 7 \\cdot 21 + 0"
},
{
"math_id": 38,
"text": "r_2 = 0"
},
{
"math_id": 39,
"text": "\\{1071,\\ 462,\\ 147,\\ 21,\\ r_2 = 0 \\}"
},
{
"math_id": 40,
"text": "0"
},
{
"math_id": 41,
"text": "21"
},
{
"math_id": 42,
"text": "\\text{gcd}(1071,\\ 462) = 21."
},
{
"math_id": 43,
"text": "a"
},
{
"math_id": 44,
"text": "b"
},
{
"math_id": 45,
"text": "a = b"
},
{
"math_id": 46,
"text": "\\text{gcd}(a,\\ a) = a"
},
{
"math_id": 47,
"text": "\\{a,\\ a,\\ 0\\}"
},
{
"math_id": 48,
"text": "a < b"
},
{
"math_id": 49,
"text": "a \\equiv 0 \\cdot b + a"
},
{
"math_id": 50,
"text": "\\{a,\\ b,\\ a,\\ \\cdots \\}"
},
{
"math_id": 51,
"text": "r_0 < b"
},
{
"math_id": 52,
"text": "r_{k} < r_{k-1}"
},
{
"math_id": 53,
"text": "k \\ge 0"
},
{
"math_id": 54,
"text": "\\{ r_{k-1} \\}"
},
{
"math_id": 55,
"text": "a = r_{-2}"
},
{
"math_id": 56,
"text": "\\left |\\frac{r_{k+1}}{r_k}\\right |<\\frac{1}{\\varphi}\\sim 0.618,"
},
{
"math_id": 57,
"text": "\\varphi"
},
{
"math_id": 58,
"text": "\n\\begin{align}\na & = q_0 b + r_0 \\\\\nb & = q_1 r_0 + r_1 \\\\\n& \\,\\,\\,\\vdots \\\\\nr_{N-2} & = q_N r_{N-1} + 0\n\\end{align}\n"
},
{
"math_id": 59,
"text": "\n\\begin{pmatrix} a \\\\ b \\end{pmatrix} =\n\\begin{pmatrix} q_0 & 1 \\\\ 1 & 0 \\end{pmatrix} \\begin{pmatrix} b \\\\ r_0 \\end{pmatrix} =\n\\begin{pmatrix} q_0 & 1 \\\\ 1 & 0 \\end{pmatrix} \\begin{pmatrix} q_1 & 1 \\\\ 1 & 0 \\end{pmatrix} \\begin{pmatrix} r_0 \\\\ r_1 \\end{pmatrix} =\n\\cdots =\n\\prod_{i=0}^N \\begin{pmatrix} q_i & 1 \\\\ 1 & 0 \\end{pmatrix} \\begin{pmatrix} r_{N-1} \\\\ 0 \\end{pmatrix} \\,.\n"
},
{
"math_id": 60,
"text": "\n\\mathbf{M} = \\begin{pmatrix} m_{11} & m_{12} \\\\ m_{21} & m_{22} \\end{pmatrix} =\n\\prod_{i=0}^N \\begin{pmatrix} q_i & 1 \\\\ 1 & 0 \\end{pmatrix} =\n\\begin{pmatrix} q_0 & 1 \\\\ 1 & 0 \\end{pmatrix} \\begin{pmatrix} q_1 & 1 \\\\ 1 & 0 \\end{pmatrix} \\cdots \\begin{pmatrix} q_{N} & 1 \\\\ 1 & 0 \\end{pmatrix} \\,.\n"
},
{
"math_id": 61,
"text": " \n\\begin{pmatrix} a \\\\ b \\end{pmatrix} =\n\\mathbf{M} \\begin{pmatrix} r_{N-1} \\\\ 0 \\end{pmatrix} =\n\\mathbf{M} \\begin{pmatrix} g \\\\ 0 \\end{pmatrix} \\,.\n"
},
{
"math_id": 62,
"text": "\n\\begin{pmatrix} g \\\\ 0 \\end{pmatrix} =\n\\mathbf{M}^{-1} \\begin{pmatrix} a \\\\ b \\end{pmatrix} =\n(-1)^{N+1} \\begin{pmatrix} m_{22} & -m_{12} \\\\ -m_{21} & m_{11} \\end{pmatrix} \\begin{pmatrix} a \\\\ b \\end{pmatrix} \\,.\n"
},
{
"math_id": 63,
"text": "\n\\begin{align}\nx_1 & \\equiv x \\pmod {m_1} \\\\\nx_2 & \\equiv x \\pmod {m_2} \\\\\n& \\,\\,\\,\\vdots \\\\\nx_N & \\equiv x \\pmod {m_N} \\,.\n\\end{align}\n"
},
{
"math_id": 64,
"text": " M_i = \\frac M {m_i}. "
},
{
"math_id": 65,
"text": " M_i h_i \\equiv 1 \\pmod {m_i} \\,. "
},
{
"math_id": 66,
"text": " x \\equiv (x_1 M_1 h_1 + x_2 M_2 h_2 + \\cdots + x_N M_N h_N) \\pmod M \\,."
},
{
"math_id": 67,
"text": " \n\\begin{align}\n & \\gcd(3,4) & \\leftarrow \\\\\n= {} & \\gcd(3,1) & \\rightarrow \\\\\n= {} & \\gcd(2,1) & \\rightarrow \\\\\n= {} & \\gcd(1,1).\n\\end{align}\n"
},
{
"math_id": 68,
"text": "\n\\begin{align}\n\\frac a b &= q_0 + \\frac{r_0} b \\\\\n\\frac b {r_0} &= q_1 + \\frac{r_1}{r_0} \\\\\n\\frac{r_0}{r_1} &= q_2 + \\frac{r_2}{r_1} \\\\\n& \\,\\,\\, \\vdots \\\\\n\\frac{r_{k-2}}{r_{k-1}} &= q_k + \\frac{r_k}{r_{k-1}} \\\\\n& \\,\\,\\, \\vdots \\\\\n\\frac{r_{N-2}}{r_{N-1}} &= q_N\\,.\n\\end{align}\n"
},
{
"math_id": 69,
"text": "\\frac a b = q_0 + \\cfrac 1 {q_1 + \\cfrac{r_1}{r_0}} \\,."
},
{
"math_id": 70,
"text": "\\frac a b = q_0 + \\cfrac 1 {q_1 + \\cfrac 1 {q_2 + \\cfrac{r_2}{r_1}}}\\,. "
},
{
"math_id": 71,
"text": "\\frac a b = q_0 + \\cfrac 1 {q_1 + \\cfrac 1 {q_2 + \\cfrac{1}{\\ddots + \\cfrac 1 {q_N}}}} = [ q_0; q_1, q_2, \\ldots , q_N ] \\,."
},
{
"math_id": 72,
"text": "\\frac{1071}{462} = 2 + \\cfrac 1 {3 + \\cfrac 1 7} = [2; 3, 7]"
},
{
"math_id": 73,
"text": "T(a) = \\frac 1 a \\sum_{0 \\leq b<a} T(a, b).\n"
},
{
"math_id": 74,
"text": "\\tau(a) = \\frac 1 {\\varphi(a)} \\sum_{\\begin{smallmatrix} 0 \\leq b<a \\\\ \\gcd(a, b) = 1 \\end{smallmatrix}} T(a, b).\n"
},
{
"math_id": 75,
"text": "\\tau(a) = \\frac{12}{\\pi^2}\\ln 2 \\ln a + C + O(a^{-1/6-\\varepsilon})"
},
{
"math_id": 76,
"text": "C= -\\frac 1 2 + \\frac{6 \\ln 2}{\\pi^2}\\left(4\\gamma -\\frac{24}{\\pi^2}\\zeta'(2) + 3\\ln 2 - 2\\right) \\approx 1.467"
},
{
"math_id": 77,
"text": " T(a) = \\frac 1 a \\sum_{d \\mid a} \\varphi(d) \\tau(d)\n"
},
{
"math_id": 78,
"text": "T(a) \\approx C + \\frac{12}{\\pi^2} \\ln 2\\, \\biggl({\\ln a} - \\sum_{d \\mid a} \\frac{\\Lambda(d)} d\\biggr)"
},
{
"math_id": 79,
"text": "Y(n) = \\frac 1 {n^2} \\sum_{a=1}^n \\sum_{b=1}^n T(a, b) = \\frac 1 n \\sum_{a=1}^n T(a).\n"
},
{
"math_id": 80,
"text": "Y(n) \\approx \\frac{12}{\\pi^2} \\ln 2 \\ln n + 0.06."
},
{
"math_id": 81,
"text": "\nO\\Big(\\sum_{i<N}h_i(h_i-h_{i+1}+2)\\Big)\\subseteq O\\Big(h\\sum_{i<N}(h_i-h_{i+1}+2) \\Big) \\subseteq O(h(h_0+2N))\\subseteq O(h^2)."
},
{
"math_id": 82,
"text": "\\frac{a}{b} = \\frac{mg}{ng} = \\frac{m}{n},"
},
{
"math_id": 83,
"text": "\\begin{align}\n m_k &= q_k m_{k-1} + m_{k-2} \\\\\n n_k &= q_k n_{k-1} + n_{k-2},\n \\end{align}"
},
{
"math_id": 84,
"text": " \\left|\\frac{a}{b} - \\frac{m_k}{n_k}\\right| < \\frac{1}{n_k^2}."
},
{
"math_id": 85,
"text": "r_{k-2}(x) = q_k(x)r_{k-1}(x) + r_k(x),"
},
{
"math_id": 86,
"text": "\\begin{align}\n a(x) &= x^4 - 4x^3 + 4x^2 - 3x + 14 = (x^2 - 5x + 7)(x^2 + x + 2) \\qquad \\text{and}\\\\\n b(x) &= x^4 + 8x^3 + 12x^2 + 17x + 6 = (x^2 + 7x + 3)(x^2 + x + 2).\n \\end{align}"
},
{
"math_id": 87,
"text": "r_k = r_{k-2} - q_k r_{k-1},"
},
{
"math_id": 88,
"text": "\\omega = \\sqrt D ."
},
{
"math_id": 89,
"text": "\\omega = \\frac{1 + \\sqrt{D}}{2} ."
},
{
"math_id": 90,
"text": "\\rho_0 = \\alpha - \\psi_0\\beta = (\\xi - \\psi_0\\eta)\\delta,"
},
{
"math_id": 91,
"text": "\\rho_0 = \\alpha - \\beta\\psi_0 = \\delta(\\xi - \\eta\\psi_0)."
},
{
"math_id": 92,
"text": "\\Gamma_\\text{right} = \\sigma\\alpha + \\tau\\beta."
},
{
"math_id": 93,
"text": "\\Gamma_\\text{left} = \\alpha\\sigma + \\beta\\tau."
}
] | https://en.wikipedia.org/wiki?curid=10377 |
10377468 | Free-by-cyclic group | In group theory, especially in geometric group theory, the class of free-by-cyclic groups has been deeply studied as an important source of examples. A group formula_0 is said to be free-by-cyclic if it has a free normal subgroup formula_1 such that the quotient group formula_2 is cyclic. In other words, formula_0 is free-by-cyclic if it can be expressed as a group extension of a free group by a cyclic group (NB there are two conventions for 'by'). Usually, we assume formula_3 is finitely generated and the quotient is an infinite cyclic group. Equivalently, we can define a free-by-cyclic group constructively: if formula_4 is an automorphism of formula_3, the semidirect product formula_5 is a free-by-cyclic group.
An isomorphism class of a free-by-cyclic group is determined by an outer automorphism. If two automorphisms formula_6 represent the same outer automorphism, that is, formula_7 for some inner automorphism formula_8, the free-by-cyclic groups formula_5 and formula_9 are isomorphic.
Examples.
The class of free-by-cyclic groups contains various groups, as follows:
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": " F"
},
{
"math_id": 2,
"text": " G/F"
},
{
"math_id": 3,
"text": " F "
},
{
"math_id": 4,
"text": " \\varphi "
},
{
"math_id": 5,
"text": " F \\rtimes_\\varphi \\mathbb{Z} "
},
{
"math_id": 6,
"text": " \\varphi, \\psi "
},
{
"math_id": 7,
"text": " \\varphi = \\psi\\iota "
},
{
"math_id": 8,
"text": " \\iota "
},
{
"math_id": 9,
"text": " F \\rtimes_\\psi \\mathbb{Z} "
}
] | https://en.wikipedia.org/wiki?curid=10377468 |
1037781 | Beta function (physics) | Function that encodes the dependence of a coupling parameter on the energy scale
In theoretical physics, specifically quantum field theory, a beta function, "β(g)", encodes the dependence of a coupling parameter, "g", on the energy scale, "μ", of a given physical process described by quantum field theory.
It is defined as
formula_0
and, because of the underlying renormalization group, it has no explicit dependence on "μ", so it only depends on "μ" implicitly through "g".
This dependence on the energy scale thus specified is known as the running of the coupling parameter, a fundamental
feature of scale-dependence in quantum field theory, and its explicit computation is achievable through a variety of mathematical techniques.
Scale invariance.
If the beta functions of a quantum field theory vanish, usually at particular values of the coupling parameters, then the theory is said to be scale-invariant. Almost all scale-invariant QFTs are also conformally invariant. The study of such theories is conformal field theory.
The coupling parameters of a quantum field theory can run even if the corresponding classical field theory is scale-invariant. In this case, the non-zero beta function tells us that the classical scale invariance is anomalous.
Examples.
Beta functions are usually computed in some kind of approximation scheme. An example is perturbation theory, where one assumes that the coupling parameters are small. One can then make an expansion in powers of the coupling parameters and truncate the higher-order terms (also known as higher loop contributions, due to the number of loops in the corresponding Feynman graphs).
Here are some examples of beta functions computed in perturbation theory:
Quantum electrodynamics.
The one-loop beta function in quantum electrodynamics (QED) is
formula_1
or, equivalently,
formula_2
written in terms of the fine structure constant in natural units, "α" = "e"2/4π.
This beta function tells us that the coupling increases with increasing energy scale, and QED becomes strongly coupled at high energy. In fact, the coupling apparently becomes infinite at some finite energy, resulting in a Landau pole. However, one cannot expect the perturbative beta function to give accurate results at strong coupling, and so it is likely that the Landau pole is an artifact of applying perturbation theory in a situation where it is no longer valid.
Quantum chromodynamics.
The one-loop beta function in quantum chromodynamics with formula_3 flavours and formula_4 scalar colored bosons is
formula_5
or
formula_6
written in terms of "αs" = formula_7 .
Assuming "n""s"=0, if "n""f" ≤ 16, the ensuing beta function dictates that the coupling decreases with increasing energy scale, a phenomenon known as asymptotic freedom. Conversely, the coupling increases with decreasing energy scale. This means that the coupling becomes large at low energies, and one can no longer rely on perturbation theory.
SU(N) Non-Abelian gauge theory.
While the (Yang–Mills) gauge group of QCD is formula_8, and determines 3 colors, we can generalize to any number of colors, formula_9, with a gauge group formula_10. Then for this gauge group, with Dirac fermions in a representation formula_11 of formula_12 and with complex scalars in a representation formula_13, the one-loop beta function is
formula_14
where formula_15 is the quadratic Casimir of formula_12 and formula_16 is another Casimir invariant defined by formula_17 for generators formula_18 of the Lie algebra in the representation R. (For Weyl or Majorana fermions, replace formula_19 by formula_20, and for real scalars, replace formula_21 by formula_22.) For gauge fields ("i.e." gluons), necessarily in the adjoint of formula_12, formula_23; for fermions in the fundamental (or anti-fundamental) representation of formula_12, formula_24. Then for QCD, with formula_25, the above equation reduces to that listed for the quantum chromodynamics beta function.
This famous result was derived nearly simultaneously in 1973 by Politzer, Gross and Wilczek, for which the three were awarded the Nobel Prize in Physics in 2004.
Unbeknownst to these authors, G. 't Hooft had announced the result in a comment following a talk by K. Symanzik at a small meeting in Marseilles in June 1972, but he never published it.
Standard Model Higgs–Yukawa Couplings.
In the Standard Model, quarks and leptons have "Yukawa couplings" to the Higgs boson. These couplings determine the mass of each particle. Most of the quarks' and leptons' Yukawa couplings are small compared to the top quark's Yukawa coupling. These Yukawa couplings change their values depending on the energy scale at which they are measured, through "running". The dynamics of Yukawa couplings of quarks are determined by the renormalization group equation:
formula_26,
where formula_27 is the color gauge coupling (which is a function of formula_28 and associated with asymptotic freedom) and formula_29 is the Yukawa coupling. This equation describes how the Yukawa coupling changes with energy scale formula_28.
The Yukawa couplings of the up, down, charm, strange and bottom quarks, are small at the extremely high energy scale of grand unification, formula_30 GeV. Therefore, the formula_31 term can be neglected in the above equation. Solving, we then find that formula_29 is increased slightly at the low energy scales at which the quark masses are generated by the Higgs, formula_32 GeV.
On the other hand, solutions to this equation for large initial values formula_29 cause the right-hand side to quickly approach smaller values as we descend in energy scale. The above equation then locks formula_29 to the QCD coupling formula_27. This is known as the (infrared) quasi-fixed point of the renormalization group equation for the Yukawa coupling. No matter what the initial starting value of the coupling is, if it is sufficiently large it will reach this quasi-fixed point value, and the corresponding quark mass is predicted.
The value of the quasi-fixed point is fairly precisely determined in the Standard Model, leading to a predicted top quark mass of 230 GeV. The observed top quark mass of 174 GeV is lower than the Standard Model prediction by about 30%, which suggests there may be more Higgs doublets beyond the single Standard Model Higgs boson.
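The quasi-fixed-point behaviour can be illustrated with a toy integration. The Python sketch below steps the quoted equation downward in "t" = ln "μ" over roughly ln(10^15/10^2) ≈ 30 e-folds while holding "g"3 fixed at an assumed value of 1.2, a crude simplification since "g"3 itself runs; it only shows that sufficiently large boundary values of "y" flow toward a common value near 4"g"3/3:

```python
import math

def run_down(y_high, g3=1.2, t_span=30.0, steps=30000):
    """Euler-integrate mu dy/dmu = y/(16 pi^2) (9/2 y^2 - 8 g3^2) towards lower energies,
    holding g3 constant (a simplification used only for illustration)."""
    y = y_high
    dt = t_span / steps
    for _ in range(steps):
        beta = y / (16 * math.pi**2) * (4.5 * y**2 - 8 * g3**2)
        y -= beta * dt                      # t = ln(mu) decreases as we run down in energy
    return y

for y0 in (1.0, 1.5, 2.0, 3.0):             # very different values at the high scale ...
    print(y0, round(run_down(y0), 3))       # ... end up close together at the low scale
print("constant-g3 fixed point 4*g3/3 =", round(4 * 1.2 / 3, 3))
```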
Minimal Supersymmetric Standard Model.
Renormalization group studies in the Minimal Supersymmetric Standard Model (MSSM) of grand unification and the Higgs–Yukawa fixed points gave encouraging signs that the theory was on the right track. So far, however, no evidence of the predicted MSSM particles has emerged in experiments at the Large Hadron Collider.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\beta(g) = \\frac{\\partial g}{\\partial \\ln(\\mu)} ~,"
},
{
"math_id": 1,
"text": "\\beta(e)=\\frac{e^3}{12\\pi^2}~,"
},
{
"math_id": 2,
"text": "\\beta(\\alpha)=\\frac{2\\alpha^2}{3\\pi}~,"
},
{
"math_id": 3,
"text": "n_f"
},
{
"math_id": 4,
"text": "n_s"
},
{
"math_id": 5,
"text": "\\beta(g)=-\\left(11- \\frac{n_s}{6} - \\frac{2n_f}{3}\\right)\\frac{g^3}{16\\pi^2}~,"
},
{
"math_id": 6,
"text": "\\beta(\\alpha_s)=-\\left(11- \\frac{n_s}{6}-\\frac{2n_f}{3}\\right)\\frac{\\alpha_s^2}{2\\pi}~,"
},
{
"math_id": 7,
"text": "g^2/4\\pi"
},
{
"math_id": 8,
"text": "SU(3)"
},
{
"math_id": 9,
"text": "N_c"
},
{
"math_id": 10,
"text": "G=SU(N_c)"
},
{
"math_id": 11,
"text": "R_f"
},
{
"math_id": 12,
"text": "G"
},
{
"math_id": 13,
"text": "R_s"
},
{
"math_id": 14,
"text": "\\beta(g)=-\\left(\\frac{11}{3}C_2(G)-\\frac{1}{3}n_sT(R_s)-\\frac{4}{3}n_f T(R_f)\\right)\\frac{g^3}{16\\pi^2}~,"
},
{
"math_id": 15,
"text": "C_2(G)"
},
{
"math_id": 16,
"text": "T(R)"
},
{
"math_id": 17,
"text": "Tr (T^a_RT^b_R) = T(R)\\delta^{ab}"
},
{
"math_id": 18,
"text": "T^{a,b}_R"
},
{
"math_id": 19,
"text": "4/3"
},
{
"math_id": 20,
"text": "2/3"
},
{
"math_id": 21,
"text": "1/3"
},
{
"math_id": 22,
"text": "1/6"
},
{
"math_id": 23,
"text": "C_2(G) = N_c"
},
{
"math_id": 24,
"text": "T(R) = 1/2"
},
{
"math_id": 25,
"text": "N_c = 3"
},
{
"math_id": 26,
"text": "\\mu \\frac{\\partial}{\\partial\\mu} y \\approx \\frac{y}{16\\pi^2}\\left(\\frac{9}{2}y^2 - 8 g_3^2\\right)"
},
{
"math_id": 27,
"text": "g_3"
},
{
"math_id": 28,
"text": "\\mu"
},
{
"math_id": 29,
"text": "y"
},
{
"math_id": 30,
"text": " \\mu \\approx 10^{15} "
},
{
"math_id": 31,
"text": "y^2"
},
{
"math_id": 32,
"text": " \\mu \\approx 100 "
}
] | https://en.wikipedia.org/wiki?curid=1037781 |
10377971 | Virtually | Mathematical concept
In mathematics, especially in the area of abstract algebra that studies infinite groups, the adverb virtually is used to modify a property so that it need only hold for a subgroup of finite index. Given a property P, the group "G" is said to be "virtually P" if there is a finite index subgroup formula_0 such that "H" has property P.
Common uses for this would be when P is abelian, nilpotent, solvable or free. For example, virtually solvable groups are one of the two alternatives in the Tits alternative, while Gromov's theorem states that the finitely generated groups with polynomial growth are precisely the finitely generated virtually nilpotent groups.
This terminology is also used when P is just another group. That is, if "G" and "H" are groups then "G" is "virtually" "H" if "G" has a subgroup "K" of finite index in "G" such that "K" is isomorphic to "H".
In particular, a group is virtually trivial if and only if it is finite. Two groups are virtually equal if and only if they are commensurable.
Examples.
Virtually abelian.
The following groups are virtually abelian.
Virtually nilpotent.
Gromov's theorem says that a finitely generated group is virtually nilpotent if and only if it has polynomial growth.
Virtually free.
It follows from Stallings' theorem that any torsion-free virtually free group is free.
Others.
The free group formula_4 on 2 generators is virtually formula_5 for any formula_6 as a consequence of the Nielsen–Schreier theorem and the Schreier index formula.
The group formula_7 is virtually connected as formula_8 has index 2 in it. | [
{
"math_id": 0,
"text": "H \\le G"
},
{
"math_id": 1,
"text": "N\\rtimes H"
},
{
"math_id": 2,
"text": "H*K"
},
{
"math_id": 3,
"text": "\\operatorname{PSL}(2,\\Z)"
},
{
"math_id": 4,
"text": "F_2"
},
{
"math_id": 5,
"text": "F_n"
},
{
"math_id": 6,
"text": "n\\ge 2"
},
{
"math_id": 7,
"text": "\\operatorname{O}(n)"
},
{
"math_id": 8,
"text": "\\operatorname{SO}(n)"
}
] | https://en.wikipedia.org/wiki?curid=10377971 |
1037854 | Free electron model | Simple model for the behaviour of valence electrons in a crystal structure of a metallic solid
In solid-state physics, the free electron model is a quantum mechanical model for the behaviour of charge carriers in a metallic solid. It was developed in 1927, principally by Arnold Sommerfeld, who combined the classical Drude model with quantum mechanical Fermi–Dirac statistics and hence it is also known as the Drude–Sommerfeld model.
Given its simplicity, it is surprisingly successful in explaining many experimental phenomena, especially
The free electron model solved many of the inconsistencies related to the Drude model and gave insight into several other properties of metals. The free electron model considers that metals are composed of a quantum electron gas where ions play almost no role. The model can be very predictive when applied to alkali and noble metals.
Ideas and assumptions.
In the free electron model four main assumptions are taken into account:
The name of the model comes from the first two assumptions, as each electron can be treated as free particle with a respective quadratic relation between energy and momentum.
The crystal lattice is not explicitly taken into account in the free electron model, but a quantum-mechanical justification was given a year later (1928) by Bloch's theorem: an unbound electron moves in a periodic potential as a free electron in vacuum, except for the electron mass "me" becoming an effective mass "m*" which may deviate considerably from "me" (one can even use negative effective mass to describe conduction by electron holes). Effective masses can be derived from band structure computations that were not originally taken into account in the free electron model.
From the Drude model.
Many physical properties follow directly from the Drude model, as some equations do not depend on the statistical distribution of the particles. Taking the classical velocity distribution of an ideal gas or the velocity distribution of a Fermi gas only changes the results related to the speed of the electrons.
Mainly, the free electron model and the Drude model predict the same DC electrical conductivity "σ" for Ohm's law, that is
formula_1 with formula_2
where formula_3 is the current density, formula_4 is the external electric field, formula_5 is the electronic density (number of electrons/volume), formula_0 is the mean free time and formula_6 is the electron electric charge.
Other quantities that remain the same under the free electron model as under Drude's are the AC susceptibility, the plasma frequency, the magnetoresistance, and the Hall coefficient related to the Hall effect.
Properties of an electron gas.
Many properties of the free electron model follow directly from equations related to the Fermi gas, as the independent electron approximation leads to an ensemble of non-interacting electrons. For a three-dimensional electron gas we can define the Fermi energy as
formula_7
where formula_8 is the reduced Planck constant. The Fermi energy defines the energy of the highest energy electron at zero temperature. For metals the Fermi energy is in the order of units of electronvolts above the free electron band minimum energy.
Density of states.
The 3D density of states (number of energy states, per energy per volume) of a non-interacting electron gas is given by:
formula_9
where formula_10 is the energy of a given electron. This formula takes into account the spin degeneracy but does not consider a possible energy shift due to the bottom of the conduction band. For 2D the density of states is constant, and for 1D it is inversely proportional to the square root of the electron energy.
Fermi level.
The chemical potential formula_11 of electrons in a solid is also known as the Fermi level and, like the related Fermi energy, often denoted formula_12. The Sommerfeld expansion can be used to calculate the Fermi level (formula_13) at higher temperatures as:
formula_14
where formula_15 is the temperature and we define formula_16 as the Fermi temperature (formula_17 is Boltzmann constant). The perturbative approach is justified as the Fermi temperature is usually of about 105 K for a metal, hence at room temperature or lower the Fermi energy formula_18 and the chemical potential formula_19 are practically equivalent.
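Plugging numbers into the expressions above gives a feel for the scales involved. The Python sketch below is an added illustration: it assumes a conduction-electron density of 8.5×10^28 m−3, roughly the value for copper, and uses physical constants from scipy:

```python
import math
from scipy.constants import hbar, m_e, k, e

n = 8.5e28                                        # assumed electron density in m^-3 (about copper)
E_F = hbar**2 / (2 * m_e) * (3 * math.pi**2 * n) ** (2 / 3)
T_F = E_F / k

print("E_F =", E_F / e, "eV")                     # roughly 7 eV
print("T_F =", T_F, "K")                          # roughly 8e4 K, far above room temperature
print("relative shift of the Fermi level at 300 K:",
      (math.pi**2 / 12) * (300.0 / T_F) ** 2)     # ~1e-5, so mu and E_F nearly coincide
```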
Compressibility of metals and degeneracy pressure.
The total energy per unit volume (at formula_20) can also be calculated by integrating over the phase space of the system, we obtain
formula_21
which does not depend on temperature. Compare with the energy per electron of an ideal gas: formula_22, which vanishes at zero temperature. For an ideal gas to have the same energy as the electron gas, the temperatures would need to be of the order of the Fermi temperature. Thermodynamically, this energy of the electron gas corresponds to a zero-temperature pressure given by
formula_23
where formula_24 is the volume and formula_25 is the total energy, the derivative performed at temperature and chemical potential constant. This pressure is called the electron degeneracy pressure and does not come from repulsion or motion of the electrons but from the restriction that no more than two electrons (due to the two values of spin) can occupy the same energy level. This pressure defines the compressibility or bulk modulus of the metal
formula_26
This expression gives the right order of magnitude for the bulk modulus for alkali metals and noble metals, which show that this pressure is as important as other effects inside the metal. For other metals the crystalline structure has to be taken into account.
Magnetic response.
According to the Bohr–Van Leeuwen theorem, a classical system at thermodynamic equilibrium cannot have a magnetic response. The magnetic properties of matter in terms of a microscopic theory are purely quantum mechanical. For an electron gas, the total magnetic response is paramagnetic and its magnetic susceptibility given by
formula_27
where formula_28 is the vacuum permeability and formula_29 is the Bohr magneton. This value results from the competition of two contributions: a diamagnetic contribution (known as Landau's diamagnetism) coming from the orbital motion of the electrons in the presence of a magnetic field, and a paramagnetic contribution (Pauli's paramagnetism). The latter contribution is three times larger in absolute value than the diamagnetic contribution and comes from the electron spin, an intrinsic quantum degree of freedom that can take two discrete values and is associated with the electron magnetic moment.
Corrections to Drude's model.
Heat capacity.
One open problem in solid-state physics before the arrival of quantum mechanics was to understand the heat capacity of metals. While most solids have a constant volumetric heat capacity given by the Dulong–Petit law, of about formula_30 at large temperatures, that law does not correctly describe the behavior at low temperatures. In the case of metals that are good conductors, it was also expected that the electrons contribute to the heat capacity.
The classical calculation using Drude's model, based on an ideal gas, provides a volumetric heat capacity given by
formula_31.
If this were the case, the heat capacity of a metal should be 1.5 times that obtained from the Dulong–Petit law.
Nevertheless, such a large additional contribution to the heat capacity of metals was never measured, raising suspicions about the argument above. By using Sommerfeld's expansion one can obtain corrections of the energy density at finite temperature and obtain the volumetric heat capacity of an electron gas, given by:
formula_32,
where the prefactor to formula_33 is considerably smaller than the 3/2 found in formula_34, about 100 times smaller at room temperature and much smaller at lower formula_35.
Evidently, the electronic contribution alone does not predict the Dulong–Petit law, i.e. the observation that the heat capacity of a metal is still constant at high temperatures. The free electron model can be improved in this sense by adding the contribution of the vibrations of the crystal lattice. Two famous quantum corrections include the Einstein solid model and the more refined Debye model. With the addition of the latter, the volumetric heat capacity of a metal at low temperatures can be more precisely written in the form,
formula_36,
where formula_37 and formula_38 are constants related to the material. The linear term comes from the electronic contribution while the cubic term comes from the Debye model. At high temperature this expression is no longer correct: the electronic heat capacity can be neglected, and the total heat capacity of the metal tends to a constant given by the Dulong–Petit law.
Mean free path.
Notice that without the relaxation time approximation, there is no reason for the electrons to deflect their motion, as there are no interactions, thus the mean free path should be infinite. The Drude model considered the mean free path of electrons to be close to the distance between ions in the material, in line with the earlier assumption that the diffusive motion of the electrons was due to collisions with the ions. The mean free paths in the free electron model are instead given by formula_39 (where formula_40 is the Fermi speed) and are in the order of hundreds of ångströms, at least one order of magnitude larger than any possible classical calculation. The mean free path is then not a result of electron–ion collisions but instead is related to imperfections in the material, either due to defects and impurities in the metal, or due to thermal fluctuations.
Thermal conductivity and thermopower.
While Drude's model predicts the same electrical conductivity as the free electron model, the two models predict slightly different thermal conductivities.
The thermal conductivity is given by formula_41 for free particles, and is proportional to the heat capacity and the mean free path, both of which depend on the model (formula_42 is the root-mean-square speed of the electrons, or the Fermi speed in the case of the free electron model). This implies that the ratio between thermal and electric conductivity is given by the Wiedemann–Franz law,
formula_43
where formula_44 is the Lorenz number, given by
formula_45
The free electron model is close to the measured value of formula_46 V2/K2, while the Drude prediction is about half of that value. The near agreement of the Drude model with the Lorenz number was a result of the classical kinetic energy of the electrons being about 100 times smaller than the quantum version, compensating the large value of the classical heat capacity.
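A short numerical check of the two Lorenz numbers quoted above (an added illustration; constants taken from scipy):

```python
import math
from scipy.constants import k, e

L_drude = 1.5 * (k / e) ** 2                       # classical Drude estimate, ~1.11e-8 V^2/K^2
L_free_electron = (math.pi**2 / 3) * (k / e) ** 2  # Sommerfeld value, ~2.44e-8 V^2/K^2
print(L_drude, L_free_electron)
```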
However, Drude's model predicts the wrong order of magnitude for the Seebeck coefficient (thermopower), which relates the generation of a potential difference by applying a temperature gradient across a sample formula_47. This coefficient can be shown to be formula_48, which is just proportional to the heat capacity, so the Drude model predicts a value that is about a hundred times larger than that of the free electron model. The latter gives a coefficient that is linear in temperature and provides much more accurate absolute values, in the order of a few tens of μV/K at room temperature. However, this model fails to predict the sign change of the thermopower in lithium and noble metals like gold and silver.
Inaccuracies and extensions.
The free electron model makes several predictions that are contradicted by experimental observation. We list some inaccuracies below:
Hall coefficient: both Drude's model and the free electron model predict a Hall coefficient of –1/|"ne"|. This value is independent of temperature and of the strength of the magnetic field. The Hall coefficient is actually dependent on the band structure, and the difference with the model can be quite dramatic when studying elements like magnesium and aluminium that have a strong magnetic field dependence. The free electron model also predicts that the transverse magnetoresistance, the resistance in the direction of the current, does not depend on the strength of the field. In almost all the cases it does.
Other inadequacies are present in the Wiedemann–Franz law at intermediate temperatures and the frequency-dependence of metals in the optical spectrum.
More exact values for the electrical conductivity and Wiedemann–Franz law can be obtained by softening the relaxation-time approximation by appealing to the Boltzmann transport equations.
The exchange interaction is totally excluded from this model and its inclusion can lead to other magnetic responses like ferromagnetism.
An immediate continuation to the free electron model can be obtained by assuming the empty lattice approximation, which forms the basis of the band structure model known as the nearly free electron model.
Adding repulsive interactions between electrons does not change the picture presented here very much. Lev Landau showed that a Fermi gas under repulsive interactions can be seen as a gas of equivalent quasiparticles that slightly modify the properties of the metal. Landau's model is now known as the Fermi liquid theory. More exotic phenomena like superconductivity, where interactions can be attractive, require a more refined theory.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\tau"
},
{
"math_id": 1,
"text": "\\mathbf{J} = \\sigma \\mathbf{E}\\quad"
},
{
"math_id": 2,
"text": "\\quad\\sigma = \\frac{ne^2\\tau}{m_e},"
},
{
"math_id": 3,
"text": "\\mathbf{J}"
},
{
"math_id": 4,
"text": "\\mathbf{E}"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "e"
},
{
"math_id": 7,
"text": "E_{\\rm F} = \\frac{\\hbar^2}{2m_e}\\left(3\\pi^2n\\right)^\\frac{2}{3},"
},
{
"math_id": 8,
"text": "\\hbar"
},
{
"math_id": 9,
"text": "g(E) = \\frac{m_e}{\\pi^2\\hbar^3}\\sqrt{2m_eE} = \\frac{3}{2}\\frac{n}{E_{\\rm F}}\\sqrt{\\frac{E}{E_{\\rm F}}},"
},
{
"math_id": 10,
"text": "E \\geq 0"
},
{
"math_id": 11,
"text": "\\mu"
},
{
"math_id": 12,
"text": "E_{\\rm F}"
},
{
"math_id": 13,
"text": "T>0"
},
{
"math_id": 14,
"text": "E_{\\rm F}(T) = E_{\\rm F}(T=0) \\left[1 - \\frac{\\pi ^2}{12} \\left(\\frac{T}{T_{\\rm F}}\\right) ^2 - \\frac{\\pi^4}{80} \\left(\\frac{T}{T_{\\rm F}}\\right)^4 + \\cdots \\right], "
},
{
"math_id": 15,
"text": "T"
},
{
"math_id": 16,
"text": "T_{\\rm F} = E_{\\rm F}/k_{\\rm B}"
},
{
"math_id": 17,
"text": "k_{\\rm B}"
},
{
"math_id": 18,
"text": "E_{\\rm F}(T=0)"
},
{
"math_id": 19,
"text": "E_{\\rm F}(T>0)"
},
{
"math_id": 20,
"text": "T = 0"
},
{
"math_id": 21,
"text": "u(0) = \\frac{3}{5}nE_{\\rm F},"
},
{
"math_id": 22,
"text": "\\frac{3}{2}k_{\\rm B}T"
},
{
"math_id": 23,
"text": "P = -\\left(\\frac{\\partial U}{\\partial V}\\right)_{T,\\mu} = \\frac{2}{3}u(0),"
},
{
"math_id": 24,
"text": "V"
},
{
"math_id": 25,
"text": "U(T) = u(T) V"
},
{
"math_id": 26,
"text": "B = -V\\left(\\frac{\\partial P}{\\partial V}\\right)_{T,\\mu} = \\frac{5}{3}P = \\frac{2}{3}nE_{\\rm F}."
},
{
"math_id": 27,
"text": "\\chi=\\frac{2}{3}\\mu_0\\mu_\\mathrm{B}^2g(E_\\mathrm{F}),"
},
{
"math_id": 28,
"text": "\\mu_0"
},
{
"math_id": 29,
"text": "\\mu_{\\rm B}"
},
{
"math_id": 30,
"text": "3nk_{\\rm B}"
},
{
"math_id": 31,
"text": "c^\\text{Drude}_V = \\frac{3}{2}nk_{\\rm B}"
},
{
"math_id": 32,
"text": "c_V=\\left(\\frac{\\partial u}{\\partial T}\\right)_{n}=\\frac{\\pi^2}{2}\\frac{T}{T_{\\rm F}} nk_{\\rm B}"
},
{
"math_id": 33,
"text": "nk_B"
},
{
"math_id": 34,
"text": "c^{\\text{Drude}}_V"
},
{
"math_id": 35,
"text": "T"
},
{
"math_id": 36,
"text": "c_V\\approx\\gamma T + AT^3"
},
{
"math_id": 37,
"text": "\\gamma"
},
{
"math_id": 38,
"text": "A"
},
{
"math_id": 39,
"text": "\\lambda=v_{\\rm F}\\tau"
},
{
"math_id": 40,
"text": "v_{\\rm F}=\\sqrt{2E_{\\rm F}/m_e}"
},
{
"math_id": 41,
"text": "\\kappa=c_V \\tau\\langle v^2\\rangle/3 "
},
{
"math_id": 42,
"text": "\\langle v^2\\rangle^{1/2} "
},
{
"math_id": 43,
"text": "\\frac \\kappa \\sigma = \\frac{m_{\\rm e}c_V \\langle v^2 \\rangle }{3n e^2} = L T"
},
{
"math_id": 44,
"text": "L "
},
{
"math_id": 45,
"text": "L=\\left\\{\\begin{matrix}\\displaystyle \\frac{3}{2}\\left(\\frac{k_{\\rm B}}{e}\\right)^2\\;, & \\text{Drude}\\\\\n\\displaystyle\\frac{\\pi^2}{3}\\left(\\frac{k_{\\rm B}}{e}\\right)^2\\;,&\\text{free electron model.}\n\\end{matrix}\\right."
},
{
"math_id": 46,
"text": "L=2.44\\times10^{-8} "
},
{
"math_id": 47,
"text": "\\nabla V =-S \\nabla T"
},
{
"math_id": 48,
"text": "S=-{c_{\\rm V}}/{|ne|}"
}
] | https://en.wikipedia.org/wiki?curid=1037854 |
1038048 | Iterated logarithm | Inverse function to a tower of powers
In computer science, the iterated logarithm of formula_0, written log* formula_0 (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to formula_1. The simplest formal definition is the result of this recurrence relation:
formula_2
In computer science, lg* is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base formula_3) instead of the natural logarithm (with base "e"). Mathematically, the iterated logarithm is well defined for any base greater than formula_4, not only for base formula_3 and base "e". The "super-logarithm" function formula_5 is "essentially equivalent" to the base formula_6 iterated logarithm (although differing in minor details of rounding) and forms an inverse to the operation of tetration.
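A direct Python transcription of the recurrence above (an added illustration; floating-point logarithms are accurate enough for the integer inputs shown, though exact arithmetic would be needed for values extremely close to a power tower):

```python
import math

def log_star(n, base=2):
    """Iterated logarithm: how many times log_base must be applied until the value is <= 1."""
    count = 0
    x = n
    while x > 1:
        x = math.log(x, base)
        count += 1
    return count

print([log_star(n) for n in (2, 4, 16, 65536)])   # [1, 2, 3, 4]
print(log_star(10**100))                          # still only 5
```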
Analysis of algorithms.
The iterated logarithm is useful in analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms such as:
The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself, or repeats of it. This is because tetration grows much faster than the iterated exponential:
formula_7
so the inverse grows much more slowly: formula_8.
For all values of "n" relevant to counting the running times of algorithms implemented in practice (i.e., "n" ≤ 265536, which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base 2 has a value no more than 5.
Higher bases give smaller iterated logarithms.
Other applications.
The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. The additive persistence of a number, the number of times one must replace the number by the sum of its digits before reaching its digital root, is formula_9.
In computational complexity theory, Santhanam shows that the computational resources DTIME — computation time for a deterministic Turing machine — and NTIME — computation time for a non-deterministic Turing machine — are distinct up to formula_10
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "1"
},
{
"math_id": 2,
"text": "\n \\log^* n :=\n \\begin{cases}\n 0 & \\mbox{if } n \\le 1; \\\\\n 1 + \\log^*(\\log n) & \\mbox{if } n > 1\n \\end{cases}\n "
},
{
"math_id": 3,
"text": "2"
},
{
"math_id": 4,
"text": "e^{1/e} \\approx 1.444667"
},
{
"math_id": 5,
"text": "\\mathrm {slog}_b(n)"
},
{
"math_id": 6,
"text": "b"
},
{
"math_id": 7,
"text": "{^{y}b} = \\underbrace{b^{b^{\\cdot^{\\cdot^{b}}}}}_y \\gg \\underbrace{b^{b^{\\cdot^{\\cdot^{b^{y}}}}}}_n"
},
{
"math_id": 8,
"text": "\\log_b^* x \\ll \\log_b^n x"
},
{
"math_id": 9,
"text": "O(\\log^* n)"
},
{
"math_id": 10,
"text": "n\\sqrt{\\log^*n}."
}
] | https://en.wikipedia.org/wiki?curid=1038048 |
1038257 | Cauchy–Euler equation | Ordinary differential equation
In mathematics, an Euler–Cauchy equation, or Cauchy–Euler equation, or simply Euler's equation, is a linear homogeneous ordinary differential equation with variable coefficients. It is sometimes referred to as an "equidimensional" equation. Because of its particularly simple equidimensional structure, the differential equation can be solved explicitly.
The equation.
Let "y"("n")("x") be the "n"th derivative of the unknown function "y"("x"). Then a Cauchy–Euler equation of order "n" has the form
formula_0
The substitution formula_1 (that is, formula_2; for formula_3, in which one might replace all instances of formula_4 by formula_5, extending the solution's domain to formula_6) can be used to reduce this equation to a linear differential equation with constant coefficients. Alternatively, the trial solution formula_7 can be used to solve the equation directly, yielding the basic solutions.
Second order – solving through trial solution.
The most common Cauchy–Euler equation is the second-order equation, which appears in a number of physics and engineering applications, such as when solving Laplace's equation in polar coordinates. The second order Cauchy–Euler equation is
formula_8
We assume a trial solution formula_9
Differentiating gives formula_10 and formula_11
Substituting into the original equation leads to requiring that
formula_12
Rearranging and factoring gives the indicial equation
formula_13
We then solve for "m". There are three cases of interest:
In case 1, the solution is formula_14
In case 2, the solution is formula_15
To get to this solution, the method of reduction of order must be applied, after having found one solution "y" = "x""m".
In case 3, the solution is
formula_16
where formula_17, formula_18, and formula_19.
This form of the solution is derived by setting "x" = "e""t" and using Euler's formula.
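As a small illustrative sketch (not from the article; the function name is hypothetical), the coefficients "a" and "b" of the second-order equation determine which case applies via the roots of the indicial equation formula_13:

```python
import cmath

def cauchy_euler_general_solution(a, b):
    """Classify the roots of m**2 + (a - 1)*m + b = 0 and report the general solution form."""
    disc = (a - 1) ** 2 - 4 * b
    m1 = (-(a - 1) + cmath.sqrt(disc)) / 2
    m2 = (-(a - 1) - cmath.sqrt(disc)) / 2
    if disc > 0:      # case 1: distinct real roots
        return f"y = c1*x**({m1.real}) + c2*x**({m2.real})"
    if disc == 0:     # case 2: repeated real root
        return f"y = (c1 + c2*ln(x)) * x**({m1.real})"
    alpha, beta = m1.real, abs(m1.imag)   # case 3: complex conjugate roots
    return f"y = x**({alpha}) * (c1*cos({beta}*ln(x)) + c2*sin({beta}*ln(x)))"

# Example: a = -3, b = 3 gives roots m = 1, 3, matching the worked example below.
```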
Second order – solution through change of variables.
formula_20
We operate the variable substitution defined by
formula_21
formula_22
Differentiating gives
formula_23
formula_24
Substituting formula_25 the differential equation becomes
formula_26
This equation in formula_25 is solved via its characteristic polynomial
formula_27
Now let formula_28 and formula_29 denote the two roots of this polynomial. We analyze the case in which there are distinct roots and the case in which there is a repeated root:
If the roots are distinct, the general solution is formula_30 where the exponentials may be complex.
If the roots are equal, the general solution is formula_31
In both cases, the solution formula_32 can be found by setting formula_33.
Hence, in the first case, formula_34 and in the second case, formula_35
Second order - solution using differential operators.
Observe that we can write the second-order Cauchy-Euler equation in terms of a linear differential operator formula_36 as formula_37 where formula_38 and formula_39 is the identity operator.
We express the above operator as a polynomial in formula_40, rather than formula_41. By the product rule, formula_42 So, formula_43
We can then use the quadratic formula to factor this operator into linear terms. More specifically, let formula_44 denote the (possibly equal) values of formula_45 Then, formula_46
It can be seen that these factors commute, that is formula_47. Hence, if formula_48, the solution to formula_49 is a linear combination of the solutions to each of formula_50 and formula_51, which can be solved by separation of variables.
Indeed, with formula_52, we have formula_53. So, formula_54 Thus, the general solution is formula_55.
If formula_56, then we instead need to consider the solution of formula_57. Let formula_58, so that we can write formula_59 As before, the solution of formula_60 is of the form formula_61. So, we are left to solve formula_62 We then rewrite the equation as formula_63 which one can recognize as being amenable to solution via an integrating factor.
Choose formula_64 as our integrating factor. Multiplying our equation through by formula_65 and recognizing the left-hand side as the derivative of a product, we then obtain formula_66
Example.
Given
formula_67
we substitute the simple solution "x""m":
formula_68
For "x""m" to be a solution, either "x" = 0, which gives the trivial solution, or the coefficient of "x""m" is zero. Solving the quadratic equation, we get "m" = 1, 3. The general solution is therefore
formula_69
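As a quick check (an illustrative sketch, assuming the SymPy library is available), a computer algebra system recovers the same general solution:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
u = sp.Function('u')
ode = sp.Eq(x**2 * u(x).diff(x, 2) - 3 * x * u(x).diff(x) + 3 * u(x), 0)
print(sp.dsolve(ode, u(x)))   # expect u(x) = C1*x + C2*x**3
```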
Difference equation analogue.
There is a difference equation analogue to the Cauchy–Euler equation. For a fixed "m" > 0, define the sequence "f""m"("n") as
formula_70
Applying the difference operator to formula_71, we find that
formula_72
If we do this k times, we find that
formula_73
where the superscript ("k") denotes applying the difference operator k times. Comparing this to the fact that the k-th derivative of "x""m" equals
formula_74
suggests that we can solve the "N"-th order difference equation
formula_75
in a similar manner to the differential equation case. Indeed, substituting the trial solution
formula_76
brings us to the same situation as the differential equation case,
formula_77
One may now proceed as in the differential equation case, since the general solution of an N-th order linear difference equation is also the linear combination of N linearly independent solutions. Applying reduction of order in case of a multiple root "m"1 will yield expressions involving a discrete version of ln,
formula_78
This discrete version of ln may be compared with its continuous analogue, formula_79. In cases where fractions become involved, one may use formula_80 instead (or simply use it in all cases), which coincides with the definition before for integer m. | [
{
"math_id": 0,
"text": "a_{n} x^n y^{(n)}(x) + a_{n-1} x^{n-1} y^{(n-1)}(x) + \\dots + a_0 y(x) = 0."
},
{
"math_id": 1,
"text": "x = e^u"
},
{
"math_id": 2,
"text": "u = \\ln(x)"
},
{
"math_id": 3,
"text": "x < 0"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "|x|"
},
{
"math_id": 6,
"text": "\\reals \\setminus \\{0\\}"
},
{
"math_id": 7,
"text": "y = x^m"
},
{
"math_id": 8,
"text": "x^2\\frac{d^2y}{dx^2} + ax\\frac{dy}{dx} + by = 0."
},
{
"math_id": 9,
"text": "y = x^m."
},
{
"math_id": 10,
"text": "\\frac{dy}{dx} = mx^{m-1} "
},
{
"math_id": 11,
"text": "\\frac{d^2y}{dx^2} = m\\left(m-1\\right)x^{m-2}. "
},
{
"math_id": 12,
"text": "x^2\\left( m\\left(m-1 \\right)x^{m-2} \\right) + ax\\left( mx^{m-1} \\right) + b\\left( x^m \\right) = 0"
},
{
"math_id": 13,
"text": "m^2 + \\left(a-1\\right)m + b = 0."
},
{
"math_id": 14,
"text": "y = c_1 x^{m_1} + c_2 x^{m_2}"
},
{
"math_id": 15,
"text": "y = c_1 x^m \\ln(x) + c_2 x^m "
},
{
"math_id": 16,
"text": "y = c_1 x^\\alpha \\cos(\\beta \\ln(x)) + c_2 x^\\alpha \\sin(\\beta \\ln(x)) "
},
{
"math_id": 17,
"text": "\\alpha = \\operatorname{Re}(m)"
},
{
"math_id": 18,
"text": "\\beta = \\operatorname{Im}(m)"
},
{
"math_id": 19,
"text": "c_1, c_2 \\isin \\R"
},
{
"math_id": 20,
"text": "x^2\\frac{d^2y}{dx^2} +ax\\frac{dy}{dx} + by = 0 "
},
{
"math_id": 21,
"text": "t = \\ln(x). "
},
{
"math_id": 22,
"text": "y(x) = \\varphi(\\ln(x)) = \\varphi(t). "
},
{
"math_id": 23,
"text": "\\frac{dy}{dx}=\\frac{1}{x}\\frac{d\\varphi}{dt}"
},
{
"math_id": 24,
"text": "\\frac{d^2y}{dx^2}=\\frac{1}{x^2}\\left(\\frac{d^2\\varphi}{dt^2}-\\frac{d\\varphi}{dt}\\right)."
},
{
"math_id": 25,
"text": "\\varphi(t)"
},
{
"math_id": 26,
"text": "\\frac{d^2\\varphi}{dt^2} + (a-1)\\frac{d\\varphi}{dt} + b\\varphi = 0."
},
{
"math_id": 27,
"text": "\\lambda^2 + (a-1)\\lambda + b = 0."
},
{
"math_id": 28,
"text": "\\lambda_1"
},
{
"math_id": 29,
"text": "\\lambda_2"
},
{
"math_id": 30,
"text": "\\varphi(t)=c_1 e^{\\lambda_1 t} + c_2 e^{\\lambda_2 t},"
},
{
"math_id": 31,
"text": "\\varphi(t)=c_1 e^{\\lambda_1 t} + c_2 t e^{\\lambda_1 t}."
},
{
"math_id": 32,
"text": "y(x)"
},
{
"math_id": 33,
"text": "t = \\ln(x)"
},
{
"math_id": 34,
"text": "y(x) = c_1 x^{\\lambda_1} + c_2 x^{\\lambda_2},"
},
{
"math_id": 35,
"text": "y(x) = c_1 x^{\\lambda_1} + c_2 \\ln(x) x^{\\lambda_1}."
},
{
"math_id": 36,
"text": " L "
},
{
"math_id": 37,
"text": "Ly = (x^2 D^2 + axD + bI)y = 0,"
},
{
"math_id": 38,
"text": " D = \\frac{d}{dx} "
},
{
"math_id": 39,
"text": " I "
},
{
"math_id": 40,
"text": " xD "
},
{
"math_id": 41,
"text": " D "
},
{
"math_id": 42,
"text": " (x D)^2 = x D(x D) = x(D + x D^2) = x^2D^2 + x D."
},
{
"math_id": 43,
"text": " L = (xD)^2 + (a-1)(xD) + bI."
},
{
"math_id": 44,
"text": " \\lambda_1, \\lambda_2 "
},
{
"math_id": 45,
"text": "-\\frac{a-1}{2} \\pm \\frac{1}{2}\\sqrt{(a-1)^2 - 4b}. "
},
{
"math_id": 46,
"text": "L = (xD - \\lambda_1 I)(xD - \\lambda_2 I)."
},
{
"math_id": 47,
"text": "(xD - \\lambda_1 I)(xD - \\lambda_2 I) = (xD - \\lambda_2 I)(xD - \\lambda_1 I)"
},
{
"math_id": 48,
"text": " \\lambda_1 \\neq \\lambda_2 "
},
{
"math_id": 49,
"text": " Ly = 0 "
},
{
"math_id": 50,
"text": " (xD - \\lambda_1 I)y = 0 "
},
{
"math_id": 51,
"text": " (xD - \\lambda_2 I)y = 0 "
},
{
"math_id": 52,
"text": " i \\in \\{1,2\\} "
},
{
"math_id": 53,
"text": " (xD - \\lambda_i I)y = x\\frac{dy}{dx} - \\lambda_i y = 0 "
},
{
"math_id": 54,
"text": "\\begin{align} x\\frac{dy}{dx} &= \\lambda_i y\\\\ \\int \\frac{1}{y}\\, dy &= \\lambda_i \\int \\frac{1}{x}\\, dx\\\\ \\ln y &= \\lambda_i \\ln x + C\\\\ y &= c_i e^{\\lambda_i \\ln x} = c_i x^{\\lambda_i}.\\end{align}"
},
{
"math_id": 55,
"text": " y = c_1 x^{\\lambda_1} + c_2 x^{\\lambda_2} "
},
{
"math_id": 56,
"text": " \\lambda = \\lambda_1 = \\lambda_2 "
},
{
"math_id": 57,
"text": "(xD - \\lambda I)^2y = 0 "
},
{
"math_id": 58,
"text": " z = (xD-\\lambda I)y "
},
{
"math_id": 59,
"text": " (xD - \\lambda I)^2y = (xD - \\lambda I)z = 0."
},
{
"math_id": 60,
"text": " (xD- \\lambda I)z = 0 "
},
{
"math_id": 61,
"text": " z = c_1x^\\lambda "
},
{
"math_id": 62,
"text": " (xD - \\lambda I)y = x\\frac{dy}{dx} - \\lambda y = c_1x^\\lambda."
},
{
"math_id": 63,
"text": " \\frac{dy}{dx} - \\frac{\\lambda}{x} y = c_1x^{\\lambda-1},"
},
{
"math_id": 64,
"text": " M(x) = x^{-\\lambda} "
},
{
"math_id": 65,
"text": " M(x) "
},
{
"math_id": 66,
"text": "\\begin{align} \\frac{d}{dx}(x^{-\\lambda} y) &= c_1x^{-1}\\\\ x^{-\\lambda} y &= \\int c_1x^{-1}\\, dx\\\\ y &= x^\\lambda (c_1\\ln(x) + c_2)\\\\ &= c_1\\ln(x)x^\\lambda +c_2 x^\\lambda.\\end{align}"
},
{
"math_id": 67,
"text": "x^2 u'' - 3xu' + 3u = 0\\,,"
},
{
"math_id": 68,
"text": "x^2\\left(m\\left(m-1\\right)x^{m-2}\\right)-3x\\left(m x^{m-1}\\right) + 3x^m = m\\left(m-1\\right)x^m - 3m x^m+3x^m = \\left(m^2 - 4m + 3\\right)x^m = 0\\,."
},
{
"math_id": 69,
"text": "u=c_1 x+c_2 x^3\\,."
},
{
"math_id": 70,
"text": "f_m(n) := n (n+1) \\cdots (n+m-1)."
},
{
"math_id": 71,
"text": "f_m"
},
{
"math_id": 72,
"text": "\\begin{align}\nDf_m(n) & = f_{m}(n+1) - f_m(n) \\\\\n& = m(n+1)(n+2) \\cdots (n+m-1) = \\frac{m}{n} f_m(n).\n\\end{align}"
},
{
"math_id": 73,
"text": "\\begin{align}\nf_m^{(k)}(n) & = \\frac{m(m-1)\\cdots(m-k+1)}{n(n+1)\\cdots(n+k-1)} f_m(n) \\\\\n& = m(m-1)\\cdots(m-k+1) \\frac{f_m(n)}{f_k(n)},\n\\end{align}"
},
{
"math_id": 74,
"text": "m(m-1) \\cdots (m-k+1)\\frac{x^m}{x^k}"
},
{
"math_id": 75,
"text": "f_N(n) y^{(N)}(n) + a_{N-1} f_{N-1}(n) y^{(N-1)}(n) + \\cdots + a_0 y(n) = 0,"
},
{
"math_id": 76,
"text": "y(n) = f_m(n) "
},
{
"math_id": 77,
"text": "m(m-1)\\cdots(m-N+1) + a_{N-1} m(m-1) \\cdots (m-N+2) + \\dots + a_1 m + a_0 = 0."
},
{
"math_id": 78,
"text": "\\varphi(n) = \\sum_{k=1}^n \\frac{1}{k - m_1}."
},
{
"math_id": 79,
"text": "\\ln (x - m_1) = \\int_{1+m_1}^x \\frac{dt}{t - m_1} ."
},
{
"math_id": 80,
"text": "f_m(n) := \\frac{\\Gamma(n+m)}{\\Gamma(n)}"
}
] | https://en.wikipedia.org/wiki?curid=1038257 |
10384478 | Triple correlation | The triple correlation of an ordinary function on the real line is the integral of the product of that function with two independently shifted copies of itself:
formula_0
The Fourier transform of the triple correlation is the bispectrum. The triple correlation extends the concept of autocorrelation, which correlates a function with a single shifted copy of itself and thereby enhances its latent periodicities.
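For a discretely sampled signal, the definition above can be evaluated directly. The sketch below (not from the source; the function name is illustrative) uses circular shifts, so it computes a periodic version of the triple correlation:

```python
import numpy as np

def triple_correlation(f):
    """Periodic (circular) triple correlation of a 1-D array f.

    Entry [s1, s2] is sum_x conj(f[x]) * f[x + s1] * f[x + s2], with indices taken mod len(f).
    """
    n = len(f)
    out = np.empty((n, n), dtype=complex)
    for s1 in range(n):
        for s2 in range(n):
            out[s1, s2] = np.sum(np.conj(f) * np.roll(f, -s1) * np.roll(f, -s2))
    return out

# Up to normalization and sign conventions, the 2-D DFT of this array,
# np.fft.fft2(triple_correlation(f)), is the (discrete) bispectrum of f.
```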
History.
The theory of the triple correlation was first investigated by statisticians examining the cumulant structure of non-Gaussian random processes. It was also independently studied by physicists as a tool for spectroscopy of laser beams. Hideya Gamo in 1963 described an apparatus for measuring the triple correlation of a laser beam, and also showed how phase information can be recovered from the real part of the bispectrum—up to sign reversal and linear offset. However, Gamo's method implicitly requires the Fourier transform to never be zero at any frequency. This requirement was relaxed, and the class of functions which are known to be uniquely identified by their triple (and higher-order) correlations was considerably expanded, by the study of Yellott and Iverson (1992). Yellott & Iverson also pointed out the connection between triple correlations and the visual texture discrimination theory proposed by Bela Julesz.
Applications.
Triple correlation methods are frequently used in signal processing for treating signals that are corrupted by additive white Gaussian noise; in particular, triple correlation techniques are suitable when multiple observations of the signal are available and the signal may be translating in between the observations, e.g., a sequence of images of an object translating on a noisy background. What makes the triple correlation particularly useful for such tasks are three properties: (1) it is invariant under translation of the underlying signal; (2) it is unbiased in additive Gaussian noise; and (3) it retains nearly all of the relevant phase information in the underlying signal. Properties (1)-(3) of the triple correlation extend in many cases to functions on an arbitrary locally compact group, in particular to the groups of rotations and rigid motions of euclidean space that arise in computer vision and signal processing.
Extension to groups.
The triple correlation may be defined for any locally compact group by using the group's left-invariant Haar measure. It is easily shown that the resulting object is invariant under left translation of the underlying function and unbiased in additive Gaussian noise. What is more interesting is the question of uniqueness : when two functions have the same triple correlation, how are the functions related? For many cases of practical interest, the triple correlation of a function on an abstract group uniquely identifies that function up to a single unknown group action. This uniqueness is a mathematical result that relies on the Pontryagin duality theorem, the Tannaka–Krein duality theorem, and related results of Iwahori-Sugiura, and Tatsuuma. Algorithms exist for recovering bandlimited functions from their triple correlation on Euclidean space, as well as rotation groups in two and three dimensions. There is also an interesting link with Wiener's tauberian theorem: any function whose translates are dense in formula_1, where formula_2 is a locally compact Abelian group, is also uniquely identified by its triple correlation. | [
{
"math_id": 0,
"text": "\n\n \\int_{-\\infty}^{\\infty} f^{*}(x) f(x+s_1) f(x+s_2) dx.\n\n"
},
{
"math_id": 1,
"text": "L_1(G)"
},
{
"math_id": 2,
"text": "G"
}
] | https://en.wikipedia.org/wiki?curid=10384478 |
1038753 | Cut rule | In mathematical logic, the cut rule is an inference rule of sequent calculus. It is a generalisation of the classical modus ponens inference rule. Its meaning is that, if a formula "A" appears as a conclusion in one proof and a hypothesis in another, then another proof in which the formula "A" does not appear can be deduced. In the particular case of the modus ponens, for example occurrences of "man" are eliminated of "Every man is mortal, Socrates is a man" to deduce "Socrates is mortal".
Formal notation.
In sequent calculus notation, the cut rule reads:
formula_0
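For illustration (not part of the original text), taking Γ, Γ′ and Δ to be empty and Δ′ = {"B"} specializes the cut rule to the sequent-calculus form of modus ponens; a sketch in LaTeX notation:

```latex
% Cut with \Gamma = \Gamma' = \Delta = \emptyset and \Delta' = \{B\}:
\begin{array}{c}
  \vdash A \qquad A \vdash B \\ \hline
  \vdash B
\end{array}
% i.e. from a proof of A, and a proof of B from hypothesis A, one obtains a proof of B.
```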
Elimination.
The cut rule is the subject of an important theorem, the cut-elimination theorem. It states that any sequent that has a proof in the sequent calculus making use of the cut rule also has a cut-free proof, that is, a proof that does not make use of the cut rule. | [
{
"math_id": 0,
"text": "\n\\begin{array}{c}\\Gamma \\vdash A, \\Delta \\quad \\Gamma', A \\vdash \\Delta' \\\\ \\hline \\Gamma, \\Gamma' \\vdash \\Delta, \\Delta'\\end{array} "
}
] | https://en.wikipedia.org/wiki?curid=1038753 |
10387769 | Tangential and normal components | In mathematics, given a vector at a point on a curve, that vector can be decomposed uniquely as a sum of two vectors, one tangent to the curve, called the tangential component of the vector, and another one perpendicular to the curve, called the normal component of the vector. Similarly, a vector at a point on a surface can be broken down the same way.
More generally, given a submanifold "N" of a manifold "M", and a vector in the tangent space to "M" at a point of "N", it can be decomposed into the component tangent to "N" and the component normal to "N".
Formal definition.
Surface.
More formally, let formula_0 be a surface, and formula_1 be a point on the surface. Let formula_2 be a vector at formula_1. Then one can write uniquely formula_2 as a sum
formula_3
where the first vector in the sum is the tangential component and the second one is the normal component. It follows immediately that these two vectors are perpendicular to each other.
To calculate the tangential and normal components, consider a unit normal to the surface, that is, a unit vector formula_4 perpendicular to formula_0 at formula_1. Then,
formula_5
and thus
formula_6
where "formula_7" denotes the dot product. Another formula for the tangential component is
formula_8
where "formula_9" denotes the cross product.
These formulas do not depend on the particular unit normal formula_4 used (there exist two unit normals to any surface at a given point, pointing in opposite directions, so one of the unit normals is the negative of the other one).
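A numerical sketch (illustrative, not from the article; `decompose` is a hypothetical helper name): given a vector and any unit normal at the point, the formulas above can be evaluated directly.

```python
import numpy as np

def decompose(v, n_hat):
    """Split v into tangential and normal components relative to the unit normal n_hat."""
    n_hat = n_hat / np.linalg.norm(n_hat)    # ensure unit length
    v_perp = np.dot(v, n_hat) * n_hat        # normal component (v . n) n
    v_par = v - v_perp                       # tangential component
    # In 3 dimensions the tangential part also equals -n x (n x v):
    # v_par == -np.cross(n_hat, np.cross(n_hat, v))
    return v_par, v_perp

v_par, v_perp = decompose(np.array([1.0, 2.0, 3.0]), np.array([0.0, 0.0, 1.0]))
# v_par == [1, 2, 0], v_perp == [0, 0, 3]
```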
Submanifold.
More generally, given a submanifold "N" of a manifold "M" and a point formula_10, we get a short exact sequence involving the tangent spaces:
formula_11
The quotient space formula_12 is a generalized space of normal vectors.
If "M" is a Riemannian manifold, the above sequence splits, and the tangent space of "M" at "p" decomposes as a direct sum of the component tangent to "N" and the component normal to "N":
formula_13
Thus every tangent vector formula_14 splits as formula_15, where formula_16 and formula_17.
Computations.
Suppose "N" is given by non-degenerate equations.
If "N" is given explicitly, via parametric equations (such as a parametric curve), then the derivative gives a spanning set for the tangent bundle (it is a basis if and only if the parametrization is an immersion).
If "N" is given implicitly (as in the above description of a surface, (or more generally as) a hypersurface) as a level set or intersection of level surfaces for formula_18, then the gradients of formula_18 span the normal space.
In both cases, we can again compute using the dot product; the cross product is special to 3 dimensions however. | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "\\mathbf{v}"
},
{
"math_id": 3,
"text": "\\mathbf{v} = \\mathbf{v}_\\parallel + \\mathbf{v}_\\perp"
},
{
"math_id": 4,
"text": "\\hat\\mathbf{n}"
},
{
"math_id": 5,
"text": "\\mathbf{v}_\\perp = \\left(\\mathbf{v} \\cdot \\hat\\mathbf{n}\\right) \\hat\\mathbf{n}"
},
{
"math_id": 6,
"text": "\\mathbf{v}_\\parallel = \\mathbf{v} - \\mathbf{v}_\\perp"
},
{
"math_id": 7,
"text": "\\cdot"
},
{
"math_id": 8,
"text": "\\mathbf{v}_\\parallel = -\\hat\\mathbf{n} \\times (\\hat\\mathbf{n}\\times\\mathbf{v}),"
},
{
"math_id": 9,
"text": "\\times"
},
{
"math_id": 10,
"text": "p \\in N"
},
{
"math_id": 11,
"text": "T_p N \\to T_p M \\to T_p M / T_p N"
},
{
"math_id": 12,
"text": "T_p M / T_p N"
},
{
"math_id": 13,
"text": "T_p M = T_p N \\oplus N_p N := (T_p N)^\\perp"
},
{
"math_id": 14,
"text": "v \\in T_p M"
},
{
"math_id": 15,
"text": "v = v_\\parallel + v_\\perp"
},
{
"math_id": 16,
"text": "v_\\parallel \\in T_p N"
},
{
"math_id": 17,
"text": "v_\\perp \\in N_p N := (T_p N)^\\perp"
},
{
"math_id": 18,
"text": "g_i"
}
] | https://en.wikipedia.org/wiki?curid=10387769 |
10388995 | Majority problem | The majority problem, or density classification task, is the problem of finding one-dimensional cellular automaton rules that accurately perform majority voting.
Using local transition rules, cells cannot know the total count of all the ones in the system. In order to count the number of ones (or, by symmetry, the number of zeros), the system requires a logarithmic number of bits in the total size of the system. It also requires that the system send messages over a distance linear in the size of the system and that it recognize a non-regular language. Thus, this problem is an important test case in measuring the computational power of cellular automaton systems.
Problem statement.
Given a configuration of a two-state cellular automaton with "i" + "j" cells total, "i" of which are in the zero state and "j" of which are in the one state, a correct solution to the voting problem must eventually set all cells to zero if "i" > "j" and must eventually set all cells to one if "i" < "j". The desired eventual state is unspecified if "i" = "j".
The problem can also be generalized to testing whether the proportion of zeros and ones is above or below some threshold other than 50%. In this generalization, one is also given
a threshold formula_0; a correct solution to the voting problem must eventually set all cells to zero if formula_1 and must eventually set all cells to one if formula_2. The desired eventual state is unspecified if formula_3.
Approximate solutions.
Gács, Kurdyumov, and Levin found an automaton that, although it does not always solve the majority problem correctly, does so in many cases. In their approach to the problem,
the quality of a cellular automaton rule is measured by the fraction of the formula_4 possible starting configurations that it correctly classifies.
The rule proposed by Gacs, Kurdyumov, and Levin sets the state of each cell as follows. If a cell is 0, its next state is formed as the majority among the values of itself, its immediate neighbor to the left, and its neighbor three spaces to the left. If, on the other hand, a cell is 1, its next state is formed symmetrically, as the majority among the values of itself, its immediate neighbor to the right, and its neighbor three spaces to the right. In randomly generated instances, this achieves about 78% accuracy in correctly determining the majority.
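The rule just described is easy to simulate. The following sketch (illustrative, with circular boundary conditions; the function name is hypothetical) applies one synchronous update to an array of 0/1 cells:

```python
import numpy as np

def gkl_step(cells):
    """One update of the Gacs-Kurdyumov-Levin rule on a cyclic 0/1 array."""
    c = np.asarray(cells)
    left1, left3 = np.roll(c, 1), np.roll(c, 3)      # neighbours 1 and 3 positions to the left
    right1, right3 = np.roll(c, -1), np.roll(c, -3)  # neighbours 1 and 3 positions to the right
    maj_left = (c + left1 + left3) >= 2              # majority of self and the two left neighbours
    maj_right = (c + right1 + right3) >= 2           # majority of self and the two right neighbours
    return np.where(c == 0, maj_left, maj_right).astype(int)

# Iterating gkl_step on a random initial configuration usually, though not always,
# drives every cell to the initial majority value.
```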
Das, Mitchell, and Crutchfield showed that it is possible to develop better rules using genetic algorithms.
Impossibility of a perfect classifier.
In 1995, Land and Belew showed that no two-state rule with radius "r" and density ρ correctly solves the voting problem on all starting configurations when the number of cells is sufficiently large (larger than about 4"r"/ρ).
Their argument shows that because the system is deterministic, every cell surrounded entirely by zeros or ones must then become a zero. Likewise, any perfect rule can never make the ratio of ones go above formula_0 if it was below (or vice versa). They then show that any assumed perfect rule will either cause an isolated one that pushed the ratio over formula_0 to be cancelled out or, if the ratio of ones is less than formula_0, will cause an isolated one to introduce spurious ones into a block of zeros causing the ratio of ones to become greater than formula_0.
In 2013, Busic, Fatès, Marcovici and Mairesse gave a simpler proof of the impossibility of a perfect density classifier, which holds both for deterministic and for stochastic cellular automata, and in any dimension.
Exact solution with alternative termination conditions.
As observed by Capcarrere, Sipper, and Tomassini, the majority problem may be solved perfectly if one relaxes the definition by which the automaton is said to have recognized the majority. In particular, for the Rule 184 automaton, when run on a finite universe with cyclic boundary conditions, each cell will infinitely often remain in the majority state for two consecutive steps while only finitely many times being in the minority state for two consecutive steps.
Alternatively, a hybrid automaton that runs Rule 184 for a number of steps linear in the size of the array, and then switches to the majority rule (Rule 232), that sets each cell to the majority of itself and its neighbors, solves the majority problem with the standard recognition criterion of either all zeros or all ones in the final state. However, this machine is not itself a cellular automaton. Moreover, it has been shown that Fukś's composite rule is very sensitive to noise and cannot outperform the noisy Gacs-Kurdyumov-Levin automaton, an imperfect classifier, for any level of noise (e.g., from the environment or from dynamical mistakes).
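A sketch of this hybrid scheme (illustrative; function names are hypothetical, and the exact linear-in-"n" step counts of Fukś's construction are left as parameters rather than reproduced here):

```python
def eca_step(cells, rule):
    """One synchronous update of an elementary cellular automaton with cyclic boundary."""
    table = [(rule >> k) & 1 for k in range(8)]   # Wolfram rule number decoded as a lookup table
    n = len(cells)
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

def hybrid_classify(cells, steps_184, steps_232):
    """Run rule 184 (the traffic rule), then rule 232 (local majority)."""
    for _ in range(steps_184):
        cells = eca_step(cells, 184)
    for _ in range(steps_232):
        cells = eca_step(cells, 232)
    return cells
```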
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rho"
},
{
"math_id": 1,
"text": "\\tfrac{i}{i+j} < \\rho"
},
{
"math_id": 2,
"text": "\\tfrac{j}{i+j} > \\rho"
},
{
"math_id": 3,
"text": "\\tfrac{j}{i+j} = \\rho"
},
{
"math_id": 4,
"text": "2^{i+j}"
}
] | https://en.wikipedia.org/wiki?curid=10388995 |
10389277 | Mixed Hodge module | Mathematical concept
In mathematics, mixed Hodge modules are the culmination of Hodge theory, mixed Hodge structures, intersection cohomology, and the decomposition theorem yielding a coherent framework for discussing variations of degenerating mixed Hodge structures through the six functor formalism. Essentially, these objects are a pair of a filtered D-module formula_0 together with a perverse sheaf formula_1 such that the functor from the Riemann–Hilbert correspondence sends formula_0 to formula_1. This makes it possible to construct a Hodge structure on intersection cohomology, one of the key problems when the subject was discovered. This was solved by Morihiko Saito who found a way to use the filtration on a coherent D-module as an analogue of the Hodge filtration for a Hodge structure. This made it possible to give a Hodge structure on an intersection cohomology sheaf, the simple objects in the Abelian category of perverse sheaves.
Abstract structure.
Before going into the nitty-gritty details of defining mixed Hodge modules, which is quite elaborate, it is useful to get a sense of what the category of mixed Hodge modules actually provides. Given a complex algebraic variety formula_2, there is an abelian category formula_3 with the following functorial properties
In addition, there are the following categorical properties
For a morphism formula_14 of algebraic varieties, the associated six functors on formula_15 and formula_16 have the following properties
Relation between derived categories.
The derived category of mixed Hodge modules formula_15 is intimately related to the derived category of constructible sheaves formula_20, which is equivalent to the derived category of perverse sheaves. This is because of how the rationalization functor is compatible with the cohomology functor formula_21 of a complex formula_18 of mixed Hodge modules. When taking the rationalization, there is an isomorphism formula_22 for the middle perversity formula_23. Note that this is the function formula_24 sending formula_25, which differs from the case of pseudomanifolds, where the perversity is a function formula_26 with formula_27. Recall this is defined as taking the composition of perverse truncations with the shift functor, so formula_28 This kind of setup is also reflected in the derived push and pull functors formula_29 and with nearby and vanishing cycles formula_30: the rationalization functor takes these to their analogous perverse functors on the derived category of perverse sheaves.
Tate modules and cohomology.
Here we denote the canonical projection to a point by formula_31. One of the first mixed Hodge modules available is the weight 0 Tate object, denoted formula_32, which is defined as the pullback of its corresponding object in formula_33, so formula_34 It has weight zero, so formula_35 corresponds to the weight 0 Tate object formula_36 in the category of mixed Hodge structures. This object is useful because it can be used to compute the various cohomologies of formula_2 through the six functor formalism and give them a mixed Hodge structure. These can be summarized with the table formula_37 Moreover, given a closed embedding formula_38 there is the local cohomology group formula_39
Variations of Mixed Hodge structures.
For a morphism of varieties formula_40 the pushforward maps formula_41 and formula_42 give degenerating variations of mixed Hodge structures on formula_43. In order to better understand these variations, the decomposition theorem and intersection cohomology are required.
Intersection cohomology.
One of the defining features of the category of mixed Hodge modules is the fact that intersection cohomology can be phrased in its language. This makes it possible to use the decomposition theorem for maps formula_40 of varieties. To define the intersection complex, let formula_44 be the open smooth part of a variety formula_2. Then the intersection complex of formula_2 can be defined as formula_45 where formula_46 as with perverse sheaves. In particular, this setup can be used to show that the intersection cohomology groups formula_47 have a pure weight formula_48 Hodge structure.
{
"math_id": 0,
"text": "(M, F^\\bullet)"
},
{
"math_id": 1,
"text": "\\mathcal{F}"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "\\textbf{MHM}(X)"
},
{
"math_id": 4,
"text": "\\text{rat}_X:D^b\\textbf{MHM}(X) \\to D^b_{cs}(X;\\mathbb{Q})"
},
{
"math_id": 5,
"text": "\\text{Dmod}_X:D^b\\textbf{MHM}(X) \\to D^b_{coh}(\\mathcal{D}_X)"
},
{
"math_id": 6,
"text": "DR_X:D^b_{Coh}(\\mathcal{D}_X) \\to D^b_{cs}(X;\\mathbb{C})"
},
{
"math_id": 7,
"text": "M"
},
{
"math_id": 8,
"text": "\\alpha: \\text{rat}_X(M)\\otimes \\mathbb{C} \\xrightarrow{\\sim} \\text{DR}_X(\\text{Dmod}_X(M))"
},
{
"math_id": 9,
"text": "\\textbf{MHM}(\\{pt\\}) \\cong \\text{MHS}"
},
{
"math_id": 10,
"text": "W"
},
{
"math_id": 11,
"text": "\\text{Gr}_k^W(M)"
},
{
"math_id": 12,
"text": "\\mathbb{D}_X"
},
{
"math_id": 13,
"text": "D^b_{cs}(X;\\mathbb{Q})"
},
{
"math_id": 14,
"text": "f:X \\to Y"
},
{
"math_id": 15,
"text": "D^b\\textbf{MHM}(X)"
},
{
"math_id": 16,
"text": "D^b\\textbf{MHM}(Y)"
},
{
"math_id": 17,
"text": "f_!,f^*"
},
{
"math_id": 18,
"text": "M^\\bullet"
},
{
"math_id": 19,
"text": "f^!,f_*"
},
{
"math_id": 20,
"text": "D^b_{cs}(X;\\mathbb{Q}) \\cong D^b(\\text{Perv}(X;\\mathbb{Q}))"
},
{
"math_id": 21,
"text": "H^k"
},
{
"math_id": 22,
"text": "\\text{rat}_X(H^k(M^\\bullet)) = \\text{ }^\\mathbf{p}H^k(\\text{rat}_X(M^\\bullet))"
},
{
"math_id": 23,
"text": "\\mathbb{p}"
},
{
"math_id": 24,
"text": "\\mathbf{p}:2\\mathbb{N} \\to \\mathbb{Z}"
},
{
"math_id": 25,
"text": "\\mathbf{p}(2k) = -k"
},
{
"math_id": 26,
"text": "\\mathbb{p}:[2,n] \\to \\mathbb{Z}_{\\geq 0}"
},
{
"math_id": 27,
"text": "\\mathbf{p}(2k)=\\mathbf{p}(2k - 1) = k-1"
},
{
"math_id": 28,
"text": "\\text{ }^\\mathbf{p}H^k(\\text{rat}_X(M^\\bullet)) =\n\\text{ }^{\\mathbf{p}}\\tau_{\\leq 0}\\text{ }^{\\mathbf{p}}\\tau_{\\geq 0}\n(\\text{rat}_X(M^\\bullet)[+k])"
},
{
"math_id": 29,
"text": "f_!,f^*,f^!,f_*\n"
},
{
"math_id": 30,
"text": "\\psi_f, \\phi_f"
},
{
"math_id": 31,
"text": "p:X \\to \\{pt\\}"
},
{
"math_id": 32,
"text": "\\underline{\\mathbb{Q}}_X^{Hdg}"
},
{
"math_id": 33,
"text": "\\mathbb{Q}^{Hdg} \\in \\textbf{MHM}(\\{pt\\})"
},
{
"math_id": 34,
"text": "\\underline{\\mathbb{Q}}_X^{Hdg} = p^*\\mathbb{Q}^{Hdg}"
},
{
"math_id": 35,
"text": "\\mathbb{Q}^{Hdg}"
},
{
"math_id": 36,
"text": "\\mathbb{Q}(0)"
},
{
"math_id": 37,
"text": "\\begin{matrix}\nH^k(X;\\mathbb{Q}) &= H^k(\\{pt\\}, p_*p^*\\mathbb{Q}^{Hdg}) \\\\\nH^k_c(X;\\mathbb{Q}) &= H^k(\\{pt\\}, p_!p^*\\mathbb{Q}^{Hdg}) \\\\\nH_{-k}(X;\\mathbb{Q}) &= H^k(\\{pt\\}, p_!p^!\\mathbb{Q}^{Hdg}) \\\\\nH_{-k}^{BM}(X;\\mathbb{Q}) &= H^k(\\{pt\\}, p_!p^*\\mathbb{Q}^{Hdg})\n\\end{matrix}"
},
{
"math_id": 38,
"text": "i: Z \\to X"
},
{
"math_id": 39,
"text": "H^k_Z(X;\\mathbb{Q}) = H^k(\\{pt\\}, p_*i_*i^!\\underline{\\mathbb{Q}}_X^{Hdg})"
},
{
"math_id": 40,
"text": "f:X \\to Y "
},
{
"math_id": 41,
"text": "f_*\\underline{\\mathbb{Q}}^{Hdg}_X"
},
{
"math_id": 42,
"text": "f_!\\underline{\\mathbb{Q}}^{Hdg}_X"
},
{
"math_id": 43,
"text": "Y"
},
{
"math_id": 44,
"text": "j : U \\hookrightarrow X"
},
{
"math_id": 45,
"text": "IC_X^\\bullet\\mathbb{Q}^{Hdg} := j_{!*}\\underline{\\mathbb{Q}}_U^{Hdg}[d_X]"
},
{
"math_id": 46,
"text": "j_{!*}(\\underline{\\mathbb{Q}}_U^{Hdg}) = \\operatorname{Image}[j_!(\\underline{\\mathbb{Q}}_U^{Hdg}) \\to j_*(\\underline{\\mathbb{Q}}_U^{Hdg})]"
},
{
"math_id": 47,
"text": "IH^k(X) = H^k(p_*IC^\\bullet\\underline{\\mathbb{Q}}_X)"
},
{
"math_id": 48,
"text": "k"
}
] | https://en.wikipedia.org/wiki?curid=10389277 |
10389861 | Shimura variety | In number theory, a Shimura variety is a higher-dimensional analogue of a modular curve that arises as a quotient variety of a Hermitian symmetric space by a congruence subgroup of a reductive algebraic group defined over Q. Shimura varieties are not algebraic varieties but are families of algebraic varieties. Shimura curves are the one-dimensional Shimura varieties. Hilbert modular surfaces and Siegel modular varieties are among the best known classes of Shimura varieties.
Special instances of Shimura varieties were originally introduced by Goro Shimura in the course of his generalization of the complex multiplication theory. Shimura showed that while initially defined analytically, they are arithmetic objects, in the sense that they admit models defined over a number field, the reflex field of the Shimura variety. In the 1970s, Pierre Deligne created an axiomatic framework for the work of Shimura. In 1979, Robert Langlands remarked that Shimura varieties form a natural realm of examples for which equivalence between motivic and automorphic "L"-functions postulated in the Langlands program can be tested. Automorphic forms realized in the cohomology of a Shimura variety are more amenable to study than general automorphic forms; in particular, there is a construction attaching Galois representations to them.
Definition.
Shimura datum.
Let "S" = ResC/R "G""m" be the Weil restriction of the multiplicative group from complex numbers to real numbers. It is a real algebraic group, whose group of R-points, "S"(R), is C* and group of C-points is C*×C*. A Shimura datum is a pair ("G", "X") consisting of a (connected) reductive algebraic group "G" defined over the field Q of rational numbers and a "G"(R)-conjugacy class "X" of homomorphisms "h": "S" → "G"R satisfying the following axioms:
formula_0
where for any "z" ∈ "S", "h"("z") acts trivially on the first summand and via formula_1 (respectively, formula_2) on the second (respectively, third) summand.
It follows from these axioms that "X" has a unique structure of a complex manifold (possibly, disconnected) such that for every representation "ρ": "G"R → "GL"("V"), the family ("V", "ρ" ⋅ "h") is a holomorphic family of Hodge structures; moreover, it forms a variation of Hodge structure, and "X" is a finite disjoint union of hermitian symmetric domains.
Shimura variety.
Let A"ƒ" be the ring of finite adeles of Q. For every sufficiently small compact open subgroup "K" of "G"(A"ƒ"), the double coset space
formula_3
is a finite disjoint union of locally symmetric varieties of the form formula_4, where the plus superscript indicates a connected component. The varieties Sh"K"("G","X") are complex algebraic varieties and they form an inverse system over all sufficiently small compact open subgroups "K". This inverse system
formula_5
admits a natural right action of "G"(A"ƒ"). It is called the Shimura variety associated with the Shimura datum ("G", "X") and denoted Sh("G", "X").
History.
For special types of hermitian symmetric domains and congruence subgroups Γ, algebraic varieties of the form Γ \ "X" = Sh"K"("G","X") and their compactifications were introduced in a series of papers of Goro Shimura during the 1960s. Shimura's approach, later presented in his monograph, was largely phenomenological, pursuing the widest generalizations of the reciprocity law formulation of complex multiplication theory. In retrospect, the name "Shimura variety" was introduced by Deligne, who proceeded to isolate the abstract features that played a role in Shimura's theory. In Deligne's formulation, Shimura varieties are parameter spaces of certain types of Hodge structures. Thus they form a natural higher-dimensional generalization of modular curves viewed as moduli spaces of elliptic curves with level structure. In many cases, the moduli problems to which Shimura varieties are solutions have been likewise identified.
Examples.
Let "F" be a totally real number field and "D" a quaternion division algebra over "F". The multiplicative group "D"× gives rise to a canonical Shimura variety. Its dimension "d" is the number of infinite places over which "D" splits. In particular, if "d" = 1 (for example, if "F" = Q and "D" ⊗ R ≅ M2(R)), fixing a sufficiently small arithmetic subgroup of "D"×, one gets a Shimura curve, and curves arising from this construction are already compact (i.e. projective).
Some examples of Shimura curves with explicitly known equations are given by the Hurwitz curves of low genus, such as the Klein quartic (genus 3), the Macbeath curve (genus 7) and the first Hurwitz triplet (genus 14), and by the Fermat curve of degree 7.
Other examples of Shimura varieties include Picard modular surfaces and Hilbert modular surfaces, also known as Hilbert–Blumenthal varieties.
Canonical models and special points.
Each Shimura variety can be defined over a canonical number field "E" called the reflex field. This important result due to Shimura shows that Shimura varieties, which "a priori" are only complex manifolds, have an algebraic field of definition and, therefore, arithmetical significance. It forms the starting point in his formulation of the reciprocity law, where an important role is played by certain arithmetically defined special points.
The qualitative nature of the Zariski closure of sets of special points on a Shimura variety is described by the André–Oort conjecture. Conditional results have been obtained on this conjecture, assuming a generalized Riemann hypothesis.
Role in the Langlands program.
Shimura varieties play an outstanding role in the Langlands program. The prototypical theorem, the Eichler–Shimura congruence relation, implies that the Hasse–Weil zeta function of a modular curve is a product of L-functions associated to explicitly determined modular forms of weight 2. Indeed, it was in the process of generalization of this theorem that Goro Shimura introduced his varieties and proved his reciprocity law. Zeta functions of Shimura varieties associated with the group "GL"2 over other number fields and its inner forms (i.e. multiplicative groups of quaternion algebras) were studied by Eichler, Shimura, Kuga, Sato, and Ihara. On the basis of their results, Robert Langlands made a prediction that the Hasse-Weil zeta function of any algebraic variety "W" defined over a number field would be a product of positive and negative powers of automorphic L-functions, i.e. it should arise from a collection of automorphic representations. However philosophically natural it may be to expect such a description, statements of this type have only been proved when "W" is a Shimura variety. In the words of Langlands:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathfrak{g}\\otimes\\mathbb{C}=\\mathfrak{k}\\oplus\\mathfrak{p}^{+}\\oplus\\mathfrak{p}^{-},"
},
{
"math_id": 1,
"text": "z/\\bar{z}"
},
{
"math_id": 2,
"text": "\\bar{z}/z"
},
{
"math_id": 3,
"text": "\\operatorname{Sh}_K(G,X) = G(\\mathbb{Q})\\backslash X\\times G(\\mathbb{A}_f)/K "
},
{
"math_id": 4,
"text": "\\Gamma_i\\backslash X^+"
},
{
"math_id": 5,
"text": "(\\operatorname{Sh}_K(G,X))_K"
}
] | https://en.wikipedia.org/wiki?curid=10389861 |
10390 | Econometrics | Empirical statistical testing of economic theories
Econometrics is an application of statistical methods to economic data in order to give empirical content to economic relationships. More precisely, it is "the quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, related by appropriate methods of inference." An introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships." Jan Tinbergen is one of the two founding fathers of econometrics. The other, Ragnar Frisch, also coined the term in the sense in which it is used today.
A basic tool for econometrics is the multiple linear regression model. "Econometric theory" uses statistical theory and mathematical statistics to evaluate and develop econometric methods. Econometricians try to find estimators that have desirable statistical properties including unbiasedness, efficiency, and consistency. "Applied econometrics" uses theoretical econometrics and real-world data for assessing economic theories, developing econometric models, analysing economic history, and forecasting.
Basic models: linear regression.
A basic tool for econometrics is the multiple linear regression model. In modern econometrics, other statistical tools are frequently used, but linear regression is still the most frequently used starting point for an analysis. Estimating a linear regression on two variables can be visualised as fitting a line through data points representing paired values of the independent and dependent variables.
For example, consider Okun's law, which relates GDP growth to the unemployment rate. This relationship is represented in a linear regression where the change in unemployment rate (formula_0) is a function of an intercept (formula_1), a given value of GDP growth multiplied by a slope coefficient formula_2 and an error term, formula_3:
formula_4
The unknown parameters formula_1 and formula_2 can be estimated. Here formula_1 is estimated to be 0.83 and formula_2 is estimated to be -1.77. This means that if GDP growth increased by one percentage point, the unemployment rate would be predicted to drop by 1.77 percentage points, other things held constant. The model could then be tested for statistical significance as to whether an increase in GDP growth is associated with a decrease in the unemployment rate, as hypothesized. If the estimate of formula_2 were not significantly different from 0, the test would fail to find evidence that changes in the growth rate and unemployment rate were related. The variance in a prediction of the dependent variable (unemployment) as a function of the independent variable (GDP growth) is given in polynomial least squares.
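A minimal sketch of such an estimation (using synthetic data chosen only to mimic the flavour of the quoted numbers; the real estimates above come from actual macroeconomic data):

```python
import numpy as np

# Synthetic example only: generate GDP growth rates and changes in unemployment
rng = np.random.default_rng(0)
growth = rng.normal(3.0, 2.0, size=200)                          # percent GDP growth
d_unemp = 0.83 - 1.77 * growth + rng.normal(0.0, 0.5, size=200)  # Okun-style relation plus noise

# Ordinary least squares fit of  delta-unemployment = beta0 + beta1 * growth
beta1_hat, beta0_hat = np.polyfit(growth, d_unemp, deg=1)
print(beta0_hat, beta1_hat)   # estimates should land near 0.83 and -1.77
```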
Theory.
Econometric theory uses statistical theory and mathematical statistics to evaluate and develop econometric methods. Econometricians try to find estimators that have desirable statistical properties including unbiasedness, efficiency, and consistency. An estimator is unbiased if its expected value is the true value of the parameter; it is consistent if it converges to the true value as the sample size gets larger, and it is efficient if the estimator has lower standard error than other unbiased estimators for a given sample size. Ordinary least squares (OLS) is often used for estimation since it provides the BLUE or "best linear unbiased estimator" (where "best" means most efficient, unbiased estimator) given the Gauss-Markov assumptions. When these assumptions are violated or other statistical properties are desired, other estimation techniques such as maximum likelihood estimation, generalized method of moments, or generalized least squares are used. Estimators that incorporate prior beliefs are advocated by those who favour Bayesian statistics over traditional, classical or "frequentist" approaches.
Methods.
"Applied econometrics" uses theoretical econometrics and real-world data for assessing economic theories, developing econometric models, analysing economic history, and forecasting.
Econometrics may use standard statistical models to study economic questions, but most often they are with observational data, rather than in controlled experiments. In this, the design of observational studies in econometrics is similar to the design of studies in other observational disciplines, such as astronomy, epidemiology, sociology and political science. Analysis of data from an observational study is guided by the study protocol, although exploratory data analysis may be useful for generating new hypotheses. Economics often analyses systems of equations and inequalities, such as supply and demand hypothesized to be in equilibrium. Consequently, the field of econometrics has developed methods for identification and estimation of simultaneous equations models. These methods are analogous to methods used in other areas of science, such as the field of system identification in systems analysis and control theory. Such methods may allow researchers to estimate models and investigate their empirical consequences, without directly manipulating the system.
One of the fundamental statistical methods used by econometricians is regression analysis. Regression methods are important in econometrics because economists typically cannot use controlled experiments. Typically, the most readily available data is retrospective. However, retrospective analysis of observational data may be subject to omitted-variable bias, reverse causality, or other limitations that cast doubt on causal interpretation of the correlations.
In the absence of evidence from controlled experiments, econometricians often seek illuminating natural experiments or apply quasi-experimental methods to draw credible causal inference. The methods include regression discontinuity design, instrumental variables, and difference-in-differences.
Example.
A simple example of a relationship in econometrics from the field of labour economics is:
formula_5
This example assumes that the natural logarithm of a person's wage is a linear function of the number of years of education that person has acquired. The parameter formula_6 measures the increase in the natural log of the wage attributable to one more year of education. The term formula_3 is a random variable representing all other factors that may have direct influence on wage. The econometric goal is to estimate the parameters, formula_7 under specific assumptions about the random variable formula_3. For example, if formula_3 is uncorrelated with years of education, then the equation can be estimated with ordinary least squares.
If the researcher could randomly assign people to different levels of education, the data set thus generated would allow estimation of the effect of changes in years of education on wages. In reality, those experiments cannot be conducted. Instead, the econometrician observes the years of education of and the wages paid to people who differ along many dimensions. Given this kind of data, the estimated coefficient on years of education in the equation above reflects both the effect of education on wages and the effect of other variables on wages, if those other variables were correlated with education. For example, people born in certain places may have higher wages and higher levels of education. Unless the econometrician controls for place of birth in the above equation, the effect of birthplace on wages may be falsely attributed to the effect of education on wages.
The most obvious way to control for birthplace is to include a measure of the effect of birthplace in the equation above. Exclusion of birthplace, together with the assumption that formula_8 is uncorrelated with education, produces a misspecified model. Another technique is to include in the equation an additional set of measured covariates which are not instrumental variables, yet render formula_6 identifiable. An overview of econometric methods used to study this problem was provided by Card (1999).
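The omitted-variable effect described here can be illustrated with a small simulation (entirely synthetic numbers, chosen only to show the direction of the bias; none of the coefficients are from the source):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
birthplace = rng.normal(size=n)                       # stand-in for an unobserved factor
education = 12 + 2 * birthplace + rng.normal(size=n)  # education correlated with that factor
log_wage = 1.0 + 0.08 * education + 0.15 * birthplace + 0.1 * rng.normal(size=n)

# Short regression omitting birthplace: the education coefficient absorbs part of
# the birthplace effect and is biased upward.
X_short = np.column_stack([np.ones(n), education])
b_short, *_ = np.linalg.lstsq(X_short, log_wage, rcond=None)

# Long regression controlling for birthplace recovers a coefficient near the true 0.08.
X_long = np.column_stack([np.ones(n), education, birthplace])
b_long, *_ = np.linalg.lstsq(X_long, log_wage, rcond=None)

print(b_short[1], b_long[1])
```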
Journals.
The main journals that publish work in econometrics are:
Limitations and criticisms.
Like other forms of statistical analysis, badly specified econometric models may show a spurious relationship where two variables are correlated but causally unrelated. In a study of the use of econometrics in major economics journals, McCloskey concluded that some economists report "p"-values (following the Fisherian tradition of tests of significance of point null-hypotheses) and neglect concerns of type II errors; some economists fail to report estimates of the size of effects (apart from statistical significance) and to discuss their economic importance. She also argues that some economists also fail to use economic reasoning for model selection, especially for deciding which variables to include in a regression.
In some cases, economic variables cannot be experimentally manipulated as treatments randomly assigned to subjects. In such cases, economists rely on observational studies, often using data sets with many strongly associated covariates, resulting in enormous numbers of models with similar explanatory ability but different covariates and regression estimates. Regarding the plurality of models compatible with observational data-sets, Edward Leamer urged that "professionals ... properly withhold belief until an inference can be shown to be adequately insensitive to the choice of assumptions".
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta\\ \\text{Unemployment}"
},
{
"math_id": 1,
"text": " \\beta_0 "
},
{
"math_id": 2,
"text": " \\beta_1 "
},
{
"math_id": 3,
"text": "\\varepsilon"
},
{
"math_id": 4,
"text": " \\Delta\\ \\text {Unemployment} = \\beta_0 + \\beta_1\\text{Growth} + \\varepsilon. "
},
{
"math_id": 5,
"text": " \\ln(\\text{wage}) = \\beta_0 + \\beta_1 (\\text{years of education}) + \\varepsilon. "
},
{
"math_id": 6,
"text": "\\beta_1"
},
{
"math_id": 7,
"text": "\\beta_0 \\mbox{ and } \\beta_1 "
},
{
"math_id": 8,
"text": "\\epsilon"
}
] | https://en.wikipedia.org/wiki?curid=10390 |
10391173 | Gunduz Caginalp | American mathematician (died 2021)
Gunduz Caginalp (died December 7, 2021) was a Turkish-born American mathematician whose research contributed over 100 papers to physics, materials science and economics/finance journals, including two with Michael Fisher and nine with Nobel Laureate Vernon Smith. He began his studies at Cornell University in 1970 and received an AB in 1973 "Cum Laude with Honors in All Subjects" and Phi Beta Kappa. In 1976 he received a master's degree, and in 1978 a PhD, both also at Cornell. He held positions at The Rockefeller University, Carnegie-Mellon University and the University of Pittsburgh (since 1984), where he was a professor of Mathematics until his death on December 7, 2021. He was born in Turkey, and spent his first seven years and ages 13–16 there, and the middle years in New York City.
Caginalp and his wife Eva were married in 1992 and had three sons, Carey, Reggie and Ryan.
He served as the Editor of the "Journal of Behavioral Finance" (1999–2003), and was an Associate Editor for numerous journals. He received awards from the National Science Foundation as well as private foundations.
Thesis and related research.
Caginalp's PhD in Applied Mathematics at Cornell University (with thesis advisor Professor Michael Fisher) focused on surface free energy. Previous results by David Ruelle, Fisher, and Elliott Lieb in the 1960s had established that the free energy of a large system can be written as the product of the volume and a term formula_0 (the free energy per unit volume) that is independent of the size of the system, plus smaller terms. A remaining problem was to prove that there was a similar term associated with the surface. This was more difficult since the formula_0 proofs relied on discarding terms that were proportional to the surface.
A key result of Caginalp's thesis [1,2,3] is the proof that the free energy, F, of a lattice system occupying a region formula_1 with volume formula_2 and surface area formula_3 can be written as
formula_4
where formula_5 is the surface free energy (independent of formula_2 and formula_3).
Shortly after his PhD, Caginalp joined the Mathematical Physics group of James Glimm (2002 National Medal of Science recipient) at The Rockefeller University. In addition to working on mathematical statistical mechanics, he also proved existence theorems on nonlinear hyperbolic differential equations describing fluid flow. These papers were published in the "Annals of Physics" and the "Journal of Differential Equations".
Developing phase field models.
In 1980, Caginalp was the first recipient of the Zeev Nehari position established at Carnegie-Mellon University's Mathematical Sciences Department. At that time he began working on free boundary problems, e.g., problems in which there is an interface between two phases that must be determined as part of the solution to the problem. His original paper on this topic is the second most cited paper in a leading journal, Archive for Rational Mechanics and Analysis, during the subsequent quarter century.
He had published over fifty papers on the phase field equations in mathematics, physics and materials journals. The focus of research in the mathematics and physics communities changed considerably during this period, and this perspective is widely used to derive macroscopic equations from a microscopic setting, as well as performing computations on dendritic growth and other phenomena.
In the mathematics community during the previous century, the interface between two phases was generally studied via the Stefan model, in which temperature played a dual role, as the sign of the temperature determined the phase, so the interface is defined as the set of points at which the temperature is zero. Physically, however, the temperature at the interface was known to be proportional to the curvature, thereby preventing the temperature from fulfilling its dual role of the Stefan model. This suggested that an additional variable would be needed for a complete description of the interface. In the physics literature, the idea of an "order parameter" and mean field theory had been used by Landau in the 1940s to describe the region near the critical point (i.e., the region in which the liquid and solid phases become indistinguishable). However, the calculation of exact exponents in statistical mechanics showed that mean field theory was not reliable.
There was speculation in the physics community that such a theory could be used to describe an ordinary phase transition. However, the fact that the order parameter could not produce the correct exponents in critical phenomena for which it was invented led to skepticism that it could produce results for normal phase transitions.
The justification for an order parameter or mean field approach had been that the correlation length between atoms approaches infinity near the critical point. For an ordinary phase transition, the correlation length is typically just a few atomic lengths. Furthermore, in critical phenomena one is often trying to calculate the critical exponents, which should be independent of the details of the system (often called "universality"). In a typical interface problem, one is trying to calculate the interface position essentially exactly, so that one cannot "hide behind universality".
In 1980 there seemed to be ample reason to be skeptical of the idea that an order parameter could be used to describe a moving interface between two phases of a material. Beyond the physical justifications, there remained issues related to the dynamics of an interface and the mathematics of the equations. For example, if one uses an order parameter, formula_6, together with the temperature variable, T, in a system of parabolic equations, will an initial transition layer in formula_6 describing the interface remain as such? One expects that formula_6 will vary from -1 to +1 as one moves from the solid to the liquid, and that the transition will be made on a spatial scale of formula_7, the physical thickness of the interface. The interface in the phase field system is then described by the level set of points on which formula_6 vanishes.
The simplest model [4] can be written as a pair formula_8 that satisfies the equations
formula_9
where formula_10 are physically measurable constants, and formula_7 is the interface thickness.
With the interface described as the level set of points where the phase variable vanishes, the model allows the interface to be identified without tracking, and is valid even if there are self-intersections.
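A minimal one-dimensional, explicit finite-difference discretization may help illustrate how such a phase–temperature pair is evolved in practice. The sketch below is an illustration only: the parameter values, grid, and boundary handling are arbitrary choices and are not taken from the papers cited above.

```python
import numpy as np

# One-dimensional explicit scheme for a phase field pair (phi, T).
# All parameter values are arbitrary choices made for a stable toy run.
C_p, lat, K = 1.0, 1.0, 1.0         # heat capacity, latent heat, conductivity
alpha, eps, coup = 1.0, 0.05, 1.0   # relaxation constant, interface thickness, coupling strength
T_E = 0.0                           # equilibrium (melting) temperature

nx, L_dom = 400, 1.0
dx = L_dom / nx
dt = 0.2 * dx**2 / max(K / C_p, 1.0 / alpha)    # explicit stability margin
x = np.linspace(0.0, L_dom, nx)

phi = np.tanh((x - 0.5 * L_dom) / (2.0 * eps))  # transition layer: solid (-1) to liquid (+1)
T = np.full(nx, -0.1)                           # uniform undercooling drives the front

def lap(u):
    """Second difference with crude zero-flux ends."""
    out = np.empty_like(u)
    out[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    out[0], out[-1] = out[1], out[-2]
    return out

for _ in range(5000):
    phi_t = (eps**2 * lap(phi) + 0.5 * (phi - phi**3) + coup * (T - T_E)) / (alpha * eps**2)
    T_t = (K * lap(T) - 0.5 * lat * phi_t) / C_p   # latent heat is released as phi changes
    phi = phi + dt * phi_t
    T = T + dt * T_t

# The interface is the level set phi = 0 (a single point in one dimension).
print(f"interface position ~ {x[np.argmin(np.abs(phi))]:.3f}")
```

The initial tanh profile supplies the transition layer of width proportional to the interface thickness, and the printed level-set position tracks the interface without any explicit front tracking.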
Modeling.
The use of the phase field idea to model solidification in a way that allows the physical parameters to be identified was originally undertaken in [4].
Alloys.
A number of papers in collaboration with Weiqing Xie* and James Jones [5,6] have extended the modeling to alloy solid-liquid interfaces.
Basic theorems and analytical results.
Initiated during the 1980s, these include the following.
Computational results.
The earliest qualitative computations were done in collaboration with J.T. Lin in 1987.
Phase field models of second order.
As phase field models became a useful tool in materials science, the need for even better convergence (from the phase field to the sharp interface problems) became apparent. This led to the development of phase field models of second order, meaning that as the interface thickness, formula_7, becomes small, the difference between the interface of the phase field model and the interface of the related sharp interface model becomes second order in the interface thickness, i.e., formula_12. In collaboration with Dr. Christof Eck, Dr. Emre Esenturk* and Prof. Xinfu Chen, Caginalp developed a new phase field model and proved that it is indeed second order [10, 11, 12]. Numerical computations confirmed these results.
Application of renormalization group methods to differential equations.
The philosophical perspective of the renormalization group (RG), initiated by Ken Wilson in the 1970s, is that in a system with a large number of degrees of freedom, one should be able to average and adjust, or renormalize, repeatedly at each step without changing the essential feature that one is trying to compute. In the 1990s Nigel Goldenfeld and collaborators began to investigate the possibility of using this idea for the Barenblatt equation. Caginalp further developed these ideas so that one can calculate the decay (in space and time) of solutions to a heat equation with a nonlinearity that satisfies a dimensional condition [13]. The methods were also applied to interface problems and to systems of parabolic differential equations with Huseyin Merdan*.
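As a toy check of the kind of decay statement involved (for the linear heat equation only; the RG analysis itself concerns nonlinear equations), the script below verifies numerically that the maximum of a one-dimensional solution with localized initial data decays like t to the power -1/2.

```python
import numpy as np

# Toy check (linear heat equation only): max |u(., t)| ~ t^(-1/2) in one dimension.
# This is not the RG calculation; it only illustrates the decay being quantified.
nx, L = 2000, 200.0
dx = L / nx
dt = 0.4 * dx**2                 # explicit stability for u_t = u_xx
x = np.linspace(-L / 2, L / 2, nx)
u = np.exp(-x**2)                # localized initial data

t, samples = 0.0, []
for step in range(1, 100001):
    u[1:-1] += dt * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    t += dt
    if step % 25000 == 0:
        samples.append((t, u.max()))

# Log-log slopes between successive samples should approach -1/2.
for (t1, m1), (t2, m2) in zip(samples, samples[1:]):
    print(np.log(m2 / m1) / np.log(t2 / t1))
```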
Research in behavioral finance and experimental economics.
Caginalp has been a leader in the newly developing field of Quantitative Behavioral Finance. The work has three main facets: (1) statistical time series modeling, (2) mathematical modeling using differential equations, and (3) laboratory experiments; comparison with models and world markets. His research is influenced by decades of experience as an individual investor and trader.
Statistical time series modeling.
The efficient-market hypothesis (EMH) has been the dominant theory of financial markets for the past half century. It stipulates that asset prices are essentially random fluctuations about their fundamental value. As empirical evidence, its proponents cite market data that appears to be "white noise". Behavioral finance has challenged this perspective, citing large market upheavals such as the high-tech bubble and bust of 1998–2003. The difficulty in establishing the key ideas of behavioral finance and economics has been the presence of "noise" in the market. Caginalp and others have made substantial progress toward surmounting this key difficulty. An early study by Caginalp and Constantine in 1995 showed that, by using the ratio of two clone closed-end funds, one can remove the noise associated with valuation. They showed that today's price is not likely to be yesterday's price (as the EMH would indicate), nor a pure continuation of the change during the previous time interval, but lies about halfway between those prices.
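The flavor of such a test can be conveyed by a short regression sketch on synthetic data (not the actual 1995 closed-end fund series): if r denotes the change of the clone-fund price ratio over one interval, a lag-one coefficient near 0.5 corresponds to "halfway between" yesterday's price and a pure continuation, while a coefficient near 0 would support the EMH.

```python
import numpy as np

# Schematic only: synthetic data, not the closed-end fund series studied in 1995.
rng = np.random.default_rng(0)
n, b_true = 2500, 0.5
r = np.zeros(n)
for t in range(1, n):
    r[t] = b_true * r[t - 1] + rng.normal(scale=0.01)   # AR(1) toy series of ratio changes

x, y = r[:-1], r[1:]
b_hat = np.sum(x * y) / np.sum(x * x)    # least-squares slope through the origin
print(f"estimated continuation coefficient: {b_hat:.2f}")  # ~0.5; near 0 would support the EMH
```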
Subsequent work with Ahmet Duran* [14] examined the data involving large deviations between the price and net asset value of closed end funds, finding strong evidence that there is a subsequent movement in the opposite direction (suggesting overreaction). More surprisingly, there is a precursor to the deviation, which is usually a result of large changes in price in the absence of significant changes in value.
Work with Dr. Vladimira Ilieva and Mark DeSantis* focused on large-scale data studies that effectively subtracted out the changes due to the net asset value of closed-end funds [15]. Thus one could establish significant coefficients for the price trend. The work with DeSantis was particularly noteworthy in two respects: (a) by standardizing the data, it became possible to compare the impact of the price trend against, for example, changes in the money supply; (b) the impact of the price trend was shown to be nonlinear, so that a small uptrend has a positive impact on prices (demonstrating underreaction), while a large uptrend has a negative influence. The measure of large or small is based upon the frequency of occurrence (measured in standard deviations). Using exchange-traded funds (ETFs), they also showed (together with Akin Sayrak) that the concept of resistance, whereby a stock tends to retreat as it nears a yearly high, has strong statistical support [16].
The research shows the importance of two key ideas: (i) by compensating for much of the change in valuation, one can reduce the noise that obscures many behavioral and other influences on price dynamics; (ii) by examining nonlinearity (e.g., in the price trend effect), one can uncover influences that would be statistically insignificant if only linear terms were examined.
Mathematical modeling using differential equations.
The asset flow approach uses differential equations to model and understand asset market dynamics.
(I) Unlike the EMH, the model developed by Caginalp and collaborators since 1990 involves ingredients that were marginalized by the classical efficient market hypothesis: while price change depends on supply and demand for the asset (e.g., stock), the latter can depend on a variety of motivations and strategies, such as the recent price trend. Unlike the classical theories, there is no assumption of infinite arbitrage, which says that any small deviation from the true value (which is universally accepted, since all participants have the same information) is quickly exploited by (essentially) infinite capital managed by "informed" investors. Among the consequences of this theory is that equilibrium is not a unique price, but depends on the price history and the strategies of the traders.
Classical models of price dynamics are all built on the idea that there is infinite arbitrage capital. The Caginalp asset flow model introduced an important new concept of liquidity, L, or excess cash, defined as the total cash in the system divided by the total number of shares. A schematic numerical sketch of such a system is given after this list.
(II) In subsequent years, these asset flow equations were generalized to include distinct groups with differing assessments of value, and distinct strategies and resources. For example, one group may be focused on trend (momentum) while another emphasizes value, and attempts to buy the stock when it is undervalued.
(III) In collaboration with Duran these equations were studied in terms of optimization of parameters, rendering them a useful tool for practical implementation.
(IV) More recently, David Swigon, DeSantis and Caginalp studied the stability of the asset flow equations and showed that instabilities, for example flash crashes, could occur as a result of traders utilizing momentum strategies together with shorter time scales [17, 18].
In recent years, there has been related work that is sometimes called "evolutionary finance".
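The sketch below is a schematic numerical version in the spirit of such asset flow models. The functional forms, parameter names, and values are illustrative assumptions only and are not the precise system studied in the papers above; the sketch simply combines a trend-following and a value-based component of demand with a finite response rate.

```python
import numpy as np

# Schematic asset-flow-style model: excess demand, built from a trend component
# and a value component, drives the relative price change.  All functional
# forms and numbers are illustrative assumptions.
def simulate(days=500.0, dt=0.1, P0=0.8, Pa=1.0,
             q_trend=2.0, q_value=1.0, tau_trend=5.0, k=0.5):
    n = int(days / dt)
    P = np.empty(n)
    P[0] = P0
    trend = 0.0                                  # smoothed recent relative price change
    for t in range(1, n):
        value_signal = (Pa - P[t - 1]) / Pa      # undervaluation attracts value-based demand
        sentiment = q_trend * trend + q_value * value_signal
        excess_demand = np.tanh(sentiment)       # bounded net demand minus supply
        P[t] = P[t - 1] * (1.0 + k * excess_demand * dt)
        rel_change = (P[t] - P[t - 1]) / (P[t - 1] * dt)
        trend += (dt / tau_trend) * (rel_change - trend)
    return P

P = simulate()
# With the trend weighting large relative to the value weighting, the price
# overshoots the fundamental value before relaxing back (a bubble-like excursion).
print(f"max price {P.max():.2f} vs fundamental value 1.00")
```

In this toy version, the equilibrium behavior depends on the price history through the smoothed trend, echoing the qualitative conclusion above that equilibrium is not a unique price.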
Laboratory experiments; comparison with models and world markets.
In the 1980s, asset market experiments pioneered by Vernon Smith (2002 Economics Nobel Laureate) and collaborators provided a new tool to study microeconomics and finance. In particular, these experiments posed a challenge to classical economics by showing that when participants traded (with real money) an asset with a well-defined value, the price would soar well above the fundamental value defined by the experimenters. Repetition of this experiment under various conditions showed the robustness of the phenomenon. By designing new experiments, Profs. Caginalp, Smith and David Porter largely resolved this paradox through the framework of the asset flow equations. In particular, the bubble size (and more generally, the asset price) was highly correlated with the excess cash in the system, and momentum was also shown to be a factor [19]. In classical economics there would be just one quantity, namely the share price, which has units of dollars per share. The experiments showed that this is distinct from the fundamental value per share. The liquidity, L, introduced by Caginalp and collaborators, is a third quantity that also has these units [20]. The temporal evolution of prices involves a complex relationship among these three variables, together with quantities reflecting the motivations of the traders, which may involve the price trend and other factors. Other studies have shown quantitatively that the motivations of traders in the experiments are similar to those in world markets.
formula_13 - PhD student of Caginalp
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f_{\\infty}"
},
{
"math_id": 1,
"text": "\\Omega"
},
{
"math_id": 2,
"text": "|\\Omega|"
},
{
"math_id": 3,
"text": "|\\partial\\Omega|"
},
{
"math_id": 4,
"text": "F(\\Omega)=|\\Omega|f_\\infty+|\\partial\\Omega|f_x+..."
},
{
"math_id": 5,
"text": "f_x"
},
{
"math_id": 6,
"text": "\\phi"
},
{
"math_id": 7,
"text": "\\varepsilon"
},
{
"math_id": 8,
"text": "(\\phi,T)"
},
{
"math_id": 9,
"text": "\n\\begin{array}{lcl}\nC_{P}T_{t}+\\frac{l}{2}\\phi = K\\Delta T \\\\\n\\alpha\\varepsilon^2 \\phi_t = \\varepsilon^2 \\Delta\\phi + \\frac{1}{2}(\\phi-\\phi^3)+\\frac{\\varepsilon[s]_E}{3\\sigma}(T-T_E)\n\\end{array}\n"
},
{
"math_id": 10,
"text": "C_{P}, l, \\alpha, \\sigma, [s]_{E}"
},
{
"math_id": 11,
"text": "d_0"
},
{
"math_id": 12,
"text": "\\varepsilon^2"
},
{
"math_id": 13,
"text": "\\ast"
}
] | https://en.wikipedia.org/wiki?curid=10391173 |
1039124 | Stellar structure | Structure of stars
Stellar structure models describe the internal structure of a star in detail and make predictions about the luminosity, the color and the future evolution of the star. Different classes and ages of stars have different internal structures, reflecting their elemental makeup and energy transport mechanisms.
Heat transport.
For energy transport refer to Radiative transfer.
Different layers of the stars transport heat up and outwards in different ways, primarily convection and radiative transfer, but thermal conduction is important in white dwarfs.
Convection is the dominant mode of energy transport when the temperature gradient is steep enough that a parcel of gas which rises slightly, expanding adiabatically, remains warmer (and therefore less dense) than the surrounding gas, so that it is buoyant and continues to rise; if the displaced parcel is instead cooler than the surrounding gas, it falls back to its original height. In regions with a low temperature gradient and a low enough opacity to allow energy transport via radiation, radiation is the dominant mode of energy transport.
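In quantitative treatments this buoyancy argument is usually expressed through the Schwarzschild criterion: a layer is convectively unstable when its actual logarithmic temperature gradient (with respect to pressure) exceeds the adiabatic one. The snippet below is a minimal illustration with arbitrary numbers, not a calculation for any particular star.

```python
# Minimal illustration of the Schwarzschild criterion for convective instability.
# nabla = d ln T / d ln P; the layer convects when nabla exceeds the adiabatic value.
def is_convective(nabla_actual, gamma=5.0 / 3.0):
    nabla_ad = 1.0 - 1.0 / gamma   # adiabatic gradient; 0.4 for a monatomic ideal gas
    return nabla_actual > nabla_ad

print(is_convective(0.25))  # False: shallow gradient, radiation carries the energy
print(is_convective(0.60))  # True: steep gradient, the layer convects
```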
The internal structure of a main sequence star depends upon the mass of the star.
In stars with masses of 0.3–1.5 solar masses (M☉), including the Sun, hydrogen-to-helium fusion occurs primarily via proton–proton chains, which do not establish a steep temperature gradient. Thus, radiation dominates in the inner portion of solar mass stars. The outer portion of solar mass stars is cool enough that hydrogen is neutral and thus opaque to ultraviolet photons, so convection dominates. Therefore, solar mass stars have radiative cores with convective envelopes in the outer portion of the star.
In massive stars (greater than about 1.5 M☉), the core temperature is above about 1.8×10^7 K, so hydrogen-to-helium fusion occurs primarily via the CNO cycle. In the CNO cycle, the energy generation rate scales as the temperature to the 15th power, whereas the rate scales as the temperature to the 4th power in the proton–proton chains. Due to the strong temperature sensitivity of the CNO cycle, the temperature gradient in the inner portion of the star is steep enough to make the core convective. In the outer portion of the star, the temperature gradient is shallower but the temperature is high enough that the hydrogen is nearly fully ionized, so the star remains transparent to ultraviolet radiation. Thus, massive stars have convective cores with radiative envelopes.
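To see why the CNO cycle concentrates the energy generation toward the center and steepens the temperature gradient, one can compare how strongly the two quoted scalings respond to a modest temperature change; the 10% figure below is only an illustrative choice.

```python
# Response of the quoted rate scalings (pp ~ T^4, CNO ~ T^15) to a 10% rise in temperature.
print(1.1**4)    # ~1.46: the pp-chain rate grows modestly
print(1.1**15)   # ~4.18: the CNO rate grows sharply, concentrating energy release near the center
```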
The lowest mass main sequence stars have no radiation zone; the dominant energy transport mechanism throughout the star is convection.
Equations of stellar structure.
The simplest commonly used model of stellar structure is the spherically symmetric quasi-static model, which assumes that a star is in a steady state and that it is spherically symmetric. It contains four basic first-order differential equations: two represent how matter and pressure vary with radius; two represent how temperature and luminosity vary with radius.
In forming the stellar structure equations (exploiting the assumed spherical symmetry), one considers the matter density formula_0, temperature formula_1, total pressure (matter plus radiation) formula_2, luminosity formula_3, and energy generation rate per unit mass formula_4 in a spherical shell of thickness formula_5 at a distance formula_6 from the center of the star. The star is assumed to be in local thermodynamic equilibrium (LTE), so the temperature is identical for matter and photons. Although LTE does not strictly hold, because the temperature a given shell "sees" below itself is always higher than the temperature above, this approximation is normally excellent because the photon mean free path, formula_7, is much smaller than the length over which the temperature varies considerably, i.e. formula_8.
First is a statement of "hydrostatic equilibrium:" the outward force due to the pressure gradient within the star is exactly balanced by the inward force due to gravity. This is sometimes referred to as stellar equilibrium.
formula_9,
where formula_10 is the cumulative mass inside the shell at formula_6 and "G" is the gravitational constant. The cumulative mass increases with radius according to the "mass continuity equation:"
formula_11
Integrating the mass continuity equation from the star center (formula_12) to the radius of the star (formula_13) yields the total mass of the star.
Considering the energy leaving the spherical shell yields the "energy equation:"
formula_14,
where formula_15 is the energy per unit mass per unit time produced in the form of neutrinos (which usually escape the star without interacting with ordinary matter). Outside the core of the star, where no nuclear reactions occur, no energy is generated, so the luminosity is constant.
The energy transport equation takes differing forms depending upon the mode of energy transport. For conductive energy transport (appropriate for a white dwarf), the energy equation is
formula_16
where "k" is the thermal conductivity.
In the case of radiative energy transport, appropriate for the inner portion of a solar mass main sequence star and the outer envelope of a massive main sequence star,
formula_17
where formula_18 is the opacity of the matter, formula_19 is the Stefan–Boltzmann constant, and the Boltzmann constant is set to one.
The case of convective energy transport does not have a known rigorous mathematical formulation, and involves turbulence in the gas. Convective energy transport is usually modeled using mixing length theory. This treats the gas in the star as containing discrete elements which roughly retain the temperature, density, and pressure of their surroundings but move through the star as far as a characteristic length, called the "mixing length". For a monatomic ideal gas, when the convection is adiabatic, meaning that the convective gas bubbles don't exchange heat with their surroundings, mixing length theory yields
formula_20
where formula_21 is the adiabatic index, the ratio of specific heats in the gas. (For a fully ionized ideal gas, formula_22.) When the convection is not adiabatic, the true temperature gradient is not given by this equation. For example, in the Sun the convection at the base of the convection zone, near the core, is adiabatic but that near the surface is not. The mixing length theory contains two free parameters which must be set to make the model fit observations, so it is a phenomenological theory rather than a rigorous mathematical formulation.
Also required are the equations of state, relating the pressure, opacity and energy generation rate to other local variables appropriate for the material, such as temperature, density, chemical composition, etc. Relevant equations of state for pressure may have to include the perfect gas law, radiation pressure, pressure due to degenerate electrons, etc. Opacity cannot be expressed exactly by a single formula. It is calculated for various compositions at specific densities and temperatures and presented in tabular form. Stellar structure "codes" (meaning computer programs calculating the model's variables) either interpolate in a density-temperature grid to obtain the opacity needed, or use a fitting function based on the tabulated values. A similar situation occurs for accurate calculations of the pressure equation of state. Finally, the nuclear energy generation rate is computed from nuclear physics experiments, using "reaction networks" to compute reaction rates for each individual reaction step and equilibrium abundances for each isotope in the gas.
Combined with a set of boundary conditions, a solution of these equations completely describes the behavior of the star. Typical boundary conditions set the values of the observable parameters appropriately at the surface (formula_13) and center (formula_12) of the star: formula_23, meaning the pressure at the surface of the star is zero; formula_24, there is no mass inside the center of the star, as required if the mass density remains finite; formula_25, the total mass of the star is the star's mass; and formula_26, the temperature at the surface is the effective temperature of the star.
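As a minimal illustration of how such a system is integrated outward from the center, the sketch below solves only the hydrostatic equilibrium and mass continuity equations, closed with an assumed polytropic relation between pressure and density in place of the full energy and transport equations. The constants are arbitrary illustrative values, not a model of any particular star.

```python
import numpy as np

G = 6.674e-8                        # gravitational constant (cgs)
# Toy polytrope P = K_poly * rho^(1 + 1/n); n = 1.5 is the classic choice for a
# fully convective, ideal-gas star.  K_poly and the central density are
# illustrative values only.
n_poly, K_poly, rho_c = 1.5, 1.0e13, 10.0

def rho_of_P(P):
    return (P / K_poly) ** (n_poly / (n_poly + 1.0))

dr = 1.0e7                          # radial step (cm)
r = dr
m = (4.0 / 3.0) * np.pi * dr**3 * rho_c        # mass of the small central sphere
P = K_poly * rho_c ** (1.0 + 1.0 / n_poly)     # central pressure

while P > 0.0:
    rho = rho_of_P(P)
    P += -G * m * rho / r**2 * dr              # hydrostatic equilibrium
    m += 4.0 * np.pi * r**2 * rho * dr         # mass continuity
    r += dr

# The surface is reached where the pressure drops to zero (the P(R) = 0 boundary condition).
print(f"radius ~ {r:.2e} cm, total mass ~ {m:.2e} g")
```

A full stellar structure code would integrate the luminosity and temperature equations simultaneously and iterate on the boundary conditions; the polytropic closure here removes that coupling so the two remaining equations can be marched outward directly.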
Although current stellar evolution models describe the main features of color–magnitude diagrams, important improvements have to be made in order to remove uncertainties linked to the limited knowledge of transport phenomena. The most difficult challenge remains the numerical treatment of turbulence. Some research teams are developing simplified modelling of turbulence in 3D calculations.
Rapid evolution.
The above simplified model is not adequate without modification in situations when the composition changes are sufficiently rapid. The equation of hydrostatic equilibrium may need to be modified by adding a radial acceleration term if the radius of the star is changing very quickly, for example if the star is radially pulsating. Also, if the nuclear burning is not stable, or the star's core is rapidly collapsing, an entropy term must be added to the energy equation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rho(r)"
},
{
"math_id": 1,
"text": "T(r)"
},
{
"math_id": 2,
"text": "P(r)"
},
{
"math_id": 3,
"text": "l(r)"
},
{
"math_id": 4,
"text": "\\epsilon(r)"
},
{
"math_id": 5,
"text": "\\mbox{d}r"
},
{
"math_id": 6,
"text": "r"
},
{
"math_id": 7,
"text": "\\lambda"
},
{
"math_id": 8,
"text": "\\lambda \\ll T/|\\nabla T|"
},
{
"math_id": 9,
"text": " {\\mbox{d} P \\over \\mbox{d} r} = - { G m \\rho \\over r^2 } "
},
{
"math_id": 10,
"text": "m(r)"
},
{
"math_id": 11,
"text": " {\\mbox {d} m \\over \\mbox{d} r} = 4 \\pi r^2 \\rho ."
},
{
"math_id": 12,
"text": "r=0"
},
{
"math_id": 13,
"text": "r=R"
},
{
"math_id": 14,
"text": " {\\mbox{d} l \\over \\mbox{d} r} = 4 \\pi r^2 \\rho ( \\epsilon - \\epsilon_\\nu )"
},
{
"math_id": 15,
"text": "\\epsilon_\\nu"
},
{
"math_id": 16,
"text": " {\\mbox{d} T \\over \\mbox{d} r} = - {1 \\over k} { l \\over 4 \\pi r^2 },"
},
{
"math_id": 17,
"text": " {\\mbox{d} T \\over \\mbox{d} r} = - {3 \\kappa \\rho l \\over 64 \\pi r^2 \\sigma T^3},"
},
{
"math_id": 18,
"text": "\\kappa"
},
{
"math_id": 19,
"text": "\\sigma"
},
{
"math_id": 20,
"text": " {\\mbox{d} T \\over \\mbox{d} r} = \\left(1 - {1 \\over \\gamma} \\right) {T \\over P } { \\mbox{d} P \\over \\mbox{d} r},"
},
{
"math_id": 21,
"text": "\\gamma = c_p / c_v"
},
{
"math_id": 22,
"text": "\\gamma = 5/3"
},
{
"math_id": 23,
"text": "P(R) = 0"
},
{
"math_id": 24,
"text": "m(0) = 0"
},
{
"math_id": 25,
"text": "m(R) = M"
},
{
"math_id": 26,
"text": "T(R) = T_{eff}"
}
] | https://en.wikipedia.org/wiki?curid=1039124 |