id | title | text | formulas | url |
---|---|---|---|---|
933541 | Karoubi envelope | Category theory
In mathematics the Karoubi envelope (or Cauchy completion or idempotent completion) of a category C is a classification of the idempotents of C, by means of an auxiliary category. Taking the Karoubi envelope of a preadditive category gives a pseudo-abelian category, hence the construction is sometimes called the pseudo-abelian completion. It is named for the French mathematician Max Karoubi.
Given a category C, an idempotent of C is an endomorphism
formula_0
with
formula_1.
An idempotent "e": "A" → "A" is said to split if there is an object "B" and morphisms "f": "A" → "B",
"g" : "B" → "A" such that "e" = "g" "f" and 1"B" = "f" "g".
The Karoubi envelope of C, sometimes written Split(C), is the category whose objects are pairs of the form ("A", "e") where "A" is an object of C and formula_2 is an idempotent of C, and whose morphisms are the triples
formula_3
where formula_4 is a morphism of C satisfying formula_5 (or equivalently formula_6).
Composition in Split(C) is as in C, but the identity morphism
on formula_7 in Split(C) is formula_8, rather than
the identity on formula_9.
The category C embeds fully and faithfully in Split(C). In Split(C) every idempotent splits, and Split(C) is the universal category with this property.
The Karoubi envelope of a category C can therefore be considered as the "completion" of C which splits idempotents.
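The construction can be checked mechanically on a small example. The following Python sketch assumes, purely for illustration, an ambient category whose objects are natural numbers n (standing for the coordinate spaces R^n) and whose morphisms are real matrices under matrix multiplication; the particular idempotent and splitting maps are arbitrary choices, not part of the general definition.
```python
import numpy as np

# Toy ambient category C: objects are natural numbers n (think R^n), morphisms
# n -> m are real m x n matrices, and composition is matrix multiplication.

def is_idempotent(e):
    return np.allclose(e @ e, e)

def is_split_morphism(e_src, f, e_tgt):
    # A morphism (e, f, e'): (A, e) -> (A', e') of Split(C) must satisfy
    # f = e' o f o e (equivalently e' o f = f = f o e).
    return np.allclose(e_tgt @ f @ e_src, f)

# An idempotent on A = R^3: projection onto the first two coordinates.
e = np.diag([1.0, 1.0, 0.0])
assert is_idempotent(e)

# The identity of (A, e) in Split(C) is e itself, not the 3x3 identity matrix:
f = e @ np.array([[2.0, 1.0, 5.0],
                  [0.0, 3.0, 7.0],
                  [4.0, 6.0, 8.0]]) @ e     # an endomorphism of (A, e)
assert is_split_morphism(e, f, e)
assert np.allclose(e @ f, f) and np.allclose(f @ e, f)

# In Split(C) the idempotent e splits through (B, 1_B) with B = R^2,
# via p : A -> B and g : B -> A (the f and g of the text, renamed here):
p = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
g = p.T
assert np.allclose(g @ p, e)                # e = g f
assert np.allclose(p @ g, np.eye(2))        # 1_B = f g
```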
The Karoubi envelope of a category C can equivalently be defined as the full subcategory of formula_10 (the presheaves over C) of retracts of representable functors. The category of presheaves on C is equivalent to the category of presheaves on Split(C).
Automorphisms in the Karoubi envelope.
An automorphism in Split(C) is of the form formula_11, with inverse formula_12 satisfying:
formula_13
formula_14
formula_15
If the first equation is relaxed to just have formula_16, then "f" is a partial automorphism (with inverse "g"). A (partial) involution in Split(C) is a self-inverse (partial) automorphism. | [
{
"math_id": 0,
"text": "e: A \\rightarrow A"
},
{
"math_id": 1,
"text": "e\\circ e = e"
},
{
"math_id": 2,
"text": "e : A \\rightarrow A"
},
{
"math_id": 3,
"text": "(e, f, e^{\\prime}): (A, e) \\rightarrow (A^{\\prime}, e^{\\prime})"
},
{
"math_id": 4,
"text": "f: A \\rightarrow A^{\\prime}"
},
{
"math_id": 5,
"text": "e^{\\prime} \\circ f = f = f \\circ e"
},
{
"math_id": 6,
"text": "f=e'\\circ f\\circ e"
},
{
"math_id": 7,
"text": "(A,e)"
},
{
"math_id": 8,
"text": "(e,e,e)"
},
{
"math_id": 9,
"text": "A"
},
{
"math_id": 10,
"text": "\\hat{\\mathbf{C}}"
},
{
"math_id": 11,
"text": "(e, f, e): (A, e) \\rightarrow (A, e)"
},
{
"math_id": 12,
"text": "(e, g, e): (A, e) \\rightarrow (A, e)"
},
{
"math_id": 13,
"text": "g \\circ f = e = f \\circ g"
},
{
"math_id": 14,
"text": "g \\circ f \\circ g = g"
},
{
"math_id": 15,
"text": "f \\circ g \\circ f = f"
},
{
"math_id": 16,
"text": "g \\circ f = f \\circ g"
},
{
"math_id": 17,
"text": "f: A \\rightarrow B"
},
{
"math_id": 18,
"text": "f \\times f^{-1}: A \\times B \\rightarrow B \\times A"
},
{
"math_id": 19,
"text": "\\gamma:B \\times A \\rightarrow A \\times B"
},
{
"math_id": 20,
"text": "X"
},
{
"math_id": 21,
"text": "C(X)"
}
] | https://en.wikipedia.org/wiki?curid=933541 |
9335905 | Multidelay block frequency domain adaptive filter | The multidelay block frequency domain adaptive filter (MDF) algorithm is a block-based frequency domain implementation of the (normalised) Least mean squares filter (LMS) algorithm.
Introduction.
The MDF algorithm is based on the fact that convolutions may be efficiently computed in the frequency domain (thanks to the fast Fourier transform). However, the algorithm differs from the fast LMS algorithm in that the block size it uses may be smaller than the filter length. If both are equal, then MDF reduces to the FLMS algorithm.
The advantages of MDF over the (N)LMS algorithm are:
Variable definitions.
Let formula_0 be the length of the processing blocks, formula_1 be the number of blocks and formula_2 denote the 2Nx2N Fourier transform matrix. The variables are defined as:
formula_3
formula_4
formula_5
formula_6
With normalisation matrices formula_7 and formula_8:
formula_9
formula_10
formula_11
In practice, when multiplying a column vector formula_12 by formula_7, we take the inverse FFT of formula_12, set the first formula_0 values in the result to zero and then take the FFT. This is meant to remove the effects of the circular convolution.
Algorithm description.
For each block, the MDF algorithm is computed as:
formula_13
formula_14
formula_15
formula_16
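A minimal Python sketch of these four steps is given below; it is an illustration under stated assumptions rather than a reference implementation. The block size N, number of partitions K, step size mu, forgetting factor, and the diagonal per-bin power estimate used in place of the full normalisation matrix are all assumptions; the last-N output window and the first-N gradient window play the roles of the formula_7 and formula_8 constraints respectively.
```python
import numpy as np

def mdf(x, d, N=64, K=8, mu=0.5, eps=1e-8, beta=0.9):
    """Sketch of the MDF update loop for equal-length 1-D signals x (input)
    and d (desired).  The per-bin power recursion is a simplified stand-in
    for the full normalisation matrix inverse."""
    M = 2 * N
    H = np.zeros((K, M), dtype=complex)       # frequency-domain filter partitions
    X = np.zeros((K, M), dtype=complex)       # delay line of input-block spectra
    x_prev = np.zeros(N)
    power = np.full(M, eps)
    num_blocks = len(x) // N
    e_out = np.zeros(num_blocks * N)

    for b in range(num_blocks):
        x_cur = x[b * N:(b + 1) * N]
        X = np.roll(X, 1, axis=0)             # shift the block delay line
        X[0] = np.fft.fft(np.concatenate([x_prev, x_cur]))
        x_prev = x_cur

        # Filter output (overlap-save): keep the last N samples, which is the
        # time-domain effect of the G1 window.
        y = np.fft.ifft((X * H).sum(axis=0)).real[N:]
        e = d[b * N:(b + 1) * N] - y
        e_out[b * N:(b + 1) * N] = e

        # Error spectrum with N leading zeros, as in the definition of e(l).
        E = np.fft.fft(np.concatenate([np.zeros(N), e]))

        # Running per-bin power of the newest block: a diagonal stand-in
        # for the X^H X normalisation.
        power = beta * power + (1.0 - beta) * np.abs(X[0]) ** 2

        # Normalised update of each partition, with the G2 gradient
        # constraint applied in the time domain (zero the last N samples).
        for k in range(K):
            grad = np.fft.ifft(np.conj(X[k]) * E / (power + eps)).real
            grad[N:] = 0.0
            H[k] += mu * np.fft.fft(grad)

    return e_out, H
```
Setting K = 1 (block size equal to the filter length) corresponds to the FLMS case mentioned in the introduction.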
It is worth noting that, while the algorithm is more easily expressed in matrix form, the actual implementation requires no matrix multiplications. For instance the normalisation matrix computation formula_17 reduces to an element-wise vector multiplication because formula_18 is block-diagonal. The same goes for other multiplications. | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "K"
},
{
"math_id": 2,
"text": "\\mathbf{F}"
},
{
"math_id": 3,
"text": "\\underline{\\mathbf{e}}(\\ell) = \\mathbf{F}\\left[ \\mathbf{0}_{1xN}, e(\\ell N),\\dots,e(\\ell N-N-1) \\right]^T"
},
{
"math_id": 4,
"text": "\\underline{\\mathbf{x}}_k(\\ell) = \\mathrm{diag} \\left\\{ \\mathbf{F}\\left[ x((\\ell -k+1) N),\\dots,x((\\ell -k-1) N-1) \\right]^T \\right\\}"
},
{
"math_id": 5,
"text": "\\underline{\\mathbf{X}}(\\ell) = \\left[ \\underline{\\mathbf{x}}_0(\\ell), \\underline{\\mathbf{x}}_1(\\ell), \\dots, \\underline{\\mathbf{x}}_{K-1}(\\ell) \\right]"
},
{
"math_id": 6,
"text": "\\underline{\\mathbf{d}}(\\ell) = \\mathbf{F}\\left[ \\mathbf{0}_{1xN}, d(\\ell N),\\dots,d(\\ell N-N-1) \\right]^T"
},
{
"math_id": 7,
"text": "\\mathbf{G}_1"
},
{
"math_id": 8,
"text": "\\mathbf{G}_2"
},
{
"math_id": 9,
"text": "\\mathbf{G}_1 = \\mathbf{F}\\begin{bmatrix}\n\\mathbf{0}_{N\\times N} & \\mathbf{0}_{N\\times N} \\\\\n\\mathbf{0}_{N\\times N} & \\mathbf{I}_{N\\times N} \\\\\n\\end{bmatrix}\\mathbf{F}^H"
},
{
"math_id": 10,
"text": "\\tilde{\\mathbf{G}}_2 = \\mathbf{F}\\begin{bmatrix}\n\\mathbf{I}_{N\\times N} & \\mathbf{0}_{N\\times N} \\\\\n\\mathbf{0}_{N\\times N} & \\mathbf{0}_{N\\times N} \\\\\n\\end{bmatrix}\\mathbf{F}^H"
},
{
"math_id": 11,
"text": "\\mathbf{G}_2 = \\operatorname{diag} \\left\\{ \\tilde{\\mathbf{G}}_2, \\tilde{\\mathbf{G}}_2, \\dots, \\tilde{\\mathbf{G}}_2 \\right\\} "
},
{
"math_id": 12,
"text": "\\mathbf{x}"
},
{
"math_id": 13,
"text": " \\underline{\\hat{\\mathbf{y}}}(\\ell) = \\mathbf{G}_1 \\underline{\\mathbf{X}}(\\ell) \\underline{\\hat{\\mathbf{h}}}(\\ell-1) "
},
{
"math_id": 14,
"text": " \\underline{\\mathbf{e}}(\\ell) = \\underline{\\mathbf{d}}(\\ell) - \\underline{\\hat{\\mathbf{y}}}(\\ell) "
},
{
"math_id": 15,
"text": "\\mathbf{\\Phi}_\\mathbf{xx}(\\ell) = \\underline{\\mathbf{X}}^H(\\ell)\\underline{\\mathbf{X}}(\\ell)"
},
{
"math_id": 16,
"text": " \\underline{\\hat{\\mathbf{h}}}(\\ell) = \\underline{\\hat{\\mathbf{h}}}(\\ell-1) + \\mu\\mathbf{G}_2\\mathbf{\\Phi}_\\mathbf{xx}^{-1}(\\ell) \\underline{\\mathbf{X}}^H(\\ell) \\underline{\\mathbf{e}}(\\ell) "
},
{
"math_id": 17,
"text": "\\mathbf{\\Phi}_\\mathbf{xx} = \\underline{\\mathbf{X}}^H(\\ell)\\underline{\\mathbf{X}}(\\ell)"
},
{
"math_id": 18,
"text": "\\underline{\\mathbf{X}}(\\ell)"
}
] | https://en.wikipedia.org/wiki?curid=9335905 |
9337483 | Charles Loewner | American mathematician (1893–1968)
Charles Loewner (29 May 1893 – 8 January 1968) was an American mathematician. His name was Karel Löwner in Czech and Karl Löwner in German.
Karl Loewner was born into a Jewish family in Lany, about 30 km from Prague, where his father Sigmund Löwner was a store owner.
Loewner received his Ph.D. from the University of Prague in 1917 under supervision of Georg Pick.
One of his central mathematical contributions is the proof of the Bieberbach conjecture in the first highly nontrivial case of the third coefficient. The technique he introduced, the Loewner differential equation, has had far-reaching implications in geometric function theory; it was used in the final solution of the Bieberbach conjecture by Louis de Branges in 1985. Loewner worked at the University of Berlin, University of Prague, University of Louisville, Brown University, Syracuse University and eventually at Stanford University. His students include Lipman Bers, Roger Horn, Adriano Garsia, and P. M. Pu.
Loewner's torus inequality.
In 1949 Loewner proved his torus inequality, to the effect that every metric on the 2-torus satisfies the optimal inequality
formula_0
where sys is its systole. The boundary case of equality is attained if and only if the metric is flat and homothetic to the so-called "equilateral torus", i.e. torus whose group of deck transformations is precisely the hexagonal lattice spanned by the cube roots of unity in formula_1.
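A quick numerical check of the boundary case, assuming the flat equilateral torus formula_1/L with L the hexagonal lattice spanned by 1 and exp(iπ/3); the finite search range over lattice vectors in this Python snippet is an illustrative shortcut.
```python
import numpy as np

tau = np.exp(1j * np.pi / 3)                 # second lattice generator (first is 1)
area = abs(np.imag(np.conj(1.0) * tau))      # area of the fundamental domain = Im(tau)

# The systole is the length of a shortest nonzero lattice vector.
vecs = [m + n * tau for m in range(-3, 4) for n in range(-3, 4) if (m, n) != (0, 0)]
sys_length = min(abs(v) for v in vecs)

print(sys_length ** 2 / area)   # ~1.1547
print(2 / np.sqrt(3))           # ~1.1547, so sys^2 = (2/sqrt(3)) * area for this torus
```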
Loewner matrix theorem.
The Loewner matrix (in linear algebra) is a square matrix or, more specifically, a linear operator (of real formula_2 functions) associated with two input parameters: (1) a real continuously differentiable function on a subinterval of the real numbers and (2) an formula_3-dimensional vector with elements chosen from that subinterval. The output is an formula_4 matrix.
Let formula_5 be a real-valued function that is continuously differentiable on the open interval formula_6.
For any formula_7 define the divided difference of formula_5 at formula_8 as
formula_9.
Given formula_10, the Loewner matrix formula_11 associated with formula_5 for formula_12 is defined as the formula_4 matrix whose formula_13-entry is formula_14.
In his fundamental 1934 paper, Loewner proved that for each positive integer formula_3, formula_5 is formula_3-monotone on formula_6 if and only if formula_11 is positive semidefinite for any choice of formula_15. Most significantly, using this equivalence, he proved that formula_5 is formula_3-monotone on formula_6 for all formula_3 if and only if formula_5 is real analytic with an analytic continuation to the upper half plane that has a positive imaginary part there. See "Operator monotone function".
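A small Python sketch of this criterion, for illustration only: it builds the Loewner matrix from the divided differences above and tests positive semidefiniteness numerically. The test functions (the operator-monotone square root versus the cube, which is monotone in the ordinary sense but whose Loewner matrices fail to be positive semidefinite) and the random sample points are arbitrary choices.
```python
import numpy as np

def loewner_matrix(f, fprime, t):
    """Loewner matrix L_f(t_1,...,t_n): entry (i, j) is the divided difference
    f^[1](t_i, t_j), with f'(t_i) on the diagonal."""
    t = np.asarray(t, dtype=float)
    n = len(t)
    L = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            L[i, j] = fprime(t[i]) if t[i] == t[j] else (f(t[i]) - f(t[j])) / (t[i] - t[j])
    return L

def is_psd(A, tol=1e-10):
    return np.min(np.linalg.eigvalsh(A)) >= -tol

rng = np.random.default_rng(0)
pts = rng.uniform(0.1, 10.0, size=6)

# sqrt is operator monotone on (0, oo): its Loewner matrices are PSD.
L_sqrt = loewner_matrix(np.sqrt, lambda s: 0.5 / np.sqrt(s), pts)
print(is_psd(L_sqrt))    # True (up to rounding)

# t -> t^3 is increasing but not matrix monotone: for distinct points its
# Loewner matrices are not PSD.
L_cube = loewner_matrix(lambda s: s ** 3, lambda s: 3 * s ** 2, pts)
print(is_psd(L_cube))    # False
```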
Continuous groups.
"During [Loewner's] 1955 visit to Berkeley he gave a course on continuous groups, and his lectures were reproduced in the form of duplicated notes. Loewner planned to write a detailed book on continuous groups based on these lecture notes, but the project was still in the formative stage at the time of his death." Harley Flanders and Murray H. Protter "decided to revise and correct the original lecture notes and make them available in permanent form." "Charles Loewner: Theory of Continuous Groups" (1971) was published by The MIT Press, and re-issued in 2008.
In Loewner's terminology, if formula_16 and a group action is performed on formula_17, then formula_18 is called a "quantity" (page 10). The distinction is made between an abstract group formula_19 and a realization of formula_19 in terms of linear transformations that yield a group representation. These linear transformations are Jacobians denoted formula_20 (page 41). The term "invariant density" is used for the Haar measure, which Loewner attributes to Adolph Hurwitz (page 46). Loewner proves that compact groups have equal left and right invariant densities (page 48).
A reviewer said, "The reader is helped by illuminating examples and comments on relations with analysis and geometry."
| [
{
"math_id": 0,
"text": " \\operatorname{sys}^2 \\leq \\frac{2}{\\sqrt{3}} \\operatorname{area} (\\mathbb T^2),"
},
{
"math_id": 1,
"text": "\\mathbb C"
},
{
"math_id": 2,
"text": "C^1"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "n \\times n"
},
{
"math_id": 5,
"text": "f"
},
{
"math_id": 6,
"text": "(a,b)"
},
{
"math_id": 7,
"text": "s, t \\in (a, b)"
},
{
"math_id": 8,
"text": "s, t"
},
{
"math_id": 9,
"text": "f^{[1]}(s,t) = \n\\begin{cases} \\displaystyle \n \\frac{f(s)-f(t)}{s-t}, & \\text{if } s \\neq t \\\\\n f'(s), & \\text{if } s = t\n\\end{cases}"
},
{
"math_id": 10,
"text": "t_1, \\ldots, t_n \\in (a,b)"
},
{
"math_id": 11,
"text": "L_f (t_1, \\ldots, t_n)"
},
{
"math_id": 12,
"text": "(t_1,\\ldots,t_n)"
},
{
"math_id": 13,
"text": "(i,j)"
},
{
"math_id": 14,
"text": "f^{[1]}(t_i,t_j)"
},
{
"math_id": 15,
"text": "t_1,\\ldots,t_n \\in (a,b)"
},
{
"math_id": 16,
"text": "x\\in S"
},
{
"math_id": 17,
"text": "S"
},
{
"math_id": 18,
"text": "x"
},
{
"math_id": 19,
"text": "\\mathfrak{g},"
},
{
"math_id": 20,
"text": "J(\\overset{u}{v})"
}
] | https://en.wikipedia.org/wiki?curid=9337483 |
933946 | Almost flat manifold | In mathematics, a smooth compact manifold "M" is called almost flat if for any formula_0 there is a Riemannian metric formula_1 on "M" such that formula_2 and
formula_3 is formula_4-flat, i.e. for the sectional curvature of formula_5 we have formula_6.
Given "n", there is a positive number formula_7 such that if an "n"-dimensional manifold admits an formula_8-flat metric with diameter formula_9 then it is almost flat. On the other hand, one can fix the bound of sectional curvature and get the diameter going to zero, so the almost-flat manifold is a special case of a collapsing manifold, which is collapsing along all directions.
According to the Gromov–Ruh theorem, "M" is almost flat if and only if it is infranil. In particular, it is a finite factor of a nilmanifold, which is the total space of a principal torus bundle over a principal torus bundle over a torus. | [
{
"math_id": 0,
"text": "\\varepsilon>0 "
},
{
"math_id": 1,
"text": "g_\\varepsilon "
},
{
"math_id": 2,
"text": " \\mbox{diam}(M,g_\\varepsilon)\\le 1 "
},
{
"math_id": 3,
"text": " g_\\varepsilon "
},
{
"math_id": 4,
"text": "\\varepsilon"
},
{
"math_id": 5,
"text": " K_{g_\\varepsilon} "
},
{
"math_id": 6,
"text": " |K_{g_\\epsilon}| < \\varepsilon"
},
{
"math_id": 7,
"text": "\\varepsilon_n>0 "
},
{
"math_id": 8,
"text": "\\varepsilon_n"
},
{
"math_id": 9,
"text": "\\le 1 "
}
] | https://en.wikipedia.org/wiki?curid=933946 |
9342843 | Fujiki class C | In algebraic geometry, a complex manifold is called Fujiki class formula_0 if it is bimeromorphic to a compact Kähler manifold. This notion was defined by Akira Fujiki.
Properties.
Let "M" be a compact manifold of Fujiki class formula_0, and
formula_1 its complex subvariety. Then "X"
is also in Fujiki class formula_0 (Lemma 4.6). Moreover, the Douady space of "X" (that is, the moduli of deformations of a subvariety formula_1, "M" fixed) is compact and in Fujiki class formula_0.
Fujiki class formula_0 manifolds are examples of compact complex manifolds which are not necessarily Kähler, but for which the formula_2-lemma holds.
Conjectures.
J.-P. Demailly and M. Pǎun have
shown that a manifold is in Fujiki class formula_0 if and only
if it supports a Kähler current.
They also conjectured that a manifold "M" is in Fujiki class formula_0 if it admits a nef current which is "big", that is, satisfies
formula_3
For a cohomology class formula_4 which is rational, this statement is known: by the Grauert–Riemenschneider conjecture, a holomorphic line bundle "L" with first Chern class
formula_5
nef and big has maximal Kodaira dimension, hence the corresponding rational map to
formula_6
is generically finite onto its image, which is algebraic, and therefore Kähler.
Fujiki and Ueno asked whether the property formula_0 is stable under deformations. This conjecture was disproven in 1992 by Y.-S. Poon and Claude LeBrun. | [
{
"math_id": 0,
"text": "\\mathcal{C}"
},
{
"math_id": 1,
"text": "X\\subset M"
},
{
"math_id": 2,
"text": "\\partial \\bar \\partial"
},
{
"math_id": 3,
"text": "\\int_M \\omega^{{dim_{\\mathbb C} M}}>0."
},
{
"math_id": 4,
"text": "[\\omega]\\in H^2(M)"
},
{
"math_id": 5,
"text": "c_1(L)=[\\omega]"
},
{
"math_id": 6,
"text": "{\\mathbb P} H^0(L^N)"
}
] | https://en.wikipedia.org/wiki?curid=9342843 |
934407 | Force of infection | Rate at which susceptible individuals acquire an infectious disease
In epidemiology, force of infection (denoted formula_0) is the rate at which susceptible individuals acquire an infectious disease. Because it takes account of susceptibility it can be used to compare the rate of transmission between different groups of the population for the same infectious disease, or even between different infectious diseases. That is to say, formula_0 is directly proportional to formula_1; the effective transmission rate.
formula_2
Such a calculation is difficult because not all new infections are reported, and it is often difficult to know how many susceptibles were exposed. However, formula_0 can be calculated for an infectious disease in an endemic state if homogeneous mixing of the population and a rectangular population distribution (such as that generally found in developed countries), rather than a pyramid, is assumed. In this case, formula_0 is given by:
formula_3
where formula_4 is the average age of infection. In other words, formula_4 is the average time spent in the susceptible group before becoming infected. The rate of becoming infected (formula_0) is therefore formula_5 (since rate is 1/time). The advantage of this method of calculating formula_0 is that data on the average age of infection is very easily obtainable, even if not all cases of the disease are reported.
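A short worked example in Python, with all numbers invented for illustration: an endemic infection with an average age of infection of 5 years, and a hypothetical cohort for the direct definition.
```python
# Force of infection from the average age of infection, lambda = 1/A.
A = 5.0                        # average age of infection, in years (illustrative)
force_of_infection = 1.0 / A   # = 0.2 per year

# The direct definition, usable when exposure data are available
# (hypothetical numbers): 150 new infections among 1000 susceptibles
# followed for 0.5 years.
new_infections = 150
susceptibles_exposed = 1000
average_duration = 0.5         # years
lam = new_infections / (susceptibles_exposed * average_duration)  # = 0.3 per year

print(force_of_infection, lam)
```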
| [
{
"math_id": 0,
"text": "\\lambda"
},
{
"math_id": 1,
"text": "\\beta"
},
{
"math_id": 2,
"text": "\n\\lambda = \\frac {\\mbox{number of new infections}} {\\mbox{number of susceptible persons exposed} \\times \\mbox{average duration of exposure}}\n"
},
{
"math_id": 3,
"text": "\n\\lambda = \\frac {1} {A}\n"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "1/A"
}
] | https://en.wikipedia.org/wiki?curid=934407 |
93459 | Karl Weierstrass | German mathematician (1815–1897)
Karl Theodor Wilhelm Weierstrass ( ; 31 October 1815 – 19 February 1897) was a German mathematician often cited as the "father of modern analysis". Despite leaving university without a degree, he studied mathematics and trained as a school teacher, eventually teaching mathematics, physics, botany and gymnastics. He later received an honorary doctorate and became professor of mathematics in Berlin.
Among many other contributions, Weierstrass formalized the definition of the continuity of a function and complex analysis, proved the intermediate value theorem and the Bolzano–Weierstrass theorem, and used the latter to study the properties of continuous functions on closed bounded intervals.
Biography.
Weierstrass was born into a Roman Catholic family in Ostenfelde, a village near Ennigerloh, in the Province of Westphalia.
Weierstrass was the son of Wilhelm Weierstrass, a government official, and Theodora Vonderforst both of whom were Catholic Rhinelanders. His interest in mathematics began while he was a gymnasium student at the Theodorianum in Paderborn. He was sent to the University of Bonn upon graduation to prepare for a government position. Because his studies were to be in the fields of law, economics, and finance, he was immediately in conflict with his hopes to study mathematics. He resolved the conflict by paying little heed to his planned course of study but continuing private study in mathematics. The outcome was that he left the university without a degree. He then studied mathematics at the Münster Academy (which was even then famous for mathematics) and his father was able to obtain a place for him in a teacher training school in Münster. Later he was certified as a teacher in that city. During this period of study, Weierstrass attended the lectures of Christoph Gudermann and became interested in elliptic functions.
In 1843 he taught in Deutsch Krone in West Prussia and from 1848 he taught at the Lyceum Hosianum in Braunsberg. Besides mathematics he also taught physics, botany, and gymnastics.
Weierstrass may have had an illegitimate child named Franz with the widow of his friend Carl Wilhelm Borchardt.
After 1850 Weierstrass suffered from a long period of illness, but was able to publish mathematical articles that brought him fame and distinction. The University of Königsberg conferred an honorary doctor's degree on him on 31 March 1854. In 1856 he took a chair at the "Gewerbeinstitut" in Berlin (an institute to educate technical workers which would later merge with the "Bauakademie" to form the Technische Hochschule in Charlottenburg; now Technische Universität Berlin). In 1864 he became professor at the Friedrich-Wilhelms-Universität Berlin, which later became the Humboldt Universität zu Berlin.
In 1870, at the age of fifty-five, Weierstrass met Sofia Kovalevsky whom he tutored privately after failing to secure her admission to the university. They had a fruitful intellectual, and kindly personal relationship that "far transcended the usual teacher-student relationship". He mentored her for four years, and regarded her as his best student, helping to secure a doctorate for her from Heidelberg University without the need for an oral thesis defense. He was immobile for the last three years of his life, and died in Berlin from pneumonia.
From 1870 until her death in 1891, Kovalevsky corresponded with Weierstrass. Upon learning of her death, he burned her letters. About 150 of his letters to her have been preserved. Professor Reinhard Bölling discovered the draft of the letter she wrote to Weierstrass when she arrived in Stockholm in 1883 upon her appointment as "Privatdocent" at Stockholm University.
Mathematical contributions.
Soundness of calculus.
Weierstrass was interested in the soundness of calculus, and at the time there were somewhat ambiguous definitions of the foundations of calculus so that important theorems could not be proven with sufficient rigour. Although Bolzano had developed a reasonably rigorous definition of a limit as early as 1817 (and possibly even earlier) his work remained unknown to most of the mathematical community until years later,
and many mathematicians had only vague definitions of limits and continuity of functions.
The basic idea behind Delta-epsilon proofs is, arguably, first found in the works of Cauchy in the 1820s.
Cauchy did not clearly distinguish between continuity and uniform continuity on an interval. Notably, in his 1821 "Cours d'analyse," Cauchy argued that the (pointwise) limit of (pointwise) continuous functions was itself (pointwise) continuous, a statement that is false in general. The correct statement is rather that the "uniform" limit of continuous functions is continuous (also, the uniform limit of uniformly continuous functions is uniformly continuous).
This required the concept of uniform convergence, which was first observed by Weierstrass's advisor, Christoph Gudermann, in an 1838 paper, where Gudermann noted the phenomenon but did not define it or elaborate on it. Weierstrass saw the importance of the concept, and both formalized it and applied it widely throughout the foundations of calculus.
The formal definition of continuity of a function, as formulated by Weierstrass, is as follows:
formula_0 is continuous at formula_1 if formula_2 such that for every formula_3 in the domain of formula_4, formula_5 In simple English, formula_0 is continuous at a point formula_1 if for each formula_3 close enough to formula_6, the function value formula_7 is very close to formula_8, where the "close enough" restriction typically depends on the desired closeness of formula_8 to formula_9
Using this definition, he proved the Intermediate Value Theorem. He also proved the Bolzano–Weierstrass theorem and used it to study the properties of continuous functions on closed and bounded intervals.
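The definition can be made concrete with a small numerical illustration. The Python snippet below uses the specific function f(x) = x^2 at x0 = 2 and, for each ε, exhibits a δ satisfying Weierstrass's condition; the choice of function, point and δ-formula are assumptions made only for this example.
```python
# Epsilon-delta continuity check for f(x) = x^2 at x0 = 2 (illustrative only).
def f(x):
    return x * x

def delta_for(epsilon, x0=2.0):
    # Near x0, |f(x) - f(x0)| = |x - x0| * |x + x0|.  Restricting to
    # |x - x0| < 1 gives |x + x0| < 2*|x0| + 1, so this delta works:
    return min(1.0, epsilon / (2 * abs(x0) + 1))

x0 = 2.0
for epsilon in (1.0, 0.1, 0.001):
    delta = delta_for(epsilon, x0)
    # Spot-check the implication |x - x0| < delta  =>  |f(x) - f(x0)| < epsilon.
    xs = [x0 + delta * t for t in (-0.999, -0.5, 0.0, 0.5, 0.999)]
    assert all(abs(f(x) - f(x0)) < epsilon for x in xs)
    print(epsilon, delta)
```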
Calculus of variations.
Weierstrass also made advances in the field of calculus of variations. Using the apparatus of analysis that he helped to develop, Weierstrass was able to give a complete reformulation of the theory that paved the way for the modern study of the calculus of variations. Among several axioms, Weierstrass established a necessary condition for the existence of strong extrema of variational problems. He also helped devise the Weierstrass–Erdmann condition, which gives sufficient conditions for an extremal to have a corner along a given extremum and allows one to find a minimizing curve for a given integral.
Honours and awards.
The lunar crater Weierstrass and the asteroid 14100 Weierstrass are named after him. Also, there is the Weierstrass Institute for Applied Analysis and Stochastics in Berlin.
| [
{
"math_id": 0,
"text": "\\displaystyle f(x)"
},
{
"math_id": 1,
"text": "\\displaystyle x = x_0"
},
{
"math_id": 2,
"text": " \\displaystyle \\forall \\ \\varepsilon > 0\\ \\exists\\ \\delta > 0"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": " \\displaystyle \\ |x-x_0| < \\delta \\Rightarrow |f(x) - f(x_0)| < \\varepsilon."
},
{
"math_id": 6,
"text": "x_0"
},
{
"math_id": 7,
"text": "f(x)"
},
{
"math_id": 8,
"text": "f(x_0)"
},
{
"math_id": 9,
"text": "f(x)."
}
] | https://en.wikipedia.org/wiki?curid=93459 |
934711 | Hotelling's T-squared distribution | Type of probability distribution
In statistics, particularly in hypothesis testing, the Hotelling's "T"-squared distribution ("T"2), proposed by Harold Hotelling, is a multivariate probability distribution that is tightly related to the "F"-distribution and is most notable for arising as the distribution of a set of sample statistics that are natural generalizations of the statistics underlying the Student's "t"-distribution.
The Hotelling's "t"-squared statistic ("t"2) is a generalization of Student's "t"-statistic that is used in multivariate hypothesis testing.
Motivation.
The distribution arises in multivariate statistics in undertaking tests of the differences between the (multivariate) means of different populations, where tests for univariate problems would make use of a "t"-test.
The distribution is named for Harold Hotelling, who developed it as a generalization of Student's "t"-distribution.
Definition.
If the vector formula_0 is Gaussian multivariate-distributed with zero mean and unit covariance matrix formula_1 and formula_2 is a formula_3 random matrix with a Wishart distribution formula_4 with unit scale matrix and "m" degrees of freedom, and "d" and "M" are independent of each other, then the quadratic form formula_5 has a Hotelling distribution (with parameters formula_6 and formula_7):
formula_8
It can be shown that if a random variable "X" has Hotelling's "T"-squared distribution, formula_9, then:
formula_10
where formula_11 is the "F"-distribution with parameters "p" and "m" − "p" + 1.
Hotelling "t"-squared statistic.
Let formula_12 be the sample covariance:
formula_13
where we denote transpose by an apostrophe. It can be shown that formula_12 is a positive (semi) definite matrix and formula_14 follows a "p"-variate Wishart distribution with "n" − 1 degrees of freedom.
The sample covariance matrix of the mean reads formula_15.
The Hotelling's "t"-squared statistic is then defined as:
formula_16
which is proportional to the Mahalanobis distance between the sample mean and formula_17. Because of this, one should expect the statistic to assume low values if formula_18, and high values if they are different.
From the distribution,
formula_19
where formula_20 is the "F"-distribution with parameters "p" and "n" − "p".
In order to calculate a "p"-value (unrelated to "p" variable here), note that the distribution of formula_21 equivalently implies that
formula_22
Then, use the quantity on the left hand side to evaluate the "p"-value corresponding to the sample, which comes from the "F"-distribution. A confidence region may also be determined using similar logic.
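A sketch of this calculation in Python, using simulated data; the sample size, dimension, hypothesised mean and the use of SciPy's F-distribution for the tail probability are illustrative choices, not part of the definition.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 50, 3
mu0 = np.zeros(p)                         # hypothesised mean
x = rng.multivariate_normal(mean=[0.2, 0.0, -0.1], cov=np.eye(p), size=n)

xbar = x.mean(axis=0)
S = np.cov(x, rowvar=False)               # unbiased sample covariance (divides by n-1)
diff = xbar - mu0
t2 = n * diff @ np.linalg.solve(S, diff)  # Hotelling's t-squared statistic

# Convert to an F statistic: (n-p)/(p(n-1)) * t^2 ~ F(p, n-p) under H0.
f_stat = (n - p) / (p * (n - 1)) * t2
p_value = stats.f.sf(f_stat, p, n - p)
print(t2, f_stat, p_value)
```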
Motivation.
Let formula_23 denote a "p"-variate normal distribution with location formula_17 and known covariance formula_24. Let
formula_25
be "n" independent identically distributed (iid) random variables, which may be represented as formula_26 column vectors of real numbers. Define
formula_27
to be the sample mean with covariance formula_28. It can be shown that
formula_29
where formula_30 is the chi-squared distribution with "p" degrees of freedom.
Two-sample statistic.
If formula_31 and formula_32, with the samples independently drawn from two independent multivariate normal distributions with the same mean and covariance, and we define
formula_33
as the sample means, and
formula_34
formula_35
as the respective sample covariance matrices. Then
formula_36
is the unbiased pooled covariance matrix estimate (an extension of pooled variance).
Finally, the Hotelling's two-sample "t"-squared statistic is
formula_37
Related concepts.
It can be related to the F-distribution by
formula_38
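The two-sample statistic and its F-based p-value can be computed along the same lines. The Python sketch below uses simulated data; the function name, sample sizes and means are assumptions for illustration, and it presumes the equal-covariance setting described above.
```python
import numpy as np
from scipy import stats

def hotelling_two_sample(x, y):
    """Two-sample t-squared with the pooled covariance estimate, plus the
    F-based p-value.  Assumes both samples share the same covariance."""
    nx, p = x.shape
    ny, _ = y.shape
    dbar = x.mean(axis=0) - y.mean(axis=0)
    S_pooled = ((nx - 1) * np.cov(x, rowvar=False) +
                (ny - 1) * np.cov(y, rowvar=False)) / (nx + ny - 2)
    t2 = (nx * ny) / (nx + ny) * dbar @ np.linalg.solve(S_pooled, dbar)
    f_stat = (nx + ny - p - 1) / ((nx + ny - 2) * p) * t2
    p_value = stats.f.sf(f_stat, p, nx + ny - 1 - p)
    return t2, f_stat, p_value

rng = np.random.default_rng(2)
x = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=40)
y = rng.multivariate_normal([0.5, 0.0], np.eye(2), size=35)
print(hotelling_two_sample(x, y))
```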
The non-null distribution of this statistic is the noncentral F-distribution (the ratio of a non-central Chi-squared random variable and an independent central Chi-squared random variable)
formula_39
with
formula_40
where formula_41 is the difference vector between the population means.
In the two-variable case, the formula simplifies nicely allowing appreciation of how the correlation, formula_42,
between the variables affects formula_21. If we define
formula_43
and
formula_44
then
formula_45
Thus, if the differences in the two rows of the vector formula_46 are of the same sign, in general, formula_21 becomes smaller as formula_42 becomes more positive. If the differences are of opposite sign formula_21 becomes larger as formula_42 becomes more positive.
A univariate special case can be found in Welch's t-test.
More robust and powerful tests than Hotelling's two-sample test have been proposed in the literature, see for example the interpoint distance based tests which can be applied also when the number of variables is comparable with, or even larger than, the number of subjects.
| [
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "N(\\mathbf{0}_{p}, \\mathbf{I}_{p, p})"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "p \\times p"
},
{
"math_id": 4,
"text": "W(\\mathbf{I}_{p, p}, m)"
},
{
"math_id": 5,
"text": "X"
},
{
"math_id": 6,
"text": "p"
},
{
"math_id": 7,
"text": "m"
},
{
"math_id": 8,
"text": "X = m d^T M^{-1} d \\sim T^2(p, m)."
},
{
"math_id": 9,
"text": "X \\sim T^2_{p,m}"
},
{
"math_id": 10,
"text": "\n\\frac{m-p+1}{pm} X\\sim F_{p,m-p+1}\n"
},
{
"math_id": 11,
"text": "F_{p,m-p+1}"
},
{
"math_id": 12,
"text": "\\hat{\\mathbf \\Sigma}"
},
{
"math_id": 13,
"text": " \\hat{\\mathbf \\Sigma} = \\frac 1 {n-1} \\sum_{i=1}^n (\\mathbf{x}_i -\\overline{\\mathbf{x}}) (\\mathbf{x}_i-\\overline{\\mathbf{x}})' "
},
{
"math_id": 14,
"text": "(n-1)\\hat{\\mathbf \\Sigma}"
},
{
"math_id": 15,
"text": "\\hat{\\mathbf \\Sigma}_\\overline{\\mathbf x}=\\hat{\\mathbf \\Sigma}/n"
},
{
"math_id": 16,
"text": "\nt^2=(\\overline{\\mathbf x}-\\boldsymbol{\\mu})'\\hat{\\mathbf \\Sigma}_\\overline{\\mathbf x}^{-1} (\\overline{\\mathbf x}-\\boldsymbol{\\mathbf\\mu})=n(\\overline{\\mathbf x}-\\boldsymbol{\\mu})'\\hat{\\mathbf \\Sigma}^{-1} (\\overline{\\mathbf x}-\\boldsymbol{\\mathbf\\mu}),\n"
},
{
"math_id": 17,
"text": "\\boldsymbol{\\mu}"
},
{
"math_id": 18,
"text": "\\overline{\\mathbf x} \\approx \\boldsymbol{\\mu}"
},
{
"math_id": 19,
"text": "t^2 \\sim T^2_{p,n-1}=\\frac{p(n-1)}{n-p} F_{p,n-p} ,"
},
{
"math_id": 20,
"text": "F_{p,n-p}"
},
{
"math_id": 21,
"text": "t^2"
},
{
"math_id": 22,
"text": " \\frac{n-p} {p(n-1)} t^2 \\sim F_{p,n-p} ."
},
{
"math_id": 23,
"text": "\\mathcal{N}_p(\\boldsymbol{\\mu},{\\mathbf \\Sigma})"
},
{
"math_id": 24,
"text": "{\\mathbf \\Sigma}"
},
{
"math_id": 25,
"text": "{\\mathbf x}_1,\\dots,{\\mathbf x}_n\\sim \\mathcal{N}_p(\\boldsymbol{\\mu},{\\mathbf \\Sigma})"
},
{
"math_id": 26,
"text": "p\\times1"
},
{
"math_id": 27,
"text": "\\overline{\\mathbf x}=\\frac{\\mathbf{x}_1+\\cdots+\\mathbf{x}_n}{n}"
},
{
"math_id": 28,
"text": "{\\mathbf \\Sigma}_\\overline{\\mathbf x}={\\mathbf \\Sigma}/ n"
},
{
"math_id": 29,
"text": "(\\overline{\\mathbf x}-\\boldsymbol{\\mu})'{\\mathbf \\Sigma}_\\overline{\\mathbf x}^{-1}(\\overline{\\mathbf x}-\\boldsymbol{\\mathbf\\mu})\\sim\\chi^2_p ,"
},
{
"math_id": 30,
"text": "\\chi^2_p"
},
{
"math_id": 31,
"text": "{\\mathbf x}_1,\\dots,{\\mathbf x}_{n_x}\\sim N_p(\\boldsymbol{\\mu},{\\mathbf \\Sigma})"
},
{
"math_id": 32,
"text": "{\\mathbf y}_1,\\dots,{\\mathbf y}_{n_y}\\sim N_p(\\boldsymbol{\\mu},{\\mathbf \\Sigma})"
},
{
"math_id": 33,
"text": "\\overline{\\mathbf x}=\\frac{1}{n_x}\\sum_{i=1}^{n_x} \\mathbf{x}_i \\qquad \\overline{\\mathbf y}=\\frac{1}{n_y}\\sum_{i=1}^{n_y} \\mathbf{y}_i"
},
{
"math_id": 34,
"text": "\\hat{\\mathbf \\Sigma}_{\\mathbf x}=\\frac{1}{n_x-1}\\sum_{i=1}^{n_{x}} (\\mathbf{x}_i-\\overline{\\mathbf x})(\\mathbf{x}_i-\\overline{\\mathbf x})'"
},
{
"math_id": 35,
"text": "\\hat{\\mathbf \\Sigma}_{\\mathbf y}=\\frac{1}{n_y-1}\\sum_{i=1}^{n_{y}} (\\mathbf{y}_i-\\overline{\\mathbf y})(\\mathbf{y}_i-\\overline{\\mathbf y})'"
},
{
"math_id": 36,
"text": "\\hat{\\mathbf \\Sigma}= \\frac{(n_x - 1) \\hat{\\mathbf \\Sigma}_{\\mathbf x} + (n_y - 1) \\hat{\\mathbf \\Sigma}_{\\mathbf y}}{n_x+n_y-2}"
},
{
"math_id": 37,
"text": "t^2 = \\frac{n_x n_y}{n_x+n_y}(\\overline{\\mathbf x}-\\overline{\\mathbf y})'\\hat{\\mathbf \\Sigma}^{-1}(\\overline{\\mathbf x}-\\overline{\\mathbf y})\n\\sim T^2(p, n_x+n_y-2)"
},
{
"math_id": 38,
"text": "\\frac{n_x+n_y-p-1}{(n_x+n_y-2)p}t^2 \\sim F(p,n_x+n_y-1-p)."
},
{
"math_id": 39,
"text": "\\frac{n_x+n_y-p-1}{(n_x+n_y-2)p}t^2 \\sim F(p,n_x+n_y-1-p;\\delta),"
},
{
"math_id": 40,
"text": "\\delta = \\frac{n_x n_y}{n_x+n_y}\\boldsymbol{d}'\\mathbf{\\Sigma}^{-1}\\boldsymbol{d},"
},
{
"math_id": 41,
"text": "\\boldsymbol{d}=\\mathbf{\\overline{x} - \\overline{y}}"
},
{
"math_id": 42,
"text": "\\rho"
},
{
"math_id": 43,
"text": "d_{1} = \\overline{x}_{1}-\\overline{y}_{1}, \\qquad d_{2} = \\overline{x}_{2}-\\overline{y}_{2}"
},
{
"math_id": 44,
"text": "s_1 = \\sqrt{\\Sigma_{11}} \\qquad s_2 = \\sqrt{\\Sigma_{22}} \\qquad \\rho = \\Sigma_{12}/(s_1 s_2) = \\Sigma_{21}/(s_1 s_2)"
},
{
"math_id": 45,
"text": "t^2 = \\frac{n_x n_y}{(n_x+n_y)(1-\\rho ^2)} \\left [ \\left ( \\frac{d_1}{s_1} \\right )^2+\\left ( \\frac{d_2}{s_2} \\right )^2-2\\rho \\left ( \\frac{d_1}{s_1} \\right )\\left ( \\frac{d_2}{s_2} \\right ) \\right ] "
},
{
"math_id": 46,
"text": "\\mathbf d = \\overline{\\mathbf x}-\\overline{\\mathbf y}"
}
] | https://en.wikipedia.org/wiki?curid=934711 |
9347993 | Specificity constant | Measure of enzyme efficiency
In the field of biochemistry, the specificity constant (also called kinetic efficiency or formula_0), is a measure of how efficiently an enzyme converts substrates into products. A comparison of specificity constants can also be used as a measure of the preference of an enzyme for different substrates (i.e., substrate specificity). The higher the specificity constant, the more the enzyme "prefers" that substrate.
The following equation, known as the Michaelis–Menten model, is used to describe the kinetics of enzymes:
<chem>E + S <=>[k_f][k_r] ES ->[k_{cat}] E + P</chem>
where E, S, ES, and P represent enzyme, substrate, enzyme–substrate complex, and product, respectively. The symbols formula_1, formula_2, and formula_3 denote the rate constants for the "forward" binding and "reverse" unbinding of substrate, and for the "catalytic" conversion of substrate into product, respectively.
The Michaelis constant in turn is defined as follows:
formula_4
The Michaelis constant is equal to the substrate concentration at which the enzyme converts substrates into products at half its maximal rate and hence is related to the affinity of the substrate for the enzyme. The catalytic constant (formula_5) is the rate of product formation when the enzyme is saturated with substrate and therefore reflects the enzyme's maximum rate. The rate of product formation is dependent on both how well the enzyme binds substrate and how fast the enzyme converts substrate into product once substrate is bound. For a kinetically perfect enzyme, every encounter between enzyme and substrate leads to product and hence the reaction velocity is only limited by the rate at which the enzyme encounters substrate in solution. Hence the upper limit for formula_0 is equal to the rate of substrate diffusion, which is between 10^8 and 10^9 s−1M−1.
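A small Python example comparing two substrates of a hypothetical enzyme; every rate constant below is invented for illustration, with formula_1 in M−1 s−1 and formula_2, formula_3 in s−1.
```python
def michaelis_constant(k_f, k_r, k_cat):
    """K_M = (k_r + k_cat) / k_f."""
    return (k_r + k_cat) / k_f

def specificity_constant(k_f, k_r, k_cat):
    """k_cat / K_M, in M^-1 s^-1 for the unit convention above."""
    return k_cat / michaelis_constant(k_f, k_r, k_cat)

# Substrate A: k_f = 1e7 M^-1 s^-1, k_r = 100 s^-1, k_cat = 50 s^-1
# Substrate B: k_f = 1e7 M^-1 s^-1, k_r = 500 s^-1, k_cat = 20 s^-1
eff_A = specificity_constant(1e7, 100.0, 50.0)   # ~3.3e6 M^-1 s^-1
eff_B = specificity_constant(1e7, 500.0, 20.0)   # ~3.8e5 M^-1 s^-1
print(eff_A, eff_B)   # the enzyme "prefers" substrate A
# Both values lie below the diffusion limit of ~1e8 to 1e9 M^-1 s^-1.
```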
| [
{
"math_id": 0,
"text": "k_{cat}/K_{M}"
},
{
"math_id": 1,
"text": "k_f"
},
{
"math_id": 2,
"text": "k_r"
},
{
"math_id": 3,
"text": "k_\\mathrm{cat}"
},
{
"math_id": 4,
"text": "K_{M} = \\frac{k_{r}+k_{cat}}{k_{f}}"
},
{
"math_id": 5,
"text": "k_{cat}"
}
] | https://en.wikipedia.org/wiki?curid=9347993 |
934837 | Sharpe ratio | Finance term
In finance, the Sharpe ratio (also known as the Sharpe index, the Sharpe measure, and the reward-to-variability ratio) measures the performance of an investment such as a security or portfolio compared to a risk-free asset, after adjusting for its risk. It is defined as the difference between the returns of the investment and the risk-free return, divided by the standard deviation of the investment returns. It represents the additional amount of return that an investor receives per unit of increase in risk.
It was named after William F. Sharpe, who developed it in 1966.
Definition.
Since its revision by the original author, William Sharpe, in 1994, the "ex-ante" Sharpe ratio is defined as:
formula_0
where formula_1 is the asset return, formula_2 is the risk-free return (such as a U.S. Treasury security). formula_3 is the expected value of the excess of the asset return over the benchmark return, and formula_4 is the standard deviation of the asset excess return. The t-statistic will equal the Sharpe Ratio times the square root of T (the number of returns used for the calculation).
The "ex-post" Sharpe ratio uses the same equation as the one above but with realized returns of the asset and benchmark rather than expected returns; see the second example below.
The information ratio is a generalization of the Sharpe ratio that uses as benchmark some other, typically risky index rather than using risk-free returns.
Use in finance.
The Sharpe ratio seeks to characterize how well the return of an asset compensates the investor for the risk taken. When comparing two assets, the one with a higher Sharpe ratio appears to provide better return for the same risk, which is usually attractive to investors.
However, financial assets are often not normally distributed, so that standard deviation does not capture all aspects of risk. Ponzi schemes, for example, will have a high empirical Sharpe ratio until they fail. Similarly, a fund that sells low-strike put options will have a high empirical Sharpe ratio until one of those puts is exercised, creating a large loss. In both cases, the empirical standard deviation before failure gives no real indication of the size of the risk being run.
Even in less extreme cases, a reliable empirical estimate of Sharpe ratio still requires the collection of return data over sufficient period for all aspects of the strategy returns to be observed. For example, data must be taken over decades if the algorithm sells an insurance that involves a high liability payout once every 5–10 years, and a high-frequency trading algorithm may only require a week of data if each trade occurs every 50 milliseconds, with care taken toward risk from unexpected but rare results that such testing did not capture (see flash crash).
Additionally, when examining the investment performance of assets with smoothing of returns (such as with-profits funds), the Sharpe ratio should be derived from the performance of the underlying assets rather than the fund returns (Such a model would invalidate the aforementioned Ponzi scheme, as desired).
Sharpe ratios, along with Treynor ratios and Jensen's alphas, are often used to rank the performance of portfolio or mutual fund managers. Berkshire Hathaway had a Sharpe ratio of 0.76 for the period 1976 to 2011, higher than any other stock or mutual fund with a history of more than 30 years. The stock market had a Sharpe ratio of 0.39 for the same period.
Tests.
Several statistical tests of the Sharpe ratio have been proposed. These include those proposed by Jobson & Korkie and Gibbons, Ross & Shanken.
History.
In 1952, Arthur D. Roy suggested maximizing the ratio "(m-d)/σ", where m is expected gross return, d is some "disaster level" (a.k.a., minimum acceptable return, or MAR) and σ is standard deviation of returns. This ratio is just the Sharpe ratio, only using minimum acceptable return instead of the risk-free rate in the numerator, and using standard deviation of returns instead of standard deviation of excess returns in the denominator. Roy's ratio is also related to the Sortino ratio, which also uses MAR in the numerator, but uses a different standard deviation (semi/downside deviation) in the denominator.
In 1966, William F. Sharpe developed what is now known as the Sharpe ratio. Sharpe originally called it the "reward-to-variability" ratio before it began being called the Sharpe ratio by later academics and financial operators. The definition was:
formula_5
Sharpe's 1994 revision acknowledged that the basis of comparison should be an applicable benchmark, which changes with time. After this revision, the definition is:
formula_6
Note, if formula_7 is a constant risk-free return throughout the period,
formula_8
The (original) Sharpe ratio has often been challenged with regard to its appropriateness as a fund performance measure during periods of declining markets.
Examples.
Example 1
Suppose the asset has an expected return of 15% in excess of the risk free rate. We typically do not know if the asset will have this return. We estimate the risk of the asset, defined as standard deviation of the asset's excess return, as 10%. The risk-free return is constant. Then the Sharpe ratio using the old definition is formula_9
Example 2
An investor has a portfolio with an expected return of 12% and a standard deviation of 10%. The rate of interest is 5%, and is risk-free.
The Sharpe ratio is: formula_10
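A short Python sketch of the ex-post calculation from a series of periodic returns; the monthly figures, the risk-free rate and the annualisation by the square root of the number of periods per year are assumptions made for the example.
```python
import numpy as np

def sharpe_ratio(returns, benchmark_returns, periods_per_year=12):
    """Ex-post Sharpe ratio from per-period returns, annualised."""
    excess = np.asarray(returns) - np.asarray(benchmark_returns)
    # Per-period ratio, then scaled by sqrt(periods_per_year): the mean excess
    # return scales with time while its standard deviation scales with sqrt(time).
    per_period = excess.mean() / excess.std(ddof=1)
    return per_period * np.sqrt(periods_per_year)

monthly_returns = [0.02, -0.01, 0.03, 0.01, 0.00, 0.015,
                   -0.02, 0.025, 0.01, 0.005, -0.005, 0.02]
risk_free_monthly = [0.004] * 12        # roughly a 5% annual risk-free rate
print(sharpe_ratio(monthly_returns, risk_free_monthly))
```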
Strengths and weaknesses.
A negative Sharpe ratio means the portfolio has underperformed its benchmark. All other things being equal, an investor typically prefers a higher positive Sharpe ratio as it has either higher returns or lower volatility. However, a negative Sharpe ratio can be made higher by either increasing returns (a good thing) or increasing volatility (a bad thing). Thus, for negative values the Sharpe ratio does not correspond well to typical investor utility functions.
The Sharpe ratio is convenient because it can be calculated purely from any observed series of returns without need for additional information surrounding the source of profitability. However, this makes it vulnerable to manipulation if opportunities exist for smoothing or discretionary pricing of illiquid assets. Statistics such as the bias ratio and first order autocorrelation are sometimes used to indicate the potential presence of these problems.
While the Treynor ratio considers only the systematic risk of a portfolio, the Sharpe ratio considers both systematic and idiosyncratic risks. Which one is more relevant will depend on the portfolio context.
The returns measured can be of any frequency (i.e. daily, weekly, monthly or annually), as long as they are normally distributed, as the returns can always be annualized. Herein lies the underlying weakness of the ratio - asset returns are not normally distributed. Abnormalities like kurtosis, fatter tails and higher peaks, or skewness on the distribution can be problematic for the ratio, as standard deviation doesn't have the same effectiveness when these problems exist.
For a Brownian walk, the Sharpe ratio formula_11 is a dimensional quantity and has units formula_12, because the excess return formula_13 and the volatility formula_14 are proportional to formula_15 and formula_12 respectively. The Kelly criterion is a dimensionless quantity, and, indeed, the Kelly fraction formula_16 is the numerical fraction of wealth suggested for the investment.
In some settings, the Kelly criterion can be used to convert the Sharpe ratio into a rate of return. The Kelly criterion gives the ideal size of the investment, which when adjusted by the period and expected rate of return per unit, gives a rate of return.
The accuracy of Sharpe ratio estimators hinges on the statistical properties of returns, and these properties can vary considerably among strategies, portfolios, and over time.
Drawback as fund selection criteria.
Bailey and López de Prado (2012) show that Sharpe ratios tend to be overstated in the case of hedge funds with short track records. These authors propose a probabilistic version of the Sharpe ratio that takes into account the asymmetry and fat-tails of the returns' distribution. With regard to the selection of portfolio managers on the basis of their Sharpe ratios, these authors have proposed a "Sharpe ratio indifference curve". This curve illustrates the fact that it is efficient to hire portfolio managers with low and even negative Sharpe ratios, as long as their correlation to the other portfolio managers is sufficiently low.
Goetzmann, Ingersoll, Spiegel, and Welch (2002) determined that the best strategy to maximize a portfolio's Sharpe ratio, when both securities and options contracts on these securities are available for investment, is a portfolio of selling one out-of-the-money call and selling one out-of-the-money put. This portfolio generates an immediate positive payoff, has a large probability of generating modestly high returns, and has a small probability of generating huge losses. Shah (2014) observed that such a portfolio is not suitable for many investors, but fund sponsors who select fund managers primarily based on the Sharpe ratio will give incentives for fund managers to adopt such a strategy.
In recent years, many financial websites have promoted the idea that a Sharpe Ratio "greater than 1 is considered acceptable; a ratio higher than 2.0 is considered very good; and a ratio above 3.0 is excellent." While it is unclear where this rubric originated online, it makes little sense since the magnitude of the Sharpe ratio is sensitive to the time period over which the underlying returns are measured. This is because the numerator of the ratio (returns) scales in proportion to time, while the denominator of the ratio (standard deviation) scales in proportion to the square root of time. Most diversified indexes of equities, bonds, mortgages or commodities have annualized Sharpe ratios below 1, which suggests that a Sharpe ratio consistently above 2.0 or 3.0 is unrealistic.
| [
{
"math_id": 0,
"text": "S_a = \\frac{E[R_a-R_b]}{\\sigma_a} = \\frac{E[R_a-R_b]}{\\sqrt{\\mathrm{var}[R_a-R_b]}},"
},
{
"math_id": 1,
"text": "R_a"
},
{
"math_id": 2,
"text": "R_b"
},
{
"math_id": 3,
"text": "E[R_a-R_b]"
},
{
"math_id": 4,
"text": "{\\sigma_a}"
},
{
"math_id": 5,
"text": "S = \\frac{E[R-R_f]}{\\sqrt{\\mathrm{var}[R]}}."
},
{
"math_id": 6,
"text": "S = \\frac{E[R-R_b]}{\\sqrt{\\mathrm{var}[R-R_b]}}."
},
{
"math_id": 7,
"text": "R_f"
},
{
"math_id": 8,
"text": "\\sqrt{\\mathrm{var}[R-R_f]}=\\sqrt{\\mathrm{var}[R]}."
},
{
"math_id": 9,
"text": "\\frac{R_a-R_f}{\\sigma_a}=\\frac{0.15}{0.10}=1.5"
},
{
"math_id": 10,
"text": "\\frac{0.12-0.05}{0.1} = 0.7"
},
{
"math_id": 11,
"text": " \\mu/\\sigma"
},
{
"math_id": 12,
"text": " 1/\\sqrt{T}"
},
{
"math_id": 13,
"text": " \\mu"
},
{
"math_id": 14,
"text": "\\sigma"
},
{
"math_id": 15,
"text": " 1/T"
},
{
"math_id": 16,
"text": " \\mu/\\sigma^2 "
}
] | https://en.wikipedia.org/wiki?curid=934837 |
934991 | Body proportions | Proportions of the human body in art
Body proportions is the study of artistic anatomy, which attempts to explore the relation of the elements of the human body to each other and to the whole. These ratios are used in depictions of the human figure and may become part of an artistic canon of body proportion within a culture. Academic art of the nineteenth century demanded close adherence to these reference metrics and some artists in the early twentieth century rejected those constraints and consciously mutated them.
Basics of human proportions.
It is usually important in figure drawing to draw the human figure in proportion. Though there are subtle differences between individuals, human proportions fit within a fairly standard range – though artists have historically tried to create idealised standards that have varied considerably over time, according to era and region. In modern figure drawing, the basic unit of measurement is the 'head', which is the distance from the top of the head to the chin. This unit of measurement is credited to the Greek sculptor Polykleitos (fifth century BCE) and has long been used by artists to establish the proportions of the human figure. Ancient Egyptian art used a canon of proportion based on the "fist", measured across the knuckles, with 18 fists from the ground to the hairline on the forehead. This canon was already established by the Narmer Palette from about the 31st century BC, and remained in use until at least the conquest by Alexander the Great some 3,000 years later.
One version of the proportions used in modern figure drawing is:
Measurements.
There are a number of important distances between reference points that an artist may measure and will observe: these are the distance from the floor to the patella; from the patella to the front iliac crest; the distance across the stomach between the iliac crests; the distances (which may differ according to pose) from the iliac crests to the suprasternal notch between the clavicles; and the distance from the notch to the bases of the ears (which again may differ according to the pose).
Some teachers deprecate mechanistic measurements and strongly advise the artist to learn to estimate proportion by eye alone.
It is in drawing from the life that a canon is likely to be a hindrance to the artist; but it is not the method of Indian art to work from the model. Almost the whole philosophy of Indian art is summed up in the verse of Śukrācārya's "Śukranĩtisāra" which enjoins meditations upon the imager: "In order that the form of an image may be brought fully and clearly before the mind, the imager should medi[t]ate; and his success will be proportionate to his meditation. No other way—not indeed seeing the object itself—will achieve his purpose." The canon then, is of use as a rule of thumb, relieving him of some part of the technical difficulties, leaving him free to concentrate his thought more singly on the message or burden of his work. It is only in this way that it must have been used in periods of great achievement, or by great artists.
Ratios.
[Proportion] should not be confused with a ratio, involving two magnitudes. Modern usage tends to substitute "proportion" for a comparison involving two magnitudes (e.g., length and width), and hence mistakes a mere grouping of simple ratios for a complete proportion system, often with a linear basis at odds with the areal approach of Greek geometry
Many text books of artistic anatomy advise that the head height be used as a yardstick for other lengths in the body: their ratios to it provide a consistent and credible structure. Although the average person is 7½ heads tall, the custom in Classical Greece (since Lysippos) and Renaissance art was to set the figure as eight heads tall: "the eight-heads-length figure seems by far the best; it gives dignity to the figure and also seems to be the most convenient." The half-way mark is a line between the greater trochanters, just above the pubic arch.
Body proportions in history.
The earliest known representations of female figures date from 23,000 to 25,000 years ago. Models of the human head (such as the Venus of Brassempouy) are rare in Paleolithic art: most are like the Venus of Willendorf – bodies with vestigial head and limbs, noted for their very high waist:hip ratio of 1:1 or more. It may be that the artists' "depictions of corpulent, middle-aged females were not 'Venuses' in any conventional sense. They may, instead, have symbolized the hope for survival and longevity, within well-nourished and reproductively successful communities."
The ancient Greek sculptor Polykleitos (c.450–420 BCE), known for his ideally proportioned bronze "Doryphoros", wrote an influential "Canon" (now lost) describing the proportions to be followed in sculpture. The "Canon" applies the basic mathematical concepts of Greek geometry, such as the ratio, proportion, and "symmetria" (Greek for "harmonious proportions") creating a system capable of describing the human form through a series of continuous geometric progressions. Polykleitos may have used the distal phalanx of the little finger as the basic module for determining the proportions of the human body, scaling this length up repeatedly by √2 to obtain the ideal size of the other phalanges, the hand, forearm, and upper arm in turn.
Leonardo da Vinci believed that the ideal human proportions were determined by the harmonious proportions that he believed governed the universe, such that the ideal man would fit cleanly into a circle as depicted in his famed drawing of "Vitruvian Man" (c. 1492), as described in a book by Vitruvius. Leonardo's commentary is about relative body proportions – with comparisons of hand, foot, and other feature's lengths to other body parts – more than to actual measurements.
Golden ratio.
It has been suggested that the ideal human figure has its navel at the golden ratio (formula_0, about 1.618), dividing the body in the ratio of 0.618 to 0.382 (soles of feet to navel:navel to top of head) (1⁄formula_0 is formula_0-1, about 0.618) and Leonardo da Vinci's Vitruvian Man is cited as evidence. In reality, the navel of the Vitruvian Man divides the figure at 0.604 and nothing in the accompanying text mentions the golden ratio.
In his conjectural reconstruction of the Canon of Polykleitos, art historian Richard Tobin determined √2 (about 1.4142) to be the important ratio between elements that the classical Greek sculptor had used.
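A quick numerical comparison of these quantities (the golden ratio and its reciprocal, the 0.604 division of the Vitruvian Man, and Tobin's √2 progression) can be made in Python; the five-step progression length below is an arbitrary illustrative choice.
```python
import math

phi = (1 + math.sqrt(5)) / 2
print(phi, 1 / phi, phi - 1)   # 1.618..., 0.618..., 0.618... (1/phi equals phi - 1)
print(0.604)                    # where the Vitruvian Man's navel actually divides the figure

# Tobin's reconstruction of the Canon of Polykleitos scales each element by sqrt(2):
unit = 1.0                      # e.g. the distal phalanx of the little finger
for step in range(5):
    print(step, unit * math.sqrt(2) ** step)
```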
| [
{
"math_id": 0,
"text": "\\phi"
}
] | https://en.wikipedia.org/wiki?curid=934991 |
9350418 | Molar conductivity | Conductivity per molar concentration of electrolyte
The molar conductivity of an electrolyte solution is defined as its conductivity divided by its molar concentration.
formula_0
where:
"κ" is the measured conductivity (formerly known as specific conductance),
"c" is the molar concentration of the electrolyte.
The SI unit of molar conductivity is siemens metres squared per mole (S m2 mol−1). However, values are often quoted in S cm2 mol−1. In these last units, the value of Λm may be understood as the conductance of a volume of solution between parallel plate electrodes one centimeter apart and of sufficient area so that the solution contains exactly one mole of electrolyte.
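A short worked example in Python: a 0.01 mol/L KCl solution with a typical measured conductivity of about 0.1413 S/m (a standard calibration-grade figure, used here only for illustration).
```python
# Molar conductivity from measured conductivity and molar concentration.
kappa = 0.1413          # S/m, conductivity of 0.01 mol/L KCl at 25 degC (illustrative)
c = 0.01 * 1000         # mol/m^3 (0.01 mol/L converted to SI)
Lambda_m = kappa / c    # S m^2 mol^-1

print(Lambda_m)         # ~0.0141 S m^2 mol^-1
print(Lambda_m * 1e4)   # ~141 S cm^2 mol^-1 (1 S m^2 mol^-1 = 10^4 S cm^2 mol^-1)
```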
Variation of molar conductivity with dilution.
There are two types of electrolytes: strong and weak. Strong electrolytes usually undergo complete ionization, and therefore they have higher conductivity than weak electrolytes, which undergo only partial ionization. For strong electrolytes, such as salts, strong acids and strong bases, the molar conductivity depends only "weakly" on concentration. On dilution there is a regular increase in the molar conductivity of a strong electrolyte, due to the decrease in solute–solute interaction. Based on experimental data, Friedrich Kohlrausch (around the year 1900) proposed the non-linear law for strong electrolytes:
formula_1
where
Λ is the molar conductivity at infinite dilution (or "limiting molar conductivity"), which can be determined by extrapolation of Λm as a function of √"c",
"K" is the Kohlrausch coefficient, which depends mainly on the stoichiometry of the specific salt in solution,
"α" is the dissociation degree even for strong concentrated electrolytes,
"fλ" is the lambda factor for concentrated solutions.
This law is valid for low electrolyte concentrations only; it fits into the Debye–Hückel–Onsager equation.
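A sketch of this extrapolation in Python, fitting Λm against √"c" for rounded literature-style values of aqueous KCl; the data points are illustrative and a simple least-squares line stands in for a full Debye–Hückel–Onsager treatment.
```python
import numpy as np

# (concentration, molar conductivity) pairs for dilute aqueous KCl at 25 degC,
# rounded and used here only to illustrate the sqrt(c) extrapolation.
c = np.array([0.0005, 0.001, 0.005, 0.01, 0.02])          # mol/L
Lambda_m = np.array([147.8, 146.9, 143.5, 141.3, 138.3])  # S cm^2 mol^-1

slope, intercept = np.polyfit(np.sqrt(c), Lambda_m, 1)
Lambda_0 = intercept    # limiting molar conductivity, ~150 S cm^2 mol^-1
K = -slope              # Kohlrausch coefficient, ~80 in these units
print(Lambda_0, K)
```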
For weak electrolytes (i.e. incompletely dissociated electrolytes), however, the molar conductivity "strongly" depends on concentration: The more dilute a solution, the greater its "molar" conductivity, due to increased ionic dissociation. For example, acetic acid has a higher molar conductivity in dilute aqueous acetic acid than in concentrated acetic acid.
Kohlrausch's law of independent migration of ions.
Friedrich Kohlrausch in 1875–1879 established that to a high accuracy in dilute solutions, molar conductivity can be decomposed into contributions of the individual ions. This is known as Kohlrausch's law of independent ionic migration.
For any electrolyte A"x"B"y", the limiting molar conductivity is expressed as "x" times the limiting molar conductivity of A"y"+ and "y" times the limiting molar conductivity of B"x"−.
formula_2
where:
"λi" is the limiting molar ionic conductivity of ion "i",
"νi" is the number of ions "i" in the formula unit of the electrolyte (e.g. 2 and 1 for Na+ and SO42− in Na2SO4).
Kohlrausch's evidence for this law was that the limiting molar conductivities of two electrolytes with two different cations and a common anion differ by an amount which is independent of the nature of the anion. For example, Λ0(KX) − Λ0(NaX) = 23.4 S cm2 mol−1 for X = Cl−, I− and SO42−. This difference is ascribed to a difference in ionic conductivities between K+ and Na+. Similar regularities are found for two electrolytes with a common anion and two cations.
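A minimal sketch of this additivity (the ionic values below are approximate tabulated numbers at 25 °C, given only as an illustration):
<syntaxhighlight lang="python">
# Λ° = Σ ν_i λ_i for an electrolyte, from limiting ionic conductivities.
lam = {"Na+": 50.1, "K+": 73.5, "Cl-": 76.3, "SO4^2-": 160.0}  # S cm^2/mol, approx.

def limiting_molar_conductivity(ions):
    """ions: dict mapping ion name -> stoichiometric number ν_i."""
    return sum(nu * lam[ion] for ion, nu in ions.items())

print(limiting_molar_conductivity({"K+": 1, "Cl-": 1}))      # ≈ 149.8 (KCl)
print(limiting_molar_conductivity({"Na+": 2, "SO4^2-": 1}))  # ≈ 260.2 (Na2SO4)
print(lam["K+"] - lam["Na+"])   # ≈ 23.4, the K/Na difference quoted above
</syntaxhighlight>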
Molar ionic conductivity.
The molar ionic conductivity of each ionic species is proportional to its electrical mobility ("μ"), or drift velocity per unit electric field, according to the equation
formula_3
where "z" is the ionic charge, and "F" is the Faraday constant.
The limiting molar conductivity of a weak electrolyte cannot be determined reliably by extrapolation. Instead it can be expressed as a sum of ionic contributions, which can be evaluated from the limiting molar conductivities of strong electrolytes containing the same ions. For aqueous acetic acid as an example,
formula_4
Values for each ion may be determined using measured ion transport numbers. For the cation:
formula_5
and for the anion:
formula_6
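A short sketch of both calculations above (the strong-electrolyte combination for acetic acid, and the split of a limiting conductivity using a transport number), with approximate tabulated values (S cm2 mol−1, 25 °C) used purely for illustration:
<syntaxhighlight lang="python">
# (a) Λ° of the weak electrolyte acetic acid from strong-electrolyte data.
L0_CH3COONa, L0_HCl, L0_NaCl = 91.0, 426.2, 126.4   # approximate values
L0_acetic = L0_CH3COONa + L0_HCl - L0_NaCl
print(L0_acetic)                        # ≈ 390.8

# (b) splitting Λ°(NaCl) into ionic parts with a measured transport number.
t_plus = 0.396                          # cation transport number in NaCl (approx.)
lam_Na = t_plus * L0_NaCl / 1           # ν+ = 1
lam_Cl = (1 - t_plus) * L0_NaCl / 1     # ν- = 1
print(lam_Na, lam_Cl)                   # ≈ 50.1 and ≈ 76.3
</syntaxhighlight>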
Most monovalent ions in water have limiting molar ionic conductivities in the range of 40–80 S cm2 mol−1; for example, about 50 S cm2 mol−1 for Na+ and about 76 S cm2 mol−1 for Cl−.
The order of the values for alkali metals is surprising, since it shows that the smallest cation Li+ moves more slowly in a given electric field than Na+, which in turn moves more slowly than K+. This occurs because of the effect of solvation of water molecules: the smaller Li+ binds most strongly to about four water molecules so that the moving cation species is effectively Li(H2O)4+. The solvation is weaker for Na+ and still weaker for K+. The increase in halogen ion mobility from F− to Cl− to Br− is also due to decreasing solvation.
Exceptionally high values are found for H+ (349.8 S cm2 mol−1) and OH− (198.6 S cm2 mol−1), which are explained by the Grotthuss proton-hopping mechanism for the movement of these ions. The H+ also has a larger conductivity than other ions in alcohols, which have a hydroxyl group, but behaves more normally in other solvents, including liquid ammonia and nitrobenzene.
For multivalent ions, it is usual to consider the conductivity divided by the equivalent ion concentration in terms of equivalents per litre, where 1 equivalent is the quantity of ions that have the same amount of electric charge as 1 mol of a monovalent ion: mol Ca2+, mol SO42−, mol Al3+, mol Fe(CN)64−, etc. This quotient can be called the "equivalent conductivity", although IUPAC has recommended that use of this term be discontinued and the term molar conductivity be used for the values of conductivity divided by equivalent concentration. If this convention is used, then the values are in the same range as monovalent ions, e.g. 59.5 S cm2 mol−1 for Ca2+ and 80.0 S cm2 mol−1 for SO42−.
From the ionic molar conductivities of cations and anions, effective ionic radii can be calculated using the concept of Stokes radius. The values obtained for an ionic radius in solution calculated this way can be quite different from the ionic radius for the same ion in crystals, due to the effect of hydration in solution.
Applications.
Ostwald's law of dilution, which gives the dissociation constant of a weak electrolyte as a function of concentration, can be written in terms of molar conductivity. Thus, the p"K"a values of acids can be calculated by measuring the molar conductivity and extrapolating to zero concentration. Namely, p"K"a = p() at the zero-concentration limit, where "K" is the dissociation constant from Ostwald's law.
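A minimal sketch of this procedure for a weak acid, with approximate, illustration-only numbers resembling dilute aqueous acetic acid:
<syntaxhighlight lang="python">
# α ≈ Λm / Λ°, then Ka = α² c / (1 − α) from Ostwald's dilution law.
import math

L0 = 390.7     # limiting molar conductivity, S cm^2/mol (approximate)
Lm = 16.3      # measured molar conductivity at concentration c (approximate)
c  = 0.010     # mol/L

alpha = Lm / L0
Ka = alpha ** 2 * c / (1 - alpha)
print(alpha)                # ≈ 0.042
print(Ka)                   # ≈ 1.8e-5
print(-math.log10(Ka))      # pKa ≈ 4.7
</syntaxhighlight>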
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Lambda_\\text{m} = \\frac{\\kappa}{c},"
},
{
"math_id": 1,
"text": "\\Lambda_\\text{m} =\\Lambda_\\text{m}^\\circ - K\\sqrt{c} = \\alpha f_\\lambda \\Lambda_\\text{m}^\\circ,"
},
{
"math_id": 2,
"text": "\\Lambda_\\text{m}^\\circ = \\sum_i \\nu_i \\lambda_i,"
},
{
"math_id": 3,
"text": "\\lambda = z \\mu F"
},
{
"math_id": 4,
"text": "\\begin{aligned}\n \\Lambda_\\text{m}^\\circ(\\ce{CH3COOH}) &= \\lambda(\\ce{CH3COO-}) + \\lambda(\\ce{H+}) \\\\\n &= \\Lambda_\\text{m}^\\circ(\\ce{CH3COONa}) + \\Lambda_\\text{m}^\\circ(\\ce{HCl}) - \\Lambda_\\text{m}^\\circ(\\ce{NaCl}).\n\\end{aligned}"
},
{
"math_id": 5,
"text": "\\lambda^+ = t_+ \\cdot \\frac{\\Lambda_0}{\\nu^+}"
},
{
"math_id": 6,
"text": "\\lambda^- = t_- \\cdot \\frac{\\Lambda_0}{\\nu^-}."
}
] | https://en.wikipedia.org/wiki?curid=9350418 |
9350540 | Ascher H. Shapiro | American author and professor of mechanical engineering and fluid mechanics
Ascher Herman Shapiro (May 20, 1916 – November 26, 2004) was a professor of Mechanical Engineering at MIT. He grew up in New York City.
Early life and education.
Shapiro was born and raised in Brooklyn, New York, to Jewish Lithuanian immigrant parents. He earned his S.B. in 1938 and an Sc.D. in 1946 in the field of mechanical engineering at the Massachusetts Institute of Technology (MIT).
Career.
After starting at MIT as a laboratory assistant in mechanical engineering, Shapiro was eventually appointed assistant professor at MIT in 1943 where he taught fluid mechanics. A prolific author of texts in his field, his two-volume treatise, "The Dynamics and Thermodynamics of Compressible Fluid Flow", published in 1953 and 1954, is considered a classic. His 1961 book "Shape and Flow: The Fluid Dynamics of Drag" explained boundary layer phenomena and drag in simple, non-mathematical terms. He also founded the National Council for Fluid Mechanics Films (NCFMF), in cooperation with the Educational Development Center. From there, Shapiro was appointed Chair of the Institute's Faculty in 1964-1965 and head of the Department of Mechanical Engineering from 1965 to 1974.
In 1962 he demonstrated the Coriolis effect in a bathtub-sized water tank at MIT (latitude 42° N). The experiment required extreme precision, since the acceleration due to the Coriolis effect is only formula_0 that of gravity. The tank was filled, kept static for 24 hours, then drained. The vortex was measured by a cross made of two slivers of wood pinned above the draining hole. The tank took about 20 minutes to drain, and the cross began to turn only after roughly 15 minutes; by the end it was rotating at about one revolution every 3 to 4 seconds.
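A back-of-envelope check of that figure (the 3 cm/s drift speed below is my own assumption for illustration, not a value from Shapiro's experiment):
<syntaxhighlight lang="python">
# Coriolis acceleration a = 2 Ω v sin(latitude), compared with gravity.
import math

omega = 7.292e-5           # Earth's rotation rate, rad/s
lat = math.radians(42.0)   # MIT latitude
v = 0.03                   # assumed slow drift speed of the water, m/s
g = 9.81                   # m/s^2

a_coriolis = 2 * omega * v * math.sin(lat)
print(a_coriolis / g)      # ≈ 3e-7, consistent with the figure quoted above
</syntaxhighlight>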
Shapiro was elected to the American Academy of Arts and Sciences in 1952, the National Academy of Sciences in 1967, and the National Academy of Engineering in 1974. He was awarded the Benjamin Garver Lamme Award by the American Society for Engineering Education in 1977. He was awarded the Fluids Engineering Award in 1977 and the Drucker Medal in 1999 by the American Society of Mechanical Engineers. He was awarded an honorary Doctor of Science in 1978 by the University of Salford and in 1985 by the Technion.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "3\\times 10^{-7}"
}
] | https://en.wikipedia.org/wiki?curid=9350540 |
9350828 | Philo line | Type of line segment
In geometry, the Philo line is a line segment defined from an angle and a point inside the angle as the shortest line segment through the point that has its endpoints on the two sides of the angle. Also known as the Philon line, it is named after Philo of Byzantium, a Greek writer on mechanical devices, who lived probably during the 1st or 2nd century BC. Philo used the line to double the cube; because doubling the cube cannot be done by a straightedge and compass construction, neither can constructing the Philo line.
Geometric characterization.
The defining point of a Philo line, and the base of a perpendicular from the apex of the angle to the line, are equidistant from the endpoints of the line.
That is, suppose that segment formula_0 is the Philo line for point formula_1 and angle formula_2, and let formula_3 be the base of a perpendicular line formula_4 to formula_0. Then formula_5 and formula_6.
Conversely, if formula_1 and formula_3 are any two points equidistant from the ends of a line segment formula_0, and if formula_7 is any point on the line through formula_3 that is perpendicular to formula_0, then formula_0 is the Philo line for angle formula_2 and point formula_1.
Algebraic construction.
A suitable algebraic description of the configuration, given the directions from formula_7 to formula_8 and from formula_7 to formula_9 and the location of formula_1 in that infinite triangle, is obtained as follows:
The point formula_7 is placed at the origin of the coordinate system, the direction from formula_7 to formula_8 defines the horizontal formula_10-coordinate, and the direction from formula_7 to formula_9 defines the line with the equation formula_11 in the rectilinear coordinate system. formula_12 is the tangent of the angle in the triangle formula_2. Then formula_1 has the Cartesian coordinates formula_13 and the task is to find formula_14 on the horizontal axis and formula_15 on the other side of the triangle.
The equation of a bundle of lines with inclination formula_16 that
run through the point formula_17 is
formula_18
These lines intersect the horizontal axis at
formula_19
which has the solution
formula_20
These lines intersect the opposite side formula_21 at
formula_22
which has the solution
formula_23
The squared Euclidean distance between the intersections of the horizontal line
and the diagonal is
formula_24
The Philo Line is defined by the minimum of that distance at
negative formula_16.
An arithmetic expression for the location of the minimum
is obtained by setting the derivative formula_25,
so
formula_26
So calculating the root of the polynomial in the numerator,
formula_27
determines the slope of the particular line in the line bundle which has the shortest length.
[The global minimum at inclination formula_28 from the root of the other factor is not of interest; it does not define a triangle but means
that the horizontal line, the diagonal and the line of the bundle all intersect at formula_29.]
formula_30 is the tangent of the angle formula_31.
Inverting the equation above as formula_32 and plugging this into the previous equation
one finds that formula_33 is a root of the cubic polynomial
formula_34
So solving that cubic equation finds the intersection of the Philo line on the horizontal axis.
Plugging in the same expression into the expression for the squared distance gives
formula_35
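A minimal numerical sketch of this construction (not part of the original text; m = 1 and P = (2, 1) are arbitrary example values):
<syntaxhighlight lang="python">
# Find the slope of the Philo line both by brute-force minimisation of d²(α)
# and as the negative real root of (m·Px − Py)α³ + Px·α² − 2·Py·α + Py·m.
import numpy as np

m, Px, Py = 1.0, 2.0, 1.0

def d2(a):
    """Squared chord length through P with slope a (the formula above)."""
    return m**2 * (a * Px - Py)**2 * (1 + a**2) / (a**2 * (a - m)**2)

alphas = np.linspace(-10.0, -0.01, 200001)            # relevant negative branch
a_best = alphas[np.argmin(d2(alphas))]

roots = np.roots([m * Px - Py, Px, -2 * Py, Py * m])  # cubic from the derivative
a_cubic = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real < 0][0]

print(a_best, a_cubic)        # both ≈ -2.83 for this example
print(np.sqrt(d2(a_cubic)))   # length of the Philo line, ≈ 1.84
</syntaxhighlight>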
Location of formula_3.
Since the line formula_4 is orthogonal to formula_36, its slope is formula_37, so the points on that line are formula_38. The coordinates of the point formula_39 are calculated by intersecting this line with the Philo line, formula_40. formula_41 yields
formula_42
formula_43
With the coordinates formula_44 shown above, the squared distance from formula_9 to formula_3 is
formula_45.
The squared distance from formula_8 to formula_1 is
formula_46.
The difference of these two expressions is
formula_47.
Given the cubic equation for formula_16 above, which is one of the two cubic polynomials in the numerator, this is zero.
This is the algebraic proof that the minimization of formula_0 leads to formula_48.
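A quick numerical confirmation of this equidistance for the same example values (m = 1, P = (2, 1)):
<syntaxhighlight lang="python">
# Check |DQ| = |EP| using the coordinate formulas derived above.
import numpy as np

m, Px, Py = 1.0, 2.0, 1.0
roots = np.roots([m * Px - Py, Px, -2 * Py, Py * m])
a = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real < 0][0]

P = np.array([Px, Py])
E = np.array([Px - Py / a, 0.0])
D = np.array([(a * Px - Py) / (a - m), m * (a * Px - Py) / (a - m)])
Q = np.array([a * (a * Px - Py) / (1 + a**2), (Py - a * Px) / (1 + a**2)])

print(np.linalg.norm(D - Q))   # ≈ 1.06
print(np.linalg.norm(E - P))   # ≈ 1.06, equal up to rounding
</syntaxhighlight>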
Special case: right angle.
The equation of a bundle of lines with inclination formula_16 that
run through the point formula_17, formula_49, has an intersection with the formula_10-axis given above.
If formula_2 form a right angle, the limit formula_50 of the previous section results
in the following special case:
These lines intersect the formula_51-axis at
formula_52
which has the solution
formula_53
The squared Euclidean distance between the intersections of the horizontal line and vertical lines
is
formula_54
The Philo Line is defined by the minimum of that curve (at
negative formula_16).
An arithmetic expression for the location of the minimum
is where the derivative formula_25,
so
formula_55
equivalent to
formula_56
Therefore
formula_57
Alternatively, inverting the previous equations as formula_32 and plugging this into another equation above
one finds
formula_58
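A numerical check of these closed forms (P = (2, 1) is an arbitrary interior point, used only for illustration):
<syntaxhighlight lang="python">
# Right-angle case: α = -(Py/Px)^(1/3), d = Px(1 + (Py/Px)^(2/3))^(3/2),
# E_x = Px + Py·(Px/Py)^(1/3), compared against brute-force minimisation.
import numpy as np

Px, Py = 2.0, 1.0
t = (Py / Px) ** (1.0 / 3.0)

alpha_closed = -t
d_closed = Px * (1 + t ** 2) ** 1.5
Ex_closed = Px + Py * (Px / Py) ** (1.0 / 3.0)

a = np.linspace(-5.0, -0.01, 200001)
d2 = (a * Px - Py) ** 2 * (1 + a ** 2) / a ** 2
i = np.argmin(d2)

print(alpha_closed, a[i])          # both ≈ -0.794
print(d_closed, np.sqrt(d2[i]))    # both ≈ 4.16
print(Ex_closed, Px - Py / a[i])   # both ≈ 3.26
</syntaxhighlight>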
Doubling the cube.
The Philo line can be used to double the cube, that is, to construct a geometric representation of the cube root of two, and this was Philo's purpose in defining this line. Specifically, let formula_59 be a rectangle whose aspect ratio formula_60 is formula_61, as in the figure. Let formula_62 be the Philo line of point formula_1 with respect to right angle formula_63. Define point formula_64 to be the point of intersection of line formula_62 and of the circle through points formula_59. Because triangle formula_65 is inscribed in the circle with formula_66 as diameter, it is a right triangle, and formula_64 is the base of a perpendicular from the apex of the angle to the Philo line.
Let formula_67 be the point where line formula_68 crosses a perpendicular line through formula_64. Then the equalities of segments formula_69, formula_70, and formula_71 follow from the characteristic property of the Philo line. The similarity of the right triangles formula_72, formula_73, and formula_74 follow by perpendicular bisection of right triangles. Combining these equalities and similarities gives the equality of proportions formula_75 or more concisely formula_76. Since the first and last terms of these three equal proportions are in the ratio formula_61, the proportions themselves must all be formula_77, the proportion that is required to double the cube.
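A small numerical sketch of this construction (the coordinate placement below is my own choice of embedding; only the 1:2 ratio comes from the text, and "perpendicular" is read as perpendicular to QR):
<syntaxhighlight lang="python">
# Put the right angle QRS at the origin: R = (0,0), S = (1,0), Q = (0,2),
# so PQ:QR = 1:2 and P = (1,2). Then check the three equal proportions.
Px, Py = 1.0, 2.0
alpha = -(Py / Px) ** (1.0 / 3.0)          # Philo-line slope for a right angle

# V = foot of the perpendicular from the apex R onto the Philo line.
Vx = alpha * (alpha * Px - Py) / (1 + alpha ** 2)
Vy = (Py - alpha * Px) / (1 + alpha ** 2)

# W = intersection of line QR (the y-axis) with the horizontal through V,
# so RW = Vy and WV = Vx.
RS, RQ = 1.0, 2.0
RW, WV = Vy, Vx

print(RW / RS, WV / RW, RQ / WV)           # all three ≈ 2**(1/3) ≈ 1.2599
</syntaxhighlight>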
Since doubling the cube is impossible with a straightedge and compass construction, it is similarly impossible to construct the Philo line with these tools.
Minimizing the area.
Given the point formula_1 and the angle formula_2, a variant of the problem may minimize the area of the triangle formula_31. With the expressions for formula_78 and formula_44 given above, the area is half the product of height and base length,
formula_79.
Finding the slope formula_16 that minimizes the area requires setting formula_80,
formula_81.
Again discarding the root formula_82 which does not define a triangle, the slope is in that
case
formula_83
and the minimum area
formula_84.
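A quick numerical check of this slope and area (m = 1 and P = (3, 1) are arbitrary example values):
<syntaxhighlight lang="python">
# Compare brute-force minimisation of the triangle area with the closed forms.
import numpy as np

m, Px, Py = 1.0, 3.0, 1.0

def area(a):
    return m * (a * Px - Py) ** 2 / (2 * a * (a - m))

a_grid = np.linspace(-10.0, -0.01, 200001)
a_best = a_grid[np.argmin(area(a_grid))]

a_closed = -m * Py / (m * Px - 2 * Py)     # closed-form slope
A_closed = 2 * Py * (m * Px - Py) / m      # closed-form minimum area

print(a_best, a_closed)                    # both ≈ -1
print(area(a_best), A_closed)              # both ≈ 4
</syntaxhighlight>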
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "DE"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "DOE"
},
{
"math_id": 3,
"text": "Q"
},
{
"math_id": 4,
"text": "OQ"
},
{
"math_id": 5,
"text": "DP=EQ"
},
{
"math_id": 6,
"text": "DQ=EP"
},
{
"math_id": 7,
"text": "O"
},
{
"math_id": 8,
"text": "E"
},
{
"math_id": 9,
"text": "D"
},
{
"math_id": 10,
"text": "x"
},
{
"math_id": 11,
"text": "y{{=}}mx"
},
{
"math_id": 12,
"text": "m"
},
{
"math_id": 13,
"text": "(P_x,P_y)"
},
{
"math_id": 14,
"text": "E=(E_x,0)"
},
{
"math_id": 15,
"text": "D=(D_x,D_y)=(D_x,mD_x)"
},
{
"math_id": 16,
"text": "\\alpha"
},
{
"math_id": 17,
"text": "(x,y)=(P_x,P_y)"
},
{
"math_id": 18,
"text": "\ny=\\alpha(x-P_x)+P_y.\n"
},
{
"math_id": 19,
"text": "\n\\alpha(x-P_x)+P_y=0\n"
},
{
"math_id": 20,
"text": "\n(E_x,E_y)=\\left(P_x-\\frac{P_y}{\\alpha},0\\right).\n"
},
{
"math_id": 21,
"text": "y=mx"
},
{
"math_id": 22,
"text": "\n\\alpha(x-P_x)+P_y=mx\n"
},
{
"math_id": 23,
"text": "\n(D_x,D_y)=\\left(\\frac{\\alpha P_x-P_y}{\\alpha-m},m\\frac{\\alpha P_x-P_y}{\\alpha-m}\\right).\n"
},
{
"math_id": 24,
"text": "\nED^2 = d^2=(E_x-D_x)^2+(E_y-D_y)^2 = \\frac{m^2(\\alpha P_x-P_y)^2(1+\\alpha^2)}{\\alpha^2(\\alpha-m)^2}.\n"
},
{
"math_id": 25,
"text": "\\partial d^2/\\partial \\alpha=0"
},
{
"math_id": 26,
"text": "\n-2m^2\\frac{(P_x\\alpha -P_y)[(mP_x-P_y)\\alpha^3+P_x\\alpha^2-2P_y\\alpha+P_ym]}{\\alpha^3 (\\alpha-m)^3}=0 .\n"
},
{
"math_id": 27,
"text": "\n(mP_x-P_y)\\alpha^3+P_x\\alpha^2-2P_y\\alpha+P_ym=0\n"
},
{
"math_id": 28,
"text": "\\alpha=P_y/P_x"
},
{
"math_id": 29,
"text": "(0,0)"
},
{
"math_id": 30,
"text": "-\\alpha"
},
{
"math_id": 31,
"text": "OED"
},
{
"math_id": 32,
"text": "\\alpha_1=P_y/(P_x-E_x)"
},
{
"math_id": 33,
"text": "E_x"
},
{
"math_id": 34,
"text": "\nmx^3+(2P_y-3mP_x)x^2+3P_x(mP_x-P_y)x-(mP_x-P_y)(P_x^2+P_y^2) .\n"
},
{
"math_id": 35,
"text": "\nd^2=\n\\frac{P_y^2+x^2-2xP_x+P_x^2}{(P_y+mx-mP_x)^2}\nx^2m^2\n.\n"
},
{
"math_id": 36,
"text": "ED"
},
{
"math_id": 37,
"text": "-1/\\alpha"
},
{
"math_id": 38,
"text": "y=-x/\\alpha"
},
{
"math_id": 39,
"text": "Q=(Q_x,Q_y)"
},
{
"math_id": 40,
"text": "y=\\alpha(x-P_x)+P_y"
},
{
"math_id": 41,
"text": "\\alpha(x-P_x)+P_y=-x/\\alpha"
},
{
"math_id": 42,
"text": "Q_x=\\frac{(\\alpha P_x-P_y)\\alpha}{1+\\alpha^2}"
},
{
"math_id": 43,
"text": "Q_y=-Q_x/\\alpha = \\frac{P_y-\\alpha P_x}{1+\\alpha^2}"
},
{
"math_id": 44,
"text": "(D_x,D_y)"
},
{
"math_id": 45,
"text": "DQ^2 = (D_x-Q_x)^2+(D_y-Q_y)^2 = \\frac{(\\alpha P_x-P_y)^2(1+\\alpha m)^2}{(1+\\alpha^2)(\\alpha-m)^2}"
},
{
"math_id": 46,
"text": "EP^2 \\equiv (E_x-P_x)^2+(E_y-P_y)^2 = \\frac{P_y ^2(1+\\alpha^2)}{\\alpha^2}"
},
{
"math_id": 47,
"text": "DQ^2-EP^2 = \\frac{[(P_xm+P_y)\\alpha^3+(P_x-2P_ym)\\alpha^2-P_ym]\n[(P_xm-P_y)\\alpha^3+P_x\\alpha^2-2P_y\\alpha+P_ym]}{\\alpha^2(1+\\alpha^2)(a-m)^2}"
},
{
"math_id": 48,
"text": "DQ=PE"
},
{
"math_id": 49,
"text": "P_x,P_y>0"
},
{
"math_id": 50,
"text": "m\\to\\infty"
},
{
"math_id": 51,
"text": "y"
},
{
"math_id": 52,
"text": "\n\\alpha(-P_x)+P_y\n"
},
{
"math_id": 53,
"text": "\n(D_x,D_y)=(0,P_y-\\alpha P_x).\n"
},
{
"math_id": 54,
"text": " d^2=(E_x-D_x)^2+(E_y-D_y)^2 = \\frac{(\\alpha P_x-P_y)^2(1+\\alpha^2)}{\\alpha^2}.\n"
},
{
"math_id": 55,
"text": "\n2\\frac{(P_x\\alpha -P_y)(P_x\\alpha^3+P_y)}{\\alpha^3}=0\n"
},
{
"math_id": 56,
"text": "\n\\alpha = -\\sqrt[3]{P_y/P_x}\n"
},
{
"math_id": 57,
"text": "\nd=\\frac{P_y-\\alpha P_x}{|\\alpha|}\\sqrt{1+\\alpha^2}\n=P_x[1+(P_y/P_x)^{2/3}]^{3/2}.\n"
},
{
"math_id": 58,
"text": "\nE_x=P_x+P_y\\sqrt[3]{P_y/P_x}.\n"
},
{
"math_id": 59,
"text": "PQRS"
},
{
"math_id": 60,
"text": "PQ:QR"
},
{
"math_id": 61,
"text": "1:2"
},
{
"math_id": 62,
"text": "TU"
},
{
"math_id": 63,
"text": "QRS"
},
{
"math_id": 64,
"text": "V"
},
{
"math_id": 65,
"text": "RVP"
},
{
"math_id": 66,
"text": "RP"
},
{
"math_id": 67,
"text": "W"
},
{
"math_id": 68,
"text": "QR"
},
{
"math_id": 69,
"text": "RS=PQ"
},
{
"math_id": 70,
"text": "RW=QU"
},
{
"math_id": 71,
"text": "WU=RQ"
},
{
"math_id": 72,
"text": "PQU"
},
{
"math_id": 73,
"text": "RWV"
},
{
"math_id": 74,
"text": "VWU"
},
{
"math_id": 75,
"text": "RS:RW = PQ:QU = RW:WV = WV:WU = WV:RQ"
},
{
"math_id": 76,
"text": "RS:RW = RW:WV = WV:RQ"
},
{
"math_id": 77,
"text": "1:\\sqrt[3]{2}"
},
{
"math_id": 78,
"text": "(E_x,E_y)"
},
{
"math_id": 79,
"text": "A = D_yE_x/2 =\\frac{m(\\alpha P_x-P_y)^2}{2\\alpha(\\alpha-m)}"
},
{
"math_id": 80,
"text": "\\partial A/\\partial \\alpha=0"
},
{
"math_id": 81,
"text": " - \\frac{m(\\alpha P_x-P_y)[(mP_x-2P_y)\\alpha+P_ym]}{2\\alpha^2(\\alpha-m)^2}=0"
},
{
"math_id": 82,
"text": "\\alpha = P_y/P_x"
},
{
"math_id": 83,
"text": " \\alpha = -\\frac{mP_y}{mP_x-2P_y}"
},
{
"math_id": 84,
"text": " A = \\frac{2P_y(mP_x-P_y)}{m}"
}
] | https://en.wikipedia.org/wiki?curid=9350828 |
9351 | Economy of Egypt | The economy of Egypt is a highly centralized economy, focused on import substitution under president Gamal Abdel Nasser (1954–1970). During the rule of president Abdel Fattah el-Sisi (2014–present), the economy follows Egypt's 2030 Vision. The policy is aimed at diversifying Egypt's economy. The country's economy is the second largest in Africa by nominal GDP, and 42nd in worldwide ranking as of 2024.
Since the 2000s, the pace of structural reforms (including fiscal and monetary policies, taxation, privatization and new business legislation) helped Egypt move towards a more market-oriented economy and prompted increased foreign investment. The reforms and policies have strengthened macroeconomic annual growth results. As Egypt's economy healed, other prominent issues like unemployment and poverty began to decline significantly.
The country benefits from political stability; its proximity to Europe, and increased exports. From an investor perspective, Egypt is stable and well-supported by external stakeholders.
History.
From the 1850s until the 1930s, Egypt's economy was heavily reliant on long-staple cotton, introduced in the mid-1820s during the reign of Muhammad Ali (1805–49) and made possible by the switch from basin irrigation to perennial, modern irrigation. Cotton cultivation was a key ingredient in an ambitious program that the Egyptian ruler undertook to diversify and develop the economy.
Another such ingredient was industrialization. Industrialization, however, proved for various domestic and external reasons to be less than successful, and until the 1930s, virtually no industrial build-up occurred. The failure of industrialization resulted largely from tariff restrictions that Britain imposed on Egypt through a 1838 commercial treaty, which allowed only minuscule tariffs, if any. The isolated industrial ventures initiated by members of Egypt's landed aristocracy, who otherwise channeled their investment into land acquisition and speculation, were nipped in the bud by foreign competition. The few surviving enterprises were owned by foreigners. These enterprises either enjoyed natural protection, as in the case of sugar and cotton processing, or benefited from the special skills that the foreign owners had acquired, as in the case of cigarette making by Greeks and Turks.
World War I and the National Awakening: Important shortages during World War I and the demand created by the presence in the country of large Allied forces led to the opening of a number of small manufacturing plants, and older ones found it profitable to increase their output. There was no prospect for real progress in industrial development, however, until 1930, when the last of the commercial treaties, with their "most favoured nation" clauses, expired. Nevertheless, the brief wartime prosperity had served a good purpose: it had awakened official Egypt, whose higher echelons were at that time mainly representative of the class of great landlords in whose hands the wealth of the country was largely concentrated, to the country's potentialities for industrialization. It had also brought to official Egypt some appreciation of how much the country as a whole could benefit from a transition from an economy based on the export of agricultural raw materials and the importation of manufactured goods to one in which a full effort was made to satisfy the local demand for such goods with local manufactures supplied by local raw materials.
The result was the appointment in 1916 of a Commission on Commerce and Industry to study the situation, the creation in 1920 in the Ministry of Finance of a department (now Ministry) of Commerce and Industry, and the founding in 1922 of a Federation of Industries. The report of the commission, issued in 1917, focused attention on the significance of industrialization as a step toward sound economy and a remedy for the country's already grave problem of overpopulation.
Tariff Reform and Direct Government Aid: Nevertheless, the date of the turning point in Egypt's industrial development may properly be said to be February 16, 1930, when the last of the commercial treaties, that with Italy, expired. The government had ready a tariff reform measure drafted by a committee of foreign experts engaged for the purpose in 1927, and, in command at last of its own tariff policy, put it into effect the very next day after the expiration of the treaty. The reforms were designed specifically to encourage and protect local manufacturing. High duties, in many cases practically prohibitive, were imposed on imports of products considered competitive with local products or for which the prospects for local manufacture seemed good. Duties on numerous raw materials were at the same time greatly reduced, although there were still many which, for revenue purposes, remained heavily taxed. The protection thus afforded to industry had an almost immediate effect, as is reflected by the increase in imports of raw material and machinery and decrease in those of manufactured goods in 1938 as compared with 1913.
World War II Prosperity and After: As has been pointed out, World War II gave new impetus to industrialization. During much of the North African campaign Egypt was a strategic base for the Allied forces. The prosperity they brought meant greatly augmented demands for certain supplies that could be furnished only by local industry. Also, in the Middle East the difficulty of getting supplies from other sources greatly increased the demand for Egyptian products. Hence many of the local industries expanded and diversified their output, many new enterprises were founded, and a greatly enlarged trade with the Middle East opened Egyptian eyes to the marketing possibilities there. Scarcely less important, of some 300,000 Egyptians who were employed by the Allied forces, many gained technical training and experience in manufacturing and repair work and in the servicing and maintenance of equipment. The aftermath was quite different from that of World War I. Local consumers had become more accustomed to local manufactures, a greater variety of products was available, and the quality of many items had been improved so that they could compete successfully with imported products. Moreover, many industrial organizations had profited greatly from the rise in prices which accompanied the increased demand for their products, and hence, for the first time in the history of Egyptian industry, possessed reserve funds that they could use for further expansion, modernization of their equipment, and even for new ventures.
Consequently, no such decline in industrial activity ensued as that which followed the brief period of prosperity during World War I. Instead, many industries continued to expand and a number of new ones were founded. The manufacture of textiles, rayon, plastics, chemical fertilizers, rubber goods, pharmaceutics, and steel castings, and refrigeration were among the more important enterprises involved in this new development. During the three-year period immediately after the end of the war more than a hundred new stock companies, with a total capital of upwards of £20,000,000, were formed. Various concessions offered by the government also brought in a number of branches of foreign companies with a view both to satisfying the demand for certain commodities that local manufacturers could not duplicate and at the same time to providing additional employment for local labor - among them automobile assembly plants and plants for the manufacture of toilet soap, electrical fixtures, and soft drinks. In response to this surge in industrial activity the government proceeded to put into effect certain long-needed measures designed to promote the organization of industrial companies and to assist them with credit facilities. The most important of these measures were the enactment of a Companies Law in 1947 and in 1949 the foundation of an Industrial Bank in which 51 per cent of the shares were government owned.
The beginnings of industrialization awaited the depression of the late 1920s and 1930s and World War II. The depression sent cotton prices tumbling, and Britain acceded to Egyptian demands to raise tariffs. Moreover, World War II, by substantially reducing the flow of foreign goods into the country, gave further impetus to the establishment of import-substitution industries. A distinguishing feature of the factories built at this time was that they were owned by Egyptian entrepreneurs.
In spite of the lack of industrialization, the economy grew rapidly throughout the nineteenth century. Growth, however, was confined to the cotton sector and the supporting transportation, financial, and other facilities. Little of the cotton revenues was invested in economic development. The revenues were largely drained out of the country as repatriated profits or repayments of debts that the state had incurred to pay for irrigation works and the extravagance of the khedives.
Rapid economic growth ended in the early 1900s. The supply of readily available land had been largely exhausted and multiple cropping, concentration on cotton, and perennial irrigation had lessened the fertility of the soil. Cotton yields dropped in the early 1900s and recovered their former level only in the 1940s, through investments in modern inputs such as fertilizers and drainage.
The fall in agricultural productivity and trade led to a stagnation in the per capita gross national product (GNP) between the end of World War I and the 1952 Revolution: the GNP averaged E£, in 1954 prices, at both ends of the period. By 1952 Egypt was in the throes of both economic and political crises, which culminated in the assumption of power by the Free Officers.
By necessity if not by design, the revolutionary regime gave considerably greater priority to economic development than did the monarchy, and the economy has been a central government concern since then. While the economy grew steadily, it sometimes exhibited sharp fluctuations. Analysis of economic growth is further complicated by the difficulty in obtaining reliable statistics. Growth figures are often disputed, and economists contend that growth estimates may be grossly inaccurate because of the informal economy and workers' remittances, which may contribute as much as one-fourth of GNP. According to one estimate, the gross domestic product (GDP), at 1965 constant prices, grew at an annual compound rate of about 4.2 percent between 1955 and 1975. This was about 1.7 times larger than the annual population growth rate of 2.5 percent in the same period. The period between 1967 and 1974, the final years of Gamal Abdul Nasser's presidency and the early part of Anwar el-Sadat's, however, were lean years, with growth rates of only about 3.3 percent. The slowdown was caused by many factors, including agricultural and industrial stagnation and the costs of the June 1967 war. Investments, which were a crucial factor for the preceding growth, also nose-dived and recovered only in 1975 after the dramatic 1973 increase in oil prices.
Like most countries in the Middle East, Egypt partook of the oil boom and suffered the subsequent slump. Available figures suggest that between 1975 and 1980 the GDP (at 1980 prices) grew at an annual rate of more than 11 percent. This impressive achievement resulted, not from the contribution of manufacturing or agriculture, but from oil exports, remittances, foreign aid, and grants. From the mid-1980s, GDP growth slowed as a result of the 1985-86 crash in oil prices. In the two succeeding years, the GDP grew at no more than an annual rate of 2.9 percent. Of concern for the future was the decline of the fixed investment ratio from around 30 percent during most of the 1975-85 decade to 22 percent in 1987.
Several additional economic periods followed:
Reform era.
Under comprehensive economic reforms initiated in 1991, Egypt has relaxed many price controls, reduced subsidies, reduced inflation, cut taxes, and partially liberalized trade and investment. Manufacturing had become less dominated by the public sector, especially in heavy industries. A process of public sector reform and privatization has begun to enhance opportunities for the private sector.
Agriculture, mainly in private hands, has been largely deregulated, with the exception of cotton and sugar production. Construction, non-financial services, and domestic wholesale and retail trades are largely private. This has promoted a steady increase in GDP and the annual growth rate. The Government of Egypt tamed inflation, bringing it down from double digits to a single digit. Currently, GDP is rising by about 7% per annum due to successful diversification.
Gross domestic product (GDP) per capita based on purchasing-power-parity (PPP) increased fourfold between 1981 and 2006, from Int$1,355 in 1981, to Int$2,525 in 1991, to Int$3,686 in 2001 and to an estimated Int$4,535 in 2006. Based on national currency, GDP per capita at constant 1999 prices increased from E£ in 1981, to E£ in 1991, to E£ in 2001 and to E£ in 2006.
Based on the current US$ prices, GDP per capita increased from US$587 in 1981, to US$869 in 1991, to US$1,461 in 2001 and to an estimated US$1,518 (which translates to less than US$130 per month) in 2006. According to the World Bank Country Classification, Egypt has been promoted from the low income category to lower middle income category. As of 2013, the average weekly salaries in Egypt reached E£ (approx. US$92), which grew by 20% from the previous year.
The reform program is a work in progress. Notably, the reform record has substantially improved since the Nazif government came to power. Egypt has made substantial progress in developing its legal, tax and investment infrastructure. Indeed, over the past five years, Egypt has passed, amended or introduced more than 15 pieces of legislation. The economy is expected to grow by about 4% to 6% in 2009–2010.
Surging domestic inflationary pressures from both economic growth and elevated international food prices led the Central Bank of Egypt to increase the overnight lending and deposit rates in sequential moves since February 2008. The rates stood at 11.5% and 13.5%, respectively, since 18 September 2008.
The rise of the World Global Financial Crisis led to a set of fiscal-monetary policy measures to face its repercussions on the national economy, including reducing the overnight lending and deposit rates by 1% on 12 February 2009. The rates currently stand at 10.5% and 12.5%, respectively.
Reform of energy and food subsidies, privatization of the state-owned Bank of Cairo, and inflation targeting are perhaps the most controversial economic issues in 2007–2008 and 2008–2009.
External trade and remittances.
Egypt's trade balance marked US$10.36 billion in FY2005 compared to US$7.5 billion. Egypt's main exports consist of natural gas, and non-petroleum products such as ready-made clothes, cotton textiles, medical and petrochemical products, citrus fruits, rice and dried onion, and more recently cement, steel, and ceramics.
Egypt's main imports consist of pharmaceuticals and non-petroleum products such as wheat, maize, cars and car spare parts. The current account grew from 0.7% of GDP in FY2002 to 3.3% at FY2005. Egypt's Current Account made a surplus of US$4,478 million in FY2005 compared to a deficit of US$158 million in FY2004. Italy and the USA are the top export markets for Egyptian goods and services. In the Arab world, Egypt has the largest non-oil GDP as of 2018.
According to the International Organization for Migration, an estimated 2.7 million Egyptians abroad contribute actively to the development of their country through remittance inflows, circulation of human and social capital, as well as investment. In 2009 Egypt was the biggest recipient of remittances in the Middle East; an estimated US$7.8 bn was received in 2009, representing approximately 5% of national GDP, with a decline of 10% from 2008, due mostly to the effect of the financial crisis. According to data from Egypt's Central Bank, the United States was the top sending country of remittances (23%), followed by Kuwait (15%), the United Arab Emirates (14%) and Saudi Arabia (9%).
Public finances.
On the revenues side, total revenues of the government were E£ in FY2002 and are projected to reach E£ in FY2008. Much of the increase came from a rise in customs, excise and tax revenues, particularly personal income and sales, entertainment, and vice taxes which constituted the bulk of total domestic taxes, due to recent tax reforms. This trend is likely to gradually widen the tax base in the forthcoming years. Revenues, however, have remained more or less constant (about 21% ) as a percentage of the GDP over the past few years.
On the expenditures side, strong expenditure growth has remained a main feature of the budget. This is mainly a result of continued strong expansion of (1) the public-sector wages driven by government pledges. Wages and Compensations increased from E£ in FY2002 to E£ in FY2008; (2) high interest payments on the public debt stock. Interest payments rose from E£ in FY2002 to E£ in FY2008. Importantly, dramatic increase in domestic debt which is projected to be roughly 62% of GDP in FY2008 up from 58.4% in FY2002; and (3) the costs of food and energy subsidies, which rose from E£ in FY2002 to E£ in FY2008.
The overall deficit, after adjusting for net acquisition of financial assets, remains almost unchanged from the cash deficit. The budget's overall deficit of E£ or −10.2% of GDP for FY2002 became E£ in FY2007, narrowing to −6.7% of GDP. The deficit is financed largely by domestic borrowing and revenue from divestment sales, which has become a standard accounting practice in Egypt's budget. The government aims at more sales of state assets in FY2008.
Recently, the fiscal conduct of the government faced strong criticism and heated debate in the Egyptian Parliament. Remarks were made about weak governance and management, loose implementation of tax collection procedures and penalties for offenders, and improper accounting of the overall system of basic subsidies and domestic debt, leading to domestic market disruptions, high inflation, increased inefficiencies and waste in the domestic economy.
Treasury bonds and notes issued to the Central Bank of Egypt constitute the bulk of the government domestic debt. Since FY2001, net government domestic debt (i.e. after excluding budget sector deposits) has been rising at a fluctuating but increasing rate. In 2014, it reached 77% of GDP, up from 54.3% in 2001.
Opportunity cost of conflict.
A report by Strategic Foresight Group has calculated the opportunity cost of conflict for Egypt since 1991 is almost US$800 billion. In other words, had there been peace since 1991, an average Egyptian citizen would be earning over US$3,000 instead of US$1,700 he or she may earn next year.
The financial sector.
The Central Bank of Egypt is the national reserve bank and controls and regulates the financial market and the Egyptian pound. There is a State regulatory authority for the Cairo Stock Exchange. State-owned or Nationalized banks still account for 85% of bank accounts in Egypt and around 60% of the total savings. The penetration of banking is low in rural areas at only 57% of households.
Monetary policy.
Up until 2007, there have been several favorable conditions that allowed the Central Bank of Egypt to accumulate net international reserves, which increased from US$20 billion in FY2005, to US$23 billion in FY2006, and to US$30 billion FY2007 contributing to growth in both reserve money and in broad money (M2). This declined to US$16.4 billion in Oct 2015, according to the Central Bank of Egypt.
Credit extended to the private sector in Egypt declined significantly reaching about E£ in FY2005. This credit crunch is due to the non-performing loans extended by the banks to business tycoons and top government officials.
Lending criteria have been tightened following the passing of the Money Laundering Law 80 in 2002 and the Banking Law 88 in 2003. Interest rates are no longer the dominant factor in banks' lending decisions. In fact, both the inefficiency and the absence of the role of the Central Bank of Egypt in qualitative and quantitative control, as well as in implementing banking procedures and standards, were almost entirely responsible for the non-performing loans crisis. Banks steadily reduced credit from its peak of about E£ in FY1999 and instead invested in more liquid no-risk securities such as treasury bills and government bonds. Improving private sector access to credit will critically depend on resolving the problem of non-performing loans with businesses and top government officials.
The era of inflation targeting (i.e. maintaining inflation within a band) has perhaps begun in Egypt more recently. Country experiences show that inflation targeting is a best-practice strategy for monetary policy. While monetary policy appears more responsive to inflationary pressures in Egypt recently, there is no core inflation measure, and the Central Bank of Egypt bases its targeting decisions on the headline consumer price index released by CAPMAS.
Surging domestic inflationary pressures from both economic growth and elevated international food prices led the Central Bank of Egypt (CBE) to increase the overnight lending and deposit rates in sequential moves since 2008: it was raised by 0.25% on 10 February 2008, by 0.5% on 25 March 2008, by 0.5% on 8 May 2008, by 0.5% on 26 June 2008, by 0.5% on 7 August 2008 and most recently on 18 September 2008 for the sixth time in a year by 0.5% when it stood at 11.5% and 13.5%, respectively.
The rise of the World Global Financial Crisis led to a set of fiscal-monetary policy measures to face its repercussions on the national economy, including reducing the overnight lending and deposit rates by 1% on 12 February 2009. The rates currently stand at 10.5% and 12.5%, respectively. The CBE is expected to further cut on interest rates over 2009, with seemingly little fear on Egyptian pound depreciation resulting from decreased interest rates.
Exchange rate policy.
The exchange rate has been linked to the US dollar since the 1950s. Several regimes were adopted including initially the conventional peg in the sixties, regular crawling peg in the seventies and the eighties and crawling bands in the nineties. Over that time period, there were several exchange rate markets including black market, parallel market and the official market. With the turn of the new millennium, Egypt introduced a managed float regime and successfully unified the pound exchange rate vis-à-vis foreign currencies.
The transition to the unified exchange rate regime was completed in December 2004. Shortly later, Egypt has notified the International Monetary Fund (IMF) that it has accepted the obligations of Article VIII, Section 2, 3, and 4 of the IMF Articles of Agreement, with effect from 2 January 2005. IMF members accepting the obligations of Article VIII undertake to refrain from imposing restrictions on the making of payments and transfers for current international transactions, or from engaging in discriminatory currency arrangements or multiple currency practices, except with IMF approval.
By accepting the obligations of Article VIII, Egypt gives assurance to the international community that it will pursue economic policies that will not impose restrictions on the making of payments and transfers for current international transactions unnecessary, and will contribute to a multilateral payments system free of restrictions.
In the fiscal year 2004 and over most of the fiscal year 2005, the pound depreciated against the US dollar. Since the second half of the fiscal year 2006 until the end of the fiscal year 2007, the pound gradually appreciated to E£ per US$1. While it was likely to continue appreciating in the short-term, given the skyrocketing oil prices and the weakening US economy, the advent of the global economic crisis of 2008, and resulting behavior of foreign investors exiting from the stock market in Egypt increased the dollar exchange rate against the Egyptian pound, which rose by more than 4% since Lehman Brothers declared bankruptcy. As the demand pressure from exiting foreign investors eases, the dollar exchange rate against the Egyptian pound is expected to decline. It stands at E£ per US$1 as of 18 June 2013. Due to the rising power of the US dollar, as of January 2015 one dollar equals E£.
On 3 November 2016, the Egyptian government announced that it would float the Egyptian pound in an effort to revive its economy, which had been suffering since 2011.
The conditions of a 2022 IMF loan required the currency to float with the result that it depreciated rapidly prompting international institutions and neighbors such as Saudi Arabia to help. The country has $83.3 B of foreign-currency debt outstanding.
Data.
The following table shows the main economic indicators in 1986–2021 (with IMF staff estimates in 2022–2027). Inflation below 10% is in green.
Natural resources.
Land, agriculture and crops.
Warm weather and plentiful water have in the past produced several crops a year. However, since 2009 increasing desertification has become a problem. "Egypt loses an estimated 11,736 hectares of agricultural land every year, making the nation's 3.1 million hectares of agricultural land prone to total destruction in the foreseeable future," said Abdel Rahman Attia, a professor of agriculture at Cairo University, to IRIN. Scarcity of clean water is also a problem.
Cotton, rice, wheat, corn, sugarcane, sugar beets, onions, tobacco, and beans are the principal crops. Land is worked intensively and yields are high. Increasingly, a few modern techniques are applied to producing fruits, vegetables and flowers, in addition to cotton, for export. Further improvement is possible. The most common traditional farms occupy each, typically in a canal-irrigated area along the banks of the Nile. Many small farmers also own cows, water buffalos, and chickens. Between 1953 and 1971, some farms were collectivised, especially in Upper Egypt and parts of the Nile Delta.
Several researchers have questioned the domestic (and import) policies for dealing with the so-called "wheat game" since the former Minister of Agriculture Yousef Wali was in office (1982–2004).
In 2006, areas planted with wheat in Egypt exceeded producing approximately 6 million metric tons. The domestic supply price farmers receive in Egypt is E£ (formula_0 US$211) per ton compared to approximately E£ (formula_0 US$340) per ton for import from the US, Egypt's main supplier of wheat and corn. Egypt is the U.S.'s largest market for wheat and corn sales, accounting for US$1 billion annually and about 46% of Egypt's needs from imported wheat. Other sources of imported wheat, include Kazakhstan, Canada, France, Syria, Argentina, and Australia. There are plans to increase the areas planted with wheat up to nearly by 2017 to narrow the gap between domestic food supply and demand. However, the low amount of gluten in Egypt wheat means that foreign wheat must be mixed in to produce bread that people will want to eat.
The Egyptian Commodities Exchange would be the first electronic commodities exchange in the MENA region, intended to facilitate the well-being of small farmers and the supply of products at reasonable prices, abolishing monopolies on goods.
The Western Desert accounts for about two-thirds of the country's land area. For the most part, it is a massive sandy plateau marked by seven major depressions. One of these, Fayoum, was connected about 3,600 years ago to the Nile by canals. Today, it is an important irrigated agricultural area.
Practically all Egyptian agriculture takes place in some of fertile soil in the Nile Valley and Delta.
Some desert lands are being developed for agriculture, including the controversial but ambitious Toshka project in Upper Egypt, but some other fertile lands in the Nile Valley and Delta are being lost to urbanization and erosion. Larger modern farms are becoming more important in the desert.
The agricultural objectives for the desert lands are often questioned; the desert farmland that was offered regularly at different levels and prices was restricted to a carefully selected, limited group of elites, who later profited by reselling the granted desert farmland in pieces. This allegedly transforms the desert farms into tourist resorts, undermines government plans to develop and improve the conditions of the poor, and causes a serious negative impact on agriculture and the overall national economy over time. One company, for example, bought over 70 hectares of desert farmland for a price as low as E£ per square meter and now sells it for E£ per square meter. In these terms, 70 hectares bought for about US$6,000 in 2000 sold for over US$3.7 million in 2007. Currently, no clear solution exists to deal with these activities.
Agricultural biomass, including agricultural wastes and animal manure, produces approximately 30 million metric tons of dry material per year that could be used, inter alia, for generating bioenergy and improving the quality of life in rural Egypt. Despite plans to establish waste-to-energy plants, this resource remains largely underused.
Since early 2008, with the world food prices soaring, especially for grains, calls for striking a "new deal" on agriculture increased. Indeed, 2008 arguably marks the birth of a new national agriculture policy and reform.
Acquisition and ownership of desert land in Egypt is governed by so-called "Egyptian Desert Land Law". It defines desert land as the land two kilometers outside the border of the city. Foreign partners and shareholders may be involved in ownership of the desert land, provided Egyptians own at least 51% of the capital.
Water resources.
"Egypt", wrote the Greek historian Herodotus 25 centuries ago, "is the gift of the Nile." The land's seemingly inexhaustible resources of water and soil carried by this mighty river created in the Nile Valley and Delta the world's most extensive oasis. Without the Nile River, Egypt would be little more than a desert wasteland.
The river carves a narrow, cultivated floodplain, never more than 20 kilometers wide, as it travels northward toward Cairo from Lake Nasser on the Sudanese border, behind the Aswan High Dam. Just north of Cairo, the Nile spreads out over what was once a broad estuary that has been filled by riverine deposits to form a fertile delta about wide at the seaward base and about from south to north.
Before the construction of dams on the Nile, particularly the Aswan High Dam (started in 1960, completed in 1970), the fertility of the Nile Valley was sustained by the water flow and the silt deposited by the annual flood. Sediment is now obstructed by the Aswan High Dam and retained in Lake Nasser. The interruption of yearly, natural fertilization and the increasing salinity of the soil has been a manageable problem resulting from the dam. The benefits remain impressive: more intensive farming on thousands of square kilometers of land made possible by improved irrigation, prevention of flood damage, and the generation of millions of gigajoules of electricity at low cost.
Groundwater.
The rain falling on the coast of the southern regions is the main source of recharge of the main reservoir. There is a free-floating layer of reservoir water on top of sea water up to a distance of 20 km south of the Mediterranean Sea. The majority of wells in the coastal plain depend on the water level in the main reservoir. The coastal water supply comes from water percolating through the coastal sand and water runoff from the south. This low-salinity water is used for many purposes.
Mineral and energy resources.
Egypt's mineral and energy resources include petroleum, natural gas, phosphates, gold and iron ore. Crude oil is found primarily in the Gulf of Suez and in the Western Desert. Natural gas is found mainly in the Nile Delta, off the Mediterranean shore, and in the Western Desert. Oil and gas accounted for approximately 7% of GDP in fiscal year 2000–01.
Export of petroleum and related products amounted to US$2.6 billion in the year 2000. In late 2001, Egypt's benchmark "Suez Blend" was about US$16.73 per barrel ($105/m3), the lowest price since 1999.
Crude oil production has been in decline for several years since its peak level in 1993, from in 1993 to in 1997 and to in 2005. At the same time, the domestic consumption of oil increased steadily ( and in 1997 and 2005 respectively), and in 2008 oil consumption reached . A linear trend projects that domestic demand outpaced supply in 2008–2009, turning Egypt into a net importer of oil. To minimize this potential, the government of Egypt has been encouraging the exploration, production and domestic consumption of natural gas. Oil production was in 2008, and natural gas output continued to increase, reaching 48.3 billion cubic meters in 2008.
Domestic resources meet only about 33% of Egypt's domestic demand, meaning large imports from Saudi Arabia, UAE and Iraq are necessary.
Over the last 15 years, more than 180 petroleum exploration agreements have been signed and multinational oil companies spent more than US$27 billion on exploration campaigns. These activities led to the discovery of about 18 crude oil fields and 16 natural gas fields in FY 2001. The total number of discoveries rose to 49 in FY 2005. As a result of these discoveries, crude oil reserves as of 2009 are estimated at , and proven natural gas reserves are 1.656 trillion cubic meters, with likely additional discoveries from further exploration campaigns.
In August 2007, it was announced that signs of oil reserves in Kom Ombo basin, about north of Aswan, was found and a concession agreement was signed with Centorion Energy International for drilling. The main natural gas producer in Egypt is the International Egyptian Oilfield Company (IEOC), a branch of Italian Eni. Other companies including BP, APA Corporation and Royal Dutch Shell carry out activities of exploration and production by means of concessions granted for a period of generally ample time (often 20 years) and in different geographic zones of oil and gas deposits in the country.
Gold mining is more recently a fast-growing industry with vast untapped gold reserves in the Eastern Desert. To develop this nascent sector the Egyptian government took a first step by awarding mineral concessions, in what was considered the first international bid round. Two miners who have produced encouraging technical results include AngloGold Ashanti and Alexander Nubia International.
Gold production facilities are now a reality in the Sukari Hills, located close to Marsa Alam in the Eastern Desert. The concession for the mine was granted to Centamin, an Australian joint stock company, with a gold exploitation lease for a 160-square-kilometer area. Sami El-Raghy, Centamin's chairman, has repeatedly stated that he believes Egypt's yearly revenues from gold could in the future exceed the total revenues from the Suez Canal, tourism and the petroleum industry.
The Ministry of Petroleum and Mineral Resources has established expanding the Egyptian petrochemical industry and increasing exports of natural gas as its most significant strategic objectives and in 2009 about 38% of local gas production was exported.
As of 2009, most Egyptian gas exports (approximately 70%) are delivered in the form of liquefied natural gas (LNG) by ship to Europe and the United States. Egypt and Jordan agreed to construct the Arab Gas Pipeline from Al Arish to Aqaba to export natural gas to Jordan; with its completion in July 2003, Egypt began to export of gas per year via pipeline as well. Total investment in this project is about $220 million. In 2003, Egypt, Jordan and Syria reached an agreement to extend this pipeline to Syria, which paves the way for a future connection with Turkey, Lebanon and Cyprus by 2010. As of 2009, Egypt began to export to Syria of gas per year, accounting for 20% of total consumption in Syria.
In addition, the East Mediterranean Gas (EMG), a joint company established in 2000 and owned by Egyptian General Petroleum Corporation (EGPC) (68.4%), the private Israeli company Merhav (25%) as well as Ampal-American Israel Corp. (6.6%), has been granted the rights to export natural gas from Egypt to Israel and other locations in the region via underwater pipelines from Al 'Arish to Ashkelon which will provide Israel Electric Corporation (IEC) of gas per day. Gas supply started experimentally in the second half of 2007. As of 2008, Egypt produces about , from which Israel imports of account for about 2.7% of Egypt's total production of natural gas. According to a statement released on 24 March 2008, Merhav and Ampal's director, Nimrod Novik, said that the natural gas pipeline from Egypt to Israel can carry up to 9 billion cubic meters annually which sufficiently meet rising demand in Israel in the coming years.
According to a memorandum of understanding, the commitment of Egypt is contracted for 15 years at a price below US$3 per million British thermal units, though this was renegotiated at a higher price in 2009 (to between US$4 and US$5 per million BTU), while the amounts of gas supplied were increased. Exporting natural gas to Israel faces broad popular opposition in Egypt.
Agreements between Egypt and Israel allow for Israeli entities to purchase up to 7 billion cubic meters of Egyptian gas annually, making Israel one of Egypt's largest natural gas export markets. The decision to export natural gas to Israel was passed in 1993, at the time when Dr. Hamdy Al-Bambi was Minister of Petroleum and Mr. Amr Moussa was Minister of Foreign Affairs. The mandate to sign the memorandum of understanding (MoU) delegating the Ministry of Petroleum, represented by the Egyptian General Petroleum Company (EGPC), to contract with the EMG Company was approved by the former Prime Minister Dr. Atef Ebeid in the Cabinet's meeting No. 68 on 5 July 2004, while he served as acting "President of the Republic" during President Hosni Mubarak's medical treatment in Germany.
A new report by Strategic Foresight Group on the Cost of Conflict in the Middle East also details how in the event of peace an oil and gas pipeline from Port Said to Gaza to Lebanon would result in a transaction value for Egypt to the tune of $1–2 billion per year.
In June 2009, it was reported that Cairo had said Israelis would be allowed to dig for oil in Sinai. The report came at a time when the government was being heavily criticized for exporting natural gas to Israel at an extremely low rate.
Starting in 2014, the Egyptian government has been retaining gas production for domestic market, reducing the volumes available for export. According to the memorandum of understanding, the Leviathan field off Israel's Mediterranean coast would supply 7 billion cubic meters annually for 15 years via an underwater pipeline. This equates to average volumes of 685 million cubic feet a day, the equivalent of just over 70% of the BG-operated Idku plant's daily volumes.
In March 2015, BP signed a $12 billion deal to develop natural gas in Egypt intended for sale in the domestic market starting in 2017.
BP said it would develop a large quantity of offshore gas, equivalent to about one-quarter of Egypt's output, and bring it onshore to be consumed by customers. Gas from the project, called West Nile Delta, is expected to begin flowing in 2017. BP said that additional exploration might lead to a doubling of the amount of gas available.
Main economic sectors.
Agricultural sector.
Irrigation.
Irrigation plays a major role in a country the very livelihood of which depends upon a single river, the Nile. The most ambitious of all the irrigation projects is that of the Aswan High Dam, completed in 1971. A report published in March 1975 by the National Council for Production and Economic Affairs indicated that the dam had proved successful in controlling floodwaters and ensuring a recurring water supply, but that water consumption had been greater than needed and would have to be controlled. Some precious land was lost below the dam because the flow of Nile silt was stopped, and increased salinity remains a major problem. Furthermore, five years of drought in the Ethiopian highlands—the source of the Nile River's water—caused the water level of Lake Nasser, the Aswan High Dam's reservoir, to drop to its lowest level in 1987.
In 1996, the level of water behind the High Dam and in Lake Nasser reached the maximum level since the completion of the dam. Despite this unusual abundance of water supply, Egypt can only use 55.5 billion cu m (1.96 trillion cu ft) every year, according to the Nile Basin Agreement signed in 1959 between Egypt and Sudan. Another major project designed to address the water scarcity problem is the New Valley Project (the "second Nile"), aimed at development of the large artesian water supplies underlying the oases of the Western Desert.
In 2010 Egypt's fertile area totaled about , about one-quarter of which has been reclaimed from the desert after the construction of the Aswan High Dam. The government aims to increase this number to 4.8 million hectares by 2030 through additional land reclamation. Even though only 3 percent of the land is arable, it is extremely productive and can be cropped two or even three times annually. However, the reclaimed lands only add 7 percent to the total value of agricultural production. Surface irrigation is forbidden by law in reclaimed lands and is only used in the Nile Valley and the Delta; the use of pressurized and localized irrigation is compulsory in other parts of the country. Most land is cropped at least twice a year, but agricultural productivity is limited by salinity, which in 2011 affected 25% of irrigated agriculture to varying degrees. This is mainly caused by insufficient drainage as well as seawater intrusion into aquifers as a result of over-extraction of groundwater, the latter primarily affecting the Nile Delta. Thanks to the installation of drainage systems, salinized areas were reduced from about 1.2 million hectares in 1972 to 900,000 hectares in 2010.
In the 1970s, despite significant investment in land reclamation, agriculture lost its position as the leading economic sector. Agricultural exports, which were 87% of all merchandise export by value in 1960, fell to 35% in 1974 and to 11% by 2001. In 2000, agriculture accounted for 17% of the country's GDP and employed 34% of the workforce.
Crops.
According to 2016 statistics from the Food and Agriculture Organization of the United Nations, Egypt is the world's largest producer of dates; the second largest producer of figs; the third largest producer of onions and eggplants; the fourth largest producer of strawberries and buffalo milk as well as the fifth largest producer of tomatoes and watermelon.
Cotton has long been a primary exported cash crop, but it is no longer vital as an export. Production in 1999 was 243,000 tons. Egypt is also a substantial producer of wheat, maize, sugarcane, fruit and vegetables, fodder, and rice; substantial quantities of wheat are also imported, especially from the United States and Russia, despite increases in yield since 1970, and significant quantities of rice are exported.
Citrus, dates, and grapes are the main fruits by cultivated area. Agricultural output in tons in 1999 included corn, 9,350,000; wheat, 6,347,000; rice, 5,816,000; potatoes, 1,900,000; and oranges, 1,525,000. The government exercises a strong degree of control over agriculture, not only to ensure the best use of irrigation water but also to confine the planting of cotton in favor of food grains. However, the government's ability to achieve this objective is limited by crop rotational constraints.
Cacti - especially cactus pears - are extensively grown throughout the country, including Sinai, and extend into neighbouring countries. They are a crop of the Columbian Exchange. Cactus hedges - both intentionally planted and wild garden escapes - formed an important part of defensible positions during the Sinai and Palestine campaign of World War I. Some soldiers unfamiliar with the plants even tried eating them, with unpleasant results.
Land ownership.
The agrarian reform law of 1952 provided that no one might hold more than 200 feddans, that is, (1 Egyptian feddan=0.42 hectares=1.038 acres), for farming, and that each landholder must either farm the land himself or rent it under specified conditions. Up to 100 additional feddans might be held if the owner had children, and additional land had to be sold to the government. In 1961, the upper limit of landholding was reduced to 100 feddans, and no person was allowed to lease more than 50 feddans. Compensation to the former owners was in bonds bearing a low rate of interest, redeemable within 40 years. A law enacted in 1969 reduced landholdings by one person to 50 feddans.
By the mid-1980s, 90% of all land titles were for holdings of less than five feddans (), and about 300,000 families, or 8% of the rural population, had received land under the agrarian reform program. According to a 1990 agricultural census, there were some three million small land holdings, almost 96% of which were under five feddans. As these small landholdings restricted the ability of farmers to use modern machinery and agricultural techniques that improve and take advantage of economies of scale, there have since the late 1980s been many reforms attempting to deregulate agriculture by liberalizing input and output prices and eliminating crop area controls. As a result, the gap between world and domestic prices for Egyptian agricultural commodities has been closed.
Industrial sector.
Automobiles manufacturing.
El Nasr Automotive Manufacturing Company is Egypt's state-owned automobile company, founded in 1960 in Helwan, Egypt. The company manufactures various vehicles under license from Zastava Automobili, Daimler AG, Kia, and Peugeot. Its current lineup consists of the Jeep Cherokee; the open-top, Wrangler-based Jeep AAV TJL; the Kia Spectra; the Peugeot 405; and the Peugeot 406.
Other automobile manufacturers in Egypt include Arab American Vehicles, Egy-Tech Engineering, Ghabbour Group, WAMCO (Watania Automotive Manufacturing Company) and MCV. MCV was established in 1994 to represent Mercedes-Benz in the commercial vehicle sector in Egypt, producing a range of buses and trucks for domestic sale and for export throughout the Arab World, Africa, Latin America and Eastern Europe. The manufacturing plant in El Salheya employs c. 2500 people.
Chemicals.
Abu Qir Fertilizers Company is one of the largest producers of nitrogen fertilizers in Egypt and the MENA region. It accounts for nearly 50% of all nitrogen fertilizer production in Egypt. The company was established in 1976 with the construction of its first ammonia urea production facility, located in Abu Qir, 20 kilometers east of Alexandria. Egypt Basic Industries Corporation (EBIC) is also one of the largest producers of ammonia in the country.
Consumer electronics and home appliances.
Olympic Group is the largest Egyptian company in the field of domestic appliances. The company mainly manufactures washing machines, air conditioners, refrigerators, electric water heaters and gas cookers.
Bahgat Group is a leading company in the fields of electronics, home appliances, furniture and real estate. It also owns TV stations. The group is composed of the following companies: Egy Aircon, International Electronics Products, Electrical Home appliances, General Electronics and Trading, Goldi Trading, Goldi Servicing, Egy Medical, Egyptian Plastic Industry, Egy House, Egy Speakers, Egy Marble, Dreamland and Dream TV.
Steel industries.
In 2022 Egypt was ranked the 20th largest steel producing country with a production of 9.8 million tons. EZDK is the largest steel company in Egypt and the Middle East, today part of Ezz Industries. It owns four steel plants in Alexandria, Sadat, Suez and 10th of Ramadan. It was ranked 77th on the list of the world's largest steel companies by the World Steel Association in 2020, with a production of 4.57 million tons.
Textiles and clothing.
Textiles and clothing is one of the largest manufacturing and exporting sectors in the country and a major source of employment. The Egyptian apparel industry is attractive for two reasons. Firstly, its proximity to European markets, whose rapidly changing fashions require quick replenishment, gives Egypt a logistical advantage. Secondly, garment production is a low-capital, labor-intensive industry, and the local population of 66 million provides a ready workforce as well as a natural local consumer market that acts as a springboard for exports.
The textile industry contributes one quarter of Egypt's non-oil export proceeds, with cotton textiles comprising the bulk of Egypt's TC export basket. The public sector accounts for 90% of cotton spinning, 60% of fabric production and 30% of apparel production in Egypt. Misr Fine Spinning and Weaving is the largest enterprise of its kind in Africa and the Middle East. The private sector apparel industry is one of the most dynamic manufacturing sectors in Egypt.
The requirements of importers to Egypt of textiles and leather products were set out in the Egyptian Ministerial decrees 626/2011 and 660/2011. The Egyptian trade oversight agency, the General Organization for Export and Import Control (GOEIC), demanded in June 2012 that an inspection certificate accompany each shipment, unless the importer is pre-registered with the GOEIC. The Ministerial Decrees demand that imported goods certify their compliance with the mandatory quality and safety standards of Egypt.
Arafa Holding is a global apparel manufacturer and retailer, operating through a strong vertically integrated platform at the local & international levels.
Energy sector.
Egypt suffered blackouts during the summer of 2014 that lasted for up to six hours per day. A rapid series of reforms cut energy subsidies, and Egypt quickly developed the Zohr gas field in the Mediterranean, which was discovered in 2015. The country now has an oversupply of electricity and aims to source 20% of its electricity from renewables by 2022 and 55% by 2050.
Egypt and Cyprus are considering implementing the proposed EuroAfrica Interconnector project. This consists of laying a 2 GW HVDC undersea power cable between them and between Cyprus and Greece, thus connecting Egypt to the greater European power grid. The interconnector would make Egypt an electricity hub between Europe and Africa. The presidents of Egypt and Cyprus and the prime minister of Greece met in Nicosia on 21 November 2017 and showed their full support for the EuroAfrica Interconnector, pointing out its importance for the energy security of the three countries.
On 29 October 2007, Egypt's president, Hosni Mubarak gave the go-ahead for building several nuclear power plants. Egypt's nuclear route is purely peaceful and fully transparent, but faces technical and financing obstacles. Egypt is a member of the IAEA and has both signed and ratified the Nuclear Nonproliferation Treaty (NPT). Currently, a draft Law on Nuclear Energy is being reviewed by the IAEA and expected to be passed by the Egyptian Parliament. Many other countries in the region, including Libya, Jordan, UAE, Morocco, and Saudi Arabia aspire to build nuclear power plants.
Construction and contracting sector.
Orascom Construction Industries is a leading Egyptian EPC (engineering, procurement and construction) contractor, based in Cairo, Egypt and active in more than 20 countries. OCI was established in Egypt in 1938 and is owned by Onsi Sawiris. It was nationalized in 1953 and then de-nationalized in 1977. The company is the first multinational Egyptian corporation, and is one of the core Orascom Group companies. As a cement producer, OCI owned and operated cement plants in Egypt, Algeria, Turkey, Pakistan, northern Iraq and Spain, which had a combined annual production capacity of 21 million tons.
The Talaat Moustafa Group (TMG), one of the largest conglomerates in Egypt, was founded by Talaat Moustafa and is headed by his son, Hisham Talaat Moustafa.
New cities.
The proposed new capital of Egypt is a large-scale project under construction since 2015, announced by then-Egyptian housing minister Moustafa Madbouly at the Egypt Economic Development Conference on 13 March 2015. As of 2024 the project, budgeted at $45bn, is in phase 1, and spending is approximately $58bn.
New Alamein is another city that is currently being built in Egypt's north coast planned on an area of 48,000 feddans. New Alamein is one of the fourth generation cities being built in Egypt, and the first phase is scheduled to be concluded in a year.
Services sector.
Banking and insurance.
The banking sector has gone through many stages since the establishment of the first bank in 1856, followed by the emergence of private sector and joint venture banks during the period of the Open Door Policy in the 1970s. Moreover, the Egyptian banking sector has been undergoing reforms, privatization, and mergers and acquisitions from 1991 up to today.
The banking system comprises 57 banks: 28 commercial banks, four of which are state-owned; 26 investment banks (11 joint venture banks and 15 branches of foreign banks); and three specialized banks. Although private and joint venture banks are growing, many remain relatively small with few branch networks. State-owned commercial banks still rank among the top lenders in Egypt's banking sector. Over the past decades, European banks have been exiting Egypt's financial sector. For instance, France's Société Générale sold National Société Générale Bank to Qatar National Bank (QNB) in 2012, and the bank has been rebranded as QNB Alahli.
Egypt's banking system has undergone major reforms since the 1990s and today consumers are faced with a liberalized and modernized system which is supervised and regulated according to internationally accepted standards.
The mortgage market is underdeveloped in Egypt, and foreigners cannot yet obtain a mortgage for a property in Egypt. In the near future, a new mortgage law is expected to enable purchasers to take out property loans, which will open up the market considerably and spur development and real estate activity.
Communications.
Egypt has long been the cultural and informational centre of the Arab world, and Cairo is the region's largest publishing and broadcasting centre.
The telecommunications liberalisation process started in 1998 and is still ongoing, but at a slow pace. Private sector companies operate in mobile telephony and Internet access. There were 10 million fixed phone lines, 31 million mobile phones, and 8.1 million Internet users by August 2007.
Transport.
Transport in Egypt is centered around Cairo and largely follows the pattern of settlement along the Nile. The main line of the nation's railway network runs from Alexandria to Aswan and is operated by Egyptian National Railways. The road network has expanded rapidly to over , covering the Nile Valley and Nile Delta, Mediterranean and Red Sea coasts, the Sinai, and the Western oases.
In addition to overseas routes, Egypt Air provides reliable domestic air service to major tourist destinations from its Cairo hub. The Nile River system (about .) and the principal canals (1,600 km.) are used for local transportation.
The Suez Canal is a major waterway for international commerce and navigation, linking the Mediterranean and Red Sea. It is run by the Suez Canal Authority, headquartered in Port Said. The ministry of transportation, along with other governmental bodies are responsible for transportation in Egypt. Major ports are Alexandria, Port Said, and Damietta on the Mediterranean, and Suez, Ain Sokhna and Safaga on the Red Sea.
Tourism sector.
The Egyptian tourism industry is one of the most important sectors in the economy, in terms of high employment and incoming foreign currency. It has many constituents of tourism, mainly historical attractions, especially in Cairo, Luxor and Aswan, but also beach and other sea activities. The government actively promotes foreign tourism since it is a major source of currency and investment. The political instability since January 2011 caused a reduction in tourism, but the following year it began to recover. In Upper Egypt, tourism that "provided one of the most important sources of income besides farming has dried out".
Egypt's government has announced work on multiple projects within the tourism sector, most prominently the Grand Egyptian Museum, which was set to open in June 2021 and become the largest archaeological museum in the world.
Emerging sectors.
ICT sector.
The Egyptian information and communications technology sector has been growing significantly since it was separated from the transportation sector. The telecommunications market has been officially deregulated since the beginning of 2006, in accordance with the WTO agreement signed in 2003.
The government established ITIDA as a governmental entity through Law 15 of 2004. The agency aims to pave the way for the diffusion of e-business services in Egypt, capitalizing on the authority's various mandates, such as activating the Egyptian e-signature law and supporting an export-oriented IT sector in Egypt.
While the move could open the market for new entrants, add and improve the infrastructure for its network, and in general create a competitive market, the fixed line market is de facto monopolized by Telecom Egypt.
The cellular phone market was a duopoly with artificially high prices, but in the past couple of years it has witnessed a traditional price war between the incumbents Mobinil and Vodafone. A 500-minute outbound local and long-distance calling plan currently costs approximately US$30, compared to approximately US$90 in 2005. While the current price is not especially expensive, it is still above international prices, as plans never allow "unlimited night & weekend minutes".
A third GSM 3.5G license was awarded in April 2006 for US$3 billion to a consortium led by the UAE company Etisalat (66%), Egypt Post (20%), the National Bank of Egypt (NBE) (10%), and the NBE's Commercial International Bank (4%), thus moving the market from a duopoly to an oligopoly.
On 24 September 2006, the National Telecommunication Regulatory Authority (NTRA) announced a license award to an Egyptian-Arab private sector consortium of companies to extend a maritime cable for international traffic. The US$120 million cable project will serve the Gulf region and southern Europe. The construction of the cable should decrease the currently high international call costs, increase domestic demand for broadband internet services and, importantly, increase exports of international telecommunication services by Egyptian companies, mostly in the Smart Village.
It is expected that NTRA will award two licenses for international gateways using open technology and deploy WiMax technology enabling the delivery of last-mile wireless broadband access as an alternative to ADSL.
The main barriers to growth for Egypt's ICT sector are the monopoly held by telecommunication corporations and labour unrest among the workforce.
Largest companies.
In 2009, three Egyptian companies were listed in the Forbes Global 2000 list – an annual ranking of the top 2000 public companies in the world by Forbes magazine. These companies were:
Investment.
The stock market capitalisation of listed companies in Egypt was put at $79.672 billion in 2005 by the World Bank dropping to $58 billion in 2012.
Investment climate.
The Egyptian equity market is one of the most developed in the region with more than 633 listed companies. Market capitalization on the exchange doubled in 2005 from US$47.2 billion to US$93.5 billion in 2006, peaking at US$139 billion in 2007. Subsequently, it has fallen to US$58 billion in 2012, with turnover surging from US$1.16 billion in January 2005 to US$6 billion in January 2006.
Private equity has not been widely used in Egypt in the past as a source of funding for businesses. The government, however, has instituted a number of policy changes and reforms specifically intended to develop internal private equity funds and to attract private equity funding from international sources.
The major industries include textiles, hydrocarbon and chemical production, and generic pharmaceutical production. Unemployment is high at about 10.5%.
Until 2003, the Egyptian economy suffered from shortages in foreign currency and excessively elevated interest rates. A series of budget reforms were conducted to redress weaknesses in Egypt's economic environment and to boost private sector involvement and confidence in the economy.
Major fiscal reforms were introduced in 2005 to tackle the informal sector which according to estimates represents somewhere between 30% and 60% of GDP. Significant tax cuts for corporations were introduced for the first time in Egyptian history. The new Income tax Law No 91 for 2005 reduced the tax rate from 40% to 20%. According to government figures, tax filing by individuals and corporations increased by 100%.
Many changes were made to cut trade tariffs. Among the legislators' goals were tackling the Black Market, reducing bureaucracy and pushing through trade liberalization measures. Amendments to Investment and Company law were introduced to attract foreign investors. For example, the number of days required for establishing a company was dramatically reduced.
Significant improvements to the domestic economic environment increased investors' confidence in Egypt. The Cairo & Alexandria Stock Exchange is considered among the best ten emerging markets in the world. The changes to the policy also attracted increased levels of foreign direct investment in Egypt. According to the UN Conference on Trade and Development's World Investment Report, Egypt was ranked the largest recipient of foreign direct investment in Africa.
Given the large number of amendments to laws and regulations, Egypt has succeeded to a certain extent in conforming to international standards. Very recently the Cairo & Alexandria Stock Exchange (CASE) was welcomed with full membership into the World Federation of Exchanges (WFE)—the first Arab country to be invited.
Enforcement of these newly adopted regulatory frameworks remains, at times, problematic. Problems like corruption hamper economic development in Egypt. Many scandals involving bribery were reported during the past years: "In 2002 alone, as many as 48 high-ranking officials—including former cabinet ministers, provincial governors and MPs—were convicted of influence peddling, profiteering and embezzlement. Maintaining good relations with politicians is sometimes a key to business success in Egypt. On a scale from 0 to 10 (with 0 being highly corrupt), Egypt scored a 3.3."
According to a study by the International Organization for Migration, 20% of Egyptian remittance-receiving households interviewed channelled the remittances towards various forms of investment, while the large majority (80%) was more concerned about using remittances for meeting the daily needs of their families including spending on health care and education. Among the 20% of households that decided to invest, 39% invested in real estate, 22% invested in small businesses employing fewer than five people and the smallest proportions of investors (6%) invested in medium private business employing no more than 20 people. According to Egypt's Human Development Report 2008, despite representing approximately 5% of GDP, remittances provided the initial capital for only 1.4% of newly established small and medium enterprises in Egypt in 2003–2004.
On 14 March 2020, the government of Egypt published Parliament Law No. 190 – which was established in the year 2019 – regarding getting Egyptian citizenship through investment. The minimum required contribution is US$250,000 with total procedure completion expected within a 6-9 month timeframe.
Response to the global financial crisis.
The challenges of the global food crisis followed by challenges of the global financial crisis made room for more integrated policy reforms. Considering the massive economic measures that have been taken over the past 12 months or so, Egyptian economic policymakers score high based on the inside lag, i.e. the lapse of time between the moment that the shock began to affect the economy and the moment that economic (monetary and fiscal) policy as well as the regulatory policy are altered and put into effect in response to the shock to various markets: goods market (real GDP), the labor market (unemployment rate), money market (interest rate and inflation), and the financial (stock and bond) market. Indeed, moderate financial panic occurred driven—at least partially—by the fear that other investors are about to panic and sell. There were falls in stock and bond market prices, and rises in nominal interest rates.
Egypt has a population of about 97 million, with the population concentrated within a region on either side of the Nile River. The majority of the population is employed in the services sector, followed by agriculture and industrial production. Approximately one-third of Egyptian labour is engaged directly in farming, and many others work in the processing or trading of agricultural products.
The unemployment rate increased from 10.3% in FY2004 to 11.2% in FY2005. The average rate of growth of employment in the publicly owned enterprise sector was −2% per year between FY1998 and FY2005, as a result of an aggressive privatization program. On the other hand, private sector employment grew at an average rate of 3% over that period. In addition, government sector employment grew at almost double the rate of the private sector over the same period.
In general, the average weekly wage in the private sector is, in many instances, higher than that of the public sector. In some other instances, e.g. wholesale and retail trade, the weekly wage is about half that of the public sector.
As a result of the weak role played by the Ministry of Manpower and Trade Unions in creating a balance between the rights of workers and the interests of company owners in the private sector, privatization has led to worsening employment problems and a deterioration in working conditions and health, and many workers have recently resorted to strikes and picketing.
In an effort to quell discontent over rising food prices, Egypt offered government and public sector workers a pay rise of up to 30%. The offer came in the May Day speech delivered by President Mubarak to the Egyptian General Federation of Trade Unions.
"We must go in dealing with the current global (food) crisis, on two basic tracks (1) we must strengthen the food security of our low-income people, (2) we must achieve a balance between wages and prices." President Mubarak said.
The pay rise originally proposed in the government budget ranged between 15% and 20%, but the decision to double it was taken amid heightened worries that widespread anger over prices could lead to a social explosion. The pay rise took effect immediately, rather than waiting for the start of the new fiscal year on 1 July 2008, and was to be financed from real resources.
While the headline CPI inflation rate was 15.8% (17.6% in rural areas, 14.4% in urban areas) in March 2008, the overall food price inflation rate was 23.7% (26.9% in rural areas, 20.5% in urban areas). Moreover, in April 2008 in urban areas, the headline CPI inflation rate reached 16.4% while the food price inflation rate was 22.0%. This underlines the statement that "the inflation rate as measured by the headline CPI does not concern the poor and low-income people, who are the majority of people in rural and urban Egypt, since they spend most of their income on food." Approximately 55 million poor and low-income citizens, representing about 75% of the population, are currently enrolled in food ration cards.
In April 2009 it was reported that Egypt feared the return of 500,000 Egyptian laborers working in the Gulf states.
In May 2019, CAPMAS reported that Egypt's annual urban consumer price inflation had eased to 13% in April from 14.2% in March.
Poverty and income distribution.
The Minister of Economic Development, Othman Mohamed Othman, once mentioned that the poverty rate in Egypt had risen from 19 percent of the population in 2005 to 21 percent in 2009. In 2010–2011, the poverty rate rose further to 25% of the population.
Various statistical databases show that Egypt has:
According to the 2005 Household Income, Expenditure and Consumption Survey (HIECS), estimated per capita poverty lines vary across the regions. Data from a World Bank and Ministry of Economic Development poverty assessment, based on comparisons between actual expenditures and the cost of a consumption basket securing 2,470 calories per day per person, show that individual Egyptians who spent less than E£ per year in 2005 are considered extreme poor, those who spent less than E£ per year are poor and those who spent less than E£ per year are near poor.
Overall about 29.7% of the Egyptian population are in the range of extreme poor to near poor:
Poverty has a strong regional dimension in Egypt and is concentrated in Upper Egypt, both urban (18.6%) and rural (39.1%), while metropolitan areas are the least poor (5.7%). The government is currently employing a recently completed poverty map as a tool for geographic targeting of public resources.
According to a report published by the World Bank in April 2019, 60% of the country's population is "either poor or vulnerable". Egypt's national poverty rate was 24.3% in 2010 and moved up to around 30% by 2015. According to the 2019 Global Hunger Index, Egypt suffers from a moderate level of hunger, ranking 61st of 117 countries, compared to 61st of 119 countries in 2018. Food affordability, quality and safety remain challenges as Egypt continues to rely on global markets for more than half of its staples.
Causes of poverty.
High cost of doing business.
According to Rapid Assessment surveys conducted by the World Bank Group in 2011 and 2012, business managers rank informal gifts or payments, anticompetitive practices and regulatory policy uncertainty high on the list of obstacles to creating and growing a business. In addition, the amount of paperwork required for construction, imports, and exports is burdensome and the time for the government to process this paperwork is lengthy. Traders need to submit eight documents to export and ten to import—as opposed to France, for example, where only 2 documents are needed both for imports and exports. Additionally, there is no bankruptcy law in Egypt and entrepreneurs who fail to repay their debts can face prison.
High population growth.
Egypt's fertility rate has dramatically declined since the 1960s (6.6 children per woman) to about 3.2 children per woman in 2021 but is still considered fairly high. Egypt's population grew from 44 million in 1981 to more than 106 million today.
Corruption.
Businesses having more informal connections within the government receive preferable treatment navigating through Egypt's cumbersome regulatory framework, providing a disincentive for competition. An inefficient and sporadically enforced legal system and a widespread culture of corruption leave businesses reliant on the use of middlemen (known as wasta) to operate, and well-connected businesses enjoy privileged treatment. Facilitation payments are an established part of 'getting things done', despite irregular payments and gifts being criminalized. Egypt is ranked the 117th least corrupt nation out of 175 countries, according to the 2017 Corruption Perceptions Index reported by Transparency International. Egypt's corruption rank averaged 86.48 from 1996 until 2017, reaching an all-time high of 118 in 2012 and a record low of 41 in 1996. Facilitation payments are regarded as bribery in many countries, which prevents many foreign entities from financial involvement with Egypt since such payments are a required part of doing business. Corruption makes the costs of both local goods and imports higher, decreasing the purchasing power of individuals, which magnifies poverty.
Ineffective policies.
The country lacks sustainable, pragmatic policies to combat poverty. Although subsidy policies were adopted in an attempt to reduce economic burdens on the poor, they benefited the rich more, which caused further problems for the poor and increased the burdens on the government. In fact, 83 percent of the food subsidy, 76 percent of the electricity subsidy, 87 percent of the petroleum subsidy and 76 percent of the social safety net subsidy went to the non-poor instead of the poor.
The failure to use the previous IMF bailout efficiently left Egypt, by August 2022, back in the same economic condition in which it had started. Egypt sought a new loan from the International Monetary Fund (IMF) in August 2022 in order to deal with the fallout from a sudden surge in prices, which had a devastating impact on the economic rights of the Egyptian people. In July 2022, President Abdel Fattah al-Sisi requested his European allies to back him up in convincing the international financial institutions (IFIs), including the IMF, that the “situation in our country does not tolerate the applicable standards at this stage”, raising questions about which standards he meant. Meanwhile, Sisi has been criticized for historically introducing economic policies that mainly benefited the elite, instead of protecting the public from the crisis. Human Rights Watch demanded that the IMF consider the human rights record of the Sisi regime as well as the failure to use the bailout funds from the IMF and other institutions efficiently.
Regional GDP.
Data shown are for the year 2021 in nominal numbers.
Role of the military.
The Egyptian armed forces have wielded substantial influence over Egypt's economy. Military-run companies play a pivotal role across various industries, contributing significantly to public spending on housing and infrastructure, including activities such as cement and food production, as well as infrastructure development like roads and bridges. According to a study by the Carnegie Middle East Centre, the Egyptian army has control over about 25% of public spending allocated to housing and infrastructure.
Despite Egypt's commitment to reducing the military's economic impact per its agreement with the International Monetary Fund (IMF), recent developments indicate an opposing trend. The National Service Products Organization (NSPO), a firm under military ownership, is currently constructing new factories for the production of fertilizers, irrigation machines, and veterinary vaccines. The government has discussed selling stakes in the military-run companies Safi and Wataniya for two years. Despite claims of receiving offers, there are visible asset transfers, like the rebranding of Wataniya franchises into ChillOut stations. The army's expanding economic influence, from petrol stations to media, has stifled competition, hindered private investment, and contributed to slower growth, higher prices, and limited opportunities for ordinary Egyptians.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\approx "
}
] | https://en.wikipedia.org/wiki?curid=9351 |
9351265 | Friedlander–Iwaniec theorem | Infinite prime numbers of the form a^2+b^4
In analytic number theory the Friedlander–Iwaniec theorem states that there are infinitely many prime numbers of the form formula_0. The first few such primes are
2, 5, 17, 37, 41, 97, 101, 137, 181, 197, 241, 257, 277, 281, 337, 401, 457, 577, 617, 641, 661, 677, 757, 769, 821, 857, 881, 977, … (sequence in the OEIS).
The difficulty in this statement lies in the very sparse nature of this sequence: the number of integers of the form formula_1 less than formula_2 is roughly of the order formula_3.
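For illustration, the sequence above can be reproduced by brute force. The following Python sketch (not part of the original sources; the function names are ours) enumerates the primes of the form formula_1 below a given bound:

```python
def is_prime(n):
    """Simple trial-division primality test; adequate for small bounds."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def friedlander_iwaniec_primes(limit):
    """Primes below `limit` expressible as a^2 + b^4 with integers a, b >= 1."""
    found = set()
    b = 1
    while b ** 4 < limit:
        a = 1
        while a * a + b ** 4 < limit:
            value = a * a + b ** 4
            if is_prime(value):
                found.add(value)
            a += 1
        b += 1
    return sorted(found)

# Reproduces the opening terms listed above: 2, 5, 17, 37, 41, 97, 101, 137, ...
print(friedlander_iwaniec_primes(1000))
```

Requiring formula_4 to be prime, as in the refinement below, or fixing formula_4 = 1, as in the special case below, only needs an extra test inside the inner loop.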
History.
The theorem was proved in 1997 by John Friedlander and Henryk Iwaniec. Iwaniec was awarded the 2001 Ostrowski Prize in part for his contributions to this work.
Refinements.
The theorem was refined by D.R. Heath-Brown and Xiannan Li in 2017. In particular, they proved that the polynomial formula_0 represents infinitely many primes when the variable formula_4 is also required to be prime. Namely, if formula_5 denotes the number of primes less than formula_6 of the form formula_7 with formula_4 prime, then
formula_8
where
formula_9
Special case.
When "b" = 1, the Friedlander–Iwaniec primes have the form formula_10, forming the set
2, 5, 17, 37, 101, 197, 257, 401, 577, 677, 1297, 1601, 2917, 3137, 4357, 5477, 7057, 8101, 8837, 12101, 13457, 14401, 15377, … (sequence in the OEIS).
It is conjectured (one of Landau's problems) that this set is infinite. However, this is not implied by the Friedlander–Iwaniec theorem.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a^2 + b^4"
},
{
"math_id": 1,
"text": "a^2+b^4"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "X^{3/4}"
},
{
"math_id": 4,
"text": "b"
},
{
"math_id": 5,
"text": "f(n)"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "a^2 + b^4,"
},
{
"math_id": 8,
"text": "f(n) \\sim v \\frac{x^{3/4}}{\\log{x}}"
},
{
"math_id": 9,
"text": "v=2 \\sqrt{\\pi} \\frac{\\Gamma(5/4)}{\\Gamma(7/4)} \\prod_{p \\equiv 1\\bmod 4} \\frac{p-2}{p-1} \\prod_{p \\equiv 3\\bmod 4} \\frac{p}{p-1}."
},
{
"math_id": 10,
"text": "a^2+1"
}
] | https://en.wikipedia.org/wiki?curid=9351265 |
9351532 | Froude–Krylov force | Hydrodynamic force from the pressure field generated by undisturbed waves
In fluid dynamics, the Froude–Krylov force—sometimes also called the Froude–Kriloff force—is a hydrodynamical force named after William Froude and Alexei Krylov. The Froude–Krylov force is the force introduced by the unsteady pressure field generated by "undisturbed" waves. The Froude–Krylov force does, together with the diffraction force, make up the total non-viscous forces acting on a floating body in regular waves. The diffraction force is due to the floating body disturbing the waves.
Formulas.
The Froude–Krylov force can be calculated from:
formula_0
where formula_1 is the Froude–Krylov force, formula_2 is the wetted surface of the floating body, formula_3 is the pressure in the undisturbed waves, and formula_4 is the body's normal vector pointing into the water.
In the simplest case the formula may be expressed as the product of the wetted surface area (A) of the floating body, and the dynamic pressure acting from the waves on the body:
formula_5
The dynamic pressure, formula_6, close to the surface, is given by:
formula_7
where formula_8 is the density of sea water, formula_9 is the acceleration due to gravity, and formula_10 is the wave height.
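As a rough numerical illustration of the simplified formula (not taken from the source), the following Python sketch evaluates formula_5 for assumed values of the wetted area and wave height; the seawater density of 1025 kg/m³ is likewise an assumed typical value.

```python
def froude_krylov_simple(area_m2, wave_height_m, rho=1025.0, g=9.81):
    """Simplified Froude-Krylov force estimate F = A * (rho * g * H / 2).

    area_m2       -- wetted surface area A of the body (m^2)
    wave_height_m -- wave height H (m)
    rho           -- water density (kg/m^3); 1025 is a typical seawater value
    g             -- gravitational acceleration (m/s^2)
    """
    dynamic_pressure = rho * g * wave_height_m / 2.0  # p_dyn near the surface
    return area_m2 * dynamic_pressure                 # force in newtons

# Illustrative numbers only: a 50 m^2 wetted surface in 2 m waves.
print(froude_krylov_simple(50.0, 2.0))  # about 5.0e5 N
```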
{
"math_id": 0,
"text": "\n \\vec F_{FK} = - \\iint_{S_w} p ~ \\vec n ~ ds,\n"
},
{
"math_id": 1,
"text": "\\vec F_{FK}"
},
{
"math_id": 2,
"text": "S_w"
},
{
"math_id": 3,
"text": "p"
},
{
"math_id": 4,
"text": "\\vec n"
},
{
"math_id": 5,
"text": "\nF_{FK} = A \\cdot p_{dyn} \n"
},
{
"math_id": 6,
"text": "p_{dyn}"
},
{
"math_id": 7,
"text": "\np_{dyn} = \\rho \\cdot g \\cdot H/2\n"
},
{
"math_id": 8,
"text": "\\rho"
},
{
"math_id": 9,
"text": "g"
},
{
"math_id": 10,
"text": "H"
}
] | https://en.wikipedia.org/wiki?curid=9351532 |
9353592 | Sophomore's dream |
Identity expressing an integral as a sum
In mathematics, the sophomore's dream is the pair of identities (especially the first)
formula_0
discovered in 1697 by Johann Bernoulli.
The numerical values of these constants are approximately 1.291285997... and 0.7834305107..., respectively.
The name "sophomore's dream" is in contrast to the name "freshman's dream" which is given to the incorrect identity formula_1. The sophomore's dream has a similar too-good-to-be-true feel, but is true.
Proof.
The proofs of the two identities are completely analogous, so only the proof of the second is presented here.
The key ingredients of the proof are:
In details, "x""x" can be expanded as
formula_4
Therefore,
formula_5
By uniform convergence of the power series, one may interchange summation and integration to yield
formula_6
To evaluate the above integrals, one may change the variable in the integral via the substitution formula_7 With this substitution, the bounds of integration are transformed to formula_8 giving the identity
formula_9
By Euler's integral identity for the Gamma function, one has
formula_10
so that
formula_11
Summing these (and changing indexing so it starts at "n"= 1 instead of "n" = 0) yields the formula.
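The termwise identity formula_11 can itself be checked numerically for small formula_18; the following sketch (an illustration, not part of the proof) compares a quadrature estimate of the left-hand side with the closed form on the right.

```python
from math import factorial, log
from scipy.integrate import quad

# Check: integral_0^1 x^n (log x)^n / n! dx == (-1)^n (n+1)^(-(n+1))
for n in range(6):
    numeric, _ = quad(lambda x, n=n: x ** n * log(x) ** n / factorial(n), 0, 1)
    closed_form = (-1) ** n * (n + 1) ** -(n + 1)
    print(n, numeric, closed_form)
```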
Historical proof.
The original proof, given in Bernoulli, and presented in modernized form in Dunham, differs from the one above in how the termwise integral formula_12 is computed, but is otherwise the same, omitting technical details to justify steps (such as termwise integration). Rather than integrating by substitution, yielding the Gamma function (which was not yet known), Bernoulli used integration by parts to iteratively compute these terms.
The integration by parts proceeds as follows, varying the two exponents independently to obtain a recursion. An indefinite integral is computed initially, omitting the constant of integration formula_13 both because this was done historically, and because it drops out when computing the definite integral.
Integrating formula_14 by substituting formula_15 and formula_16 yields:
formula_17
(also in the list of integrals of logarithmic functions). This reduces the power on the logarithm in the integrand by 1 (from formula_18 to formula_19) and thus one can compute the integral inductively, as
formula_20
where formula_21 denotes the falling factorial; there is a finite sum because the induction stops at 0, since n is an integer.
In this case formula_22, and they are integers, so
formula_23
Integrating from 0 to 1, all the terms vanish except the last term at 1, which yields:
formula_24
This is equivalent to computing Euler's integral identity formula_25 for the Gamma function on a different domain (corresponding to changing variables by substitution), as Euler's identity itself can also be computed via an analogous integration by parts.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Formula.
<templatestyles src="Refbegin/styles.css" />
Function.
<templatestyles src="Refbegin/styles.css" />
Footnotes
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{alignat}{2}\n & \\int_0^1 x^{-x}\\,dx &&= \\sum_{n=1}^\\infty n^{-n} \\\\\n & \\int_0^1 x^x \\,dx &&= \\sum_{n=1}^\\infty (-1)^{n+1}n^{-n} = - \\sum_{n=1}^\\infty (-n)^{-n}\n\\end{alignat}"
},
{
"math_id": 1,
"text": "(x+y)^n=x^n+y^n"
},
{
"math_id": 2,
"text": "x^x = \\exp(x\\ln x)"
},
{
"math_id": 3,
"text": "\\exp(x\\ln x)"
},
{
"math_id": 4,
"text": "x^x = \\exp(x \\log x) = \\sum_{n=0}^\\infty \\frac{x^n(\\log x)^n}{n!}."
},
{
"math_id": 5,
"text": "\\int_0^1 x^x\\,dx = \\int_0^1 \\sum_{n=0}^\\infty \\frac{x^n(\\log x)^n}{n!} \\,dx. "
},
{
"math_id": 6,
"text": "\\int_0^1 x^x\\,dx = \\sum_{n=0}^\\infty \\int_0^1 \\frac{x^n(\\log x)^n}{n!} \\,dx. "
},
{
"math_id": 7,
"text": "x=\\exp(-\\frac{u}{n+1})."
},
{
"math_id": 8,
"text": "0 < u < \\infty, "
},
{
"math_id": 9,
"text": "\\int_0^1 x^n(\\log x)^n\\,dx = (-1)^n (n+1)^{-(n+1)} \\int_0^\\infty u^n e^{-u}\\,du."
},
{
"math_id": 10,
"text": "\\int_0^\\infty u^n e^{-u}\\,du=n!,"
},
{
"math_id": 11,
"text": "\\int_0^1 \\frac{x^n (\\log x)^n}{n!}\\,dx\n= (-1)^n (n+1)^{-(n+1)}."
},
{
"math_id": 12,
"text": "\\int_0^1 x^n(\\log x)^n\\,dx"
},
{
"math_id": 13,
"text": "+ C"
},
{
"math_id": 14,
"text": "\\int x^m (\\log x)^n\\,dx"
},
{
"math_id": 15,
"text": "u = (\\log x)^n"
},
{
"math_id": 16,
"text": "dv=x^m\\,dx"
},
{
"math_id": 17,
"text": "\\begin{align}\n\\int x^m (\\log x)^n\\,dx\n& = \\frac{x^{m+1}(\\log x)^n}{m+1} - \\frac{n}{m+1}\\int x^{m+1} \\frac{(\\log x)^{n-1}}{x}\\,dx \\qquad\\text{(for }m\\neq -1\\text{)}\\\\\n& = \\frac{x^{m+1}}{m+1}(\\log x)^n - \\frac{n}{m+1}\\int x^m (\\log x)^{n-1}\\,dx \\qquad\\text{(for }m\\neq -1\\text{)}\n\\end{align}\n"
},
{
"math_id": 18,
"text": "n"
},
{
"math_id": 19,
"text": "n-1"
},
{
"math_id": 20,
"text": "\n\\int x^m (\\log x)^n\\,dx\n= \\frac{x^{m+1}}{m+1}\n \\cdot \\sum_{i=0}^n (-1)^i \\frac{(n)_i}{(m+1)^i} (\\log x)^{n-i}"
},
{
"math_id": 21,
"text": "(n)_i"
},
{
"math_id": 22,
"text": "m=n"
},
{
"math_id": 23,
"text": "\\int x^n (\\log x)^n\\,dx\n= \\frac{x^{n+1}}{n+1}\n \\cdot \\sum_{i=0}^n (-1)^i \\frac{(n)_i}{(n+1)^i} (\\log x)^{n-i}."
},
{
"math_id": 24,
"text": "\\int_0^1 \\frac{x^n (\\log x)^n}{n!}\\,dx\n= \\frac{1}{n!}\\frac{1^{n+1}}{n+1}\n (-1)^n \\frac{(n)_n}{(n+1)^n} = (-1)^n (n+1)^{-(n+1)}."
},
{
"math_id": 25,
"text": "\\Gamma(n+1) = n!"
}
] | https://en.wikipedia.org/wiki?curid=9353592 |
935451 | Likelihood ratios in diagnostic testing | Likelihood ratios used for assessing the value of performing a diagnostic test
In evidence-based medicine, likelihood ratios are used for assessing the value of performing a diagnostic test. They use the sensitivity and specificity of the test to determine whether a test result usefully changes the probability that a condition (such as a disease state) exists. The first description of the use of likelihood ratios for decision rules was made at a symposium on information theory in 1954. In medicine, likelihood ratios were introduced between 1975 and 1980.
Calculation.
Two versions of the likelihood ratio exist, one for positive and one for negative test results. Respectively, they are known as the <templatestyles src="Template:Visible anchor/styles.css" />positive likelihood ratio (LR+, likelihood ratio positive, likelihood ratio for positive results) and <templatestyles src="Template:Visible anchor/styles.css" />negative likelihood ratio (LR–, likelihood ratio negative, likelihood ratio for negative results).
The positive likelihood ratio is calculated as
formula_0
which is equivalent to
formula_1
or "the probability of a person who has the disease testing positive divided by the probability of a person who does not have the disease testing positive."
Here ""T"+" or ""T"−" denote that the result of the test is positive or negative, respectively. Likewise, ""D"+" or ""D"−" denote that the disease is present or absent, respectively. So "true positives" are those that test positive ("T"+) and have the disease ("D"+), and "false positives" are those that test positive ("T"+) but do not have the disease ("D"−).
The negative likelihood ratio is calculated as
formula_2
which is equivalent to
formula_3
or "the probability of a person who has the disease testing negative divided by the probability of a person who does not have the disease testing negative."
The calculation of likelihood ratios for tests with continuous values or more than two outcomes is similar to the calculation for dichotomous outcomes; a separate likelihood ratio is simply calculated for every level of test result and is called interval or stratum specific likelihood ratios.
The pretest odds of a particular diagnosis, multiplied by the likelihood ratio, determine the post-test odds. This calculation is based on Bayes' theorem. (Note that odds can be calculated from, and then converted to, probability.)
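A minimal Python sketch of these definitions (not from the source; the sensitivity and specificity values used below are illustrative) is:

```python
def likelihood_ratios(sensitivity, specificity):
    """Return (LR+, LR-) for a test with the given sensitivity and specificity."""
    lr_plus = sensitivity / (1 - specificity)   # P(T+|D+) / P(T+|D-)
    lr_minus = (1 - sensitivity) / specificity  # P(T-|D+) / P(T-|D-)
    return lr_plus, lr_minus

# Illustrative values: sensitivity 66.7%, specificity 91%.
print(likelihood_ratios(0.667, 0.91))  # approximately (7.4, 0.37)
```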
Application to medicine.
Pretest probability refers to the chance that an individual in a given population has a disorder or condition; this is the baseline probability prior to the use of a diagnostic test. Post-test probability refers to the probability that a condition is truly present given a positive test result. For a good test in a population, the post-test probability will be meaningfully higher or lower than the pretest probability. A high likelihood ratio indicates a good test for a population, and a likelihood ratio close to one indicates that a test may not be appropriate for a population.
For a screening test, the population of interest might be the general population of an area. For diagnostic testing, the ordering clinician will have observed some symptom or other factor that raises the pretest probability relative to the general population. A likelihood ratio of greater than 1 for a test in a population indicates that a positive test result is evidence that a condition is present. If the likelihood ratio for a test in a population is not clearly better than one, the test will not provide good evidence: the post-test probability will not be meaningfully different from the pretest probability. Knowing or estimating the likelihood ratio for a test in a population allows a clinician to better interpret the result.
Research suggests that physicians rarely make these calculations in practice, however, and when they do, they often make errors. A randomized controlled trial that compared how well physicians interpreted diagnostic tests presented as either sensitivity and specificity, a likelihood ratio, or an inexact graphic of the likelihood ratio found no difference between the three modes in interpretation of test results.
Estimation table.
This table provides examples of how changes in the likelihood ratio affect the post-test probability of disease.
Calculation example.
A medical example is the likelihood that a given test result would be expected in a patient with a certain disorder compared to the likelihood that same result would occur in a patient without the target disorder.
Some sources distinguish between LR+ and LR−. A worked example is shown below.
Related calculations
This hypothetical screening test (fecal occult blood test) correctly identified two-thirds (66.7%) of patients with colorectal cancer. Unfortunately, factoring in prevalence rates reveals that this hypothetical test has a high false positive rate, and it does not reliably identify colorectal cancer in the overall population of asymptomatic people (PPV = 10%).
On the other hand, this hypothetical test demonstrates very accurate detection of cancer-free individuals (NPV ≈ 99.5%). Therefore, when used for routine colorectal cancer screening with asymptomatic adults, a negative result supplies important data for the patient and doctor, such as ruling out cancer as the cause of gastrointestinal symptoms or reassuring patients worried about developing colorectal cancer.
Confidence intervals for all the predictive parameters involved can be calculated, giving the range of values within which the true value lies at a given confidence level (e.g. 95%).
Estimation of pre- and post-test probability.
The likelihood ratio of a test provides a way to estimate the pre- and post-test probabilities of having a condition.
With "pre-test probability" and "likelihood ratio" given, then, the "post-test probabilities" can be calculated by the following three steps:
formula_4
formula_5
In equation above, "positive post-test probability" is calculated using the "likelihood ratio positive", and the "negative post-test probability" is calculated using the "likelihood ratio negative".
Odds are converted to probabilities as follows:
formula_6
multiply equation (1) by (1 − probability)
formula_7
add (probability × odds) to equation (2)
formula_8
divide equation (3) by (1 + odds)
formula_9
hence post-test probability = post-test odds / (1 + post-test odds).
Alternatively, post-test probability can be calculated directly from the pre-test probability and the likelihood ratio using the equation: post-test probability = pre-test probability × LR / (1 − pre-test probability + pre-test probability × LR).
In fact, "post-test probability", as estimated from the "likelihood ratio" and "pre-test probability", is generally more accurate than if estimated from the "positive predictive value" of the test, if the tested individual has a different "pre-test probability" than what is the "prevalence" of that condition in the population.
Example.
Taking the medical example from above (20 true positives, 10 false negatives, and 2030 total patients), the "positive pre-test probability" is calculated as: pre-test probability = (20 + 10) / 2030 = 0.0148.
As demonstrated, the "positive post-test probability" is numerically equal to the "positive predictive value"; the "negative post-test probability" is numerically equal to (1 − "negative predictive value").
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\text{LR}+ = \\frac{\\text{sensitivity}}{1 - \\text{specificity}} "
},
{
"math_id": 1,
"text": " \\text{LR}+ = \\frac{\\Pr({T+}\\mid D+)}{\\Pr({T+}\\mid D-)} "
},
{
"math_id": 2,
"text": " \\text{LR}- = \\frac{1 - \\text{sensitivity}}{\\text{specificity}} "
},
{
"math_id": 3,
"text": " \\text{LR}- = \\frac{\\Pr({T-}\\mid D+)}{\\Pr({T-}\\mid D-)} "
},
{
"math_id": 4,
"text": " \\text{pretest odds} = \\frac{\\text{pretest probability}}{1 - \\text{pretest probability}} "
},
{
"math_id": 5,
"text": " \\text{posttest odds} = \\text{pretest odds} \\times \\text{likelihood ratio} "
},
{
"math_id": 6,
"text": "\\begin{align}(1)\\ \\text{ odds} = \\frac{\\text{probability}}{1-\\text{probability}} \n\\end{align}"
},
{
"math_id": 7,
"text": "\\begin{align}(2)\\ \\text{ probability} & = \\text{odds} \\times (1 - \\text{probability}) \\\\\n& = \\text{odds} - \\text{probability} \\times \\text{odds}\n\\end{align}"
},
{
"math_id": 8,
"text": "\\begin{align}(3)\\ \\text{ probability} + \\text{probability} \\times \\text{odds} & = \\text{odds} \\\\\n\\text{probability} \\times (1 + \\text{odds}) & = \\text{odds}\n\\end{align}"
},
{
"math_id": 9,
"text": "\\begin{align}(4)\\ \\text{ probability} = \\frac{\\text{odds}}{1 + \\text{odds}}\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=935451 |
935515 | Contingency table | Table that displays the frequency of variables
In statistics, a contingency table (also known as a cross tabulation or crosstab) is a type of table in a matrix format that displays the multivariate frequency distribution of the variables. They are heavily used in survey research, business intelligence, engineering, and scientific research. They provide a basic picture of the interrelation between two variables and can help find interactions between them. The term "contingency table" was first used by Karl Pearson in "On the Theory of Contingency and Its Relation to Association and Normal Correlation", part of the "Drapers' Company Research Memoirs Biometric Series I" published in 1904.
A crucial problem of multivariate statistics is finding the (direct-)dependence structure underlying the variables contained in high-dimensional contingency tables. If some of the conditional independences are revealed, then even the storage of the data can be done in a smarter way (see Lauritzen (2002)). In order to do this one can use information theory concepts, which gain the information only from the distribution of probability, which can be expressed easily from the contingency table by the relative frequencies.
A pivot table is a way to create contingency tables using spreadsheet software.
Example.
Suppose there are two variables, sex (male or female) and handedness (right- or left-handed). Further suppose that 100 individuals are randomly sampled from a very large population as part of a study of sex differences in handedness. A contingency table can be created to display the numbers of individuals who are male right-handed and left-handed, female right-handed and left-handed. Such a contingency table is shown below.
The numbers of the males, females, and right- and left-handed individuals are called marginal totals. The grand total (the total number of individuals represented in the contingency table) is the number in the bottom right corner.
The table allows users to see at a glance that the proportion of men who are right-handed is about the same as the proportion of women who are right-handed although the proportions are not identical. The strength of the association can be measured by the odds ratio, and the population odds ratio estimated by the sample odds ratio. The significance of the difference between the two proportions can be assessed with a variety of statistical tests including Pearson's chi-squared test, the "G"-test, Fisher's exact test, Boschloo's test, and Barnard's test, provided the entries in the table represent individuals randomly sampled from the population about which conclusions are to be drawn. If the proportions of individuals in the different columns vary significantly between rows (or vice versa), it is said that there is a "contingency" between the two variables. In other words, the two variables are "not" independent. If there is no contingency, it is said that the two variables are "independent".
The example above is the simplest kind of contingency table, a table in which each variable has only two levels; this is called a 2 × 2 contingency table. In principle, any number of rows and columns may be used. There may also be more than two variables, but higher order contingency tables are difficult to represent visually. The relation between ordinal variables, or between ordinal and categorical variables, may also be represented in contingency tables, although such a practice is rare. For more on the use of a contingency table for the relation between two ordinal variables, see Goodman and Kruskal's gamma.
Measures of association.
The degree of association between the two variables can be assessed by a number of coefficients. The following subsections describe a few of them. For a more complete discussion of their uses, see the main articles linked under each subsection heading.
Odds ratio.
The simplest measure of association for a 2 × 2 contingency table is the odds ratio. Given two events, A and B, the odds ratio is defined as the ratio of the odds of A in the presence of B and the odds of A in the absence of B, or equivalently (due to symmetry), the ratio of the odds of B in the presence of A and the odds of B in the absence of A. Two events are independent if and only if the odds ratio is 1; if the odds ratio is greater than 1, the events are positively associated; if the odds ratio is less than 1, the events are negatively associated.
The odds ratio has a simple expression in terms of probabilities; given the joint probability distribution:
formula_0
the odds ratio is:
formula_1
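As an illustration, the sample odds ratio can be computed directly from the four cell counts of a 2 × 2 table. The Python sketch below uses assumed handedness counts chosen only for illustration; they are not quoted from the table above.
def odds_ratio(n11, n10, n01, n00):
    # cross-product ratio of the cell counts (rows A = 1, A = 0; columns B = 1, B = 0)
    return (n11 * n00) / (n10 * n01)

# assumed counts: 43 right-handed males, 9 left-handed males,
# 44 right-handed females, 4 left-handed females
print(odds_ratio(43, 9, 44, 4))   # ~0.43 for these assumed counts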
Phi coefficient.
A simple measure, applicable only to the case of 2 × 2 contingency tables, is the phi coefficient (φ) defined by
formula_2
where χ2 is computed as in Pearson's chi-squared test, and "N" is the grand total of observations. φ varies from 0 (corresponding to no association between the variables) to 1 or −1 (complete association or complete inverse association), provided it is based on frequency data represented in 2 × 2 tables. Then its sign equals the sign of the product of the main diagonal elements of the table minus the product of the off–diagonal elements. φ takes on the minimum value −1.0 or the maximum value of +1.0 if and only if every marginal proportion is equal to 0.5 (and two diagonal cells are empty).
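A minimal Python sketch of this calculation is shown below; it computes Pearson's χ2 from the observed and expected counts and applies the sign rule described above. The 2 × 2 counts are the same assumed, illustrative values used earlier.
import numpy as np

def phi_coefficient(table):
    t = np.asarray(table, dtype=float)
    n = t.sum()
    expected = np.outer(t.sum(axis=1), t.sum(axis=0)) / n   # row total x column total / N
    chi2 = ((t - expected) ** 2 / expected).sum()            # Pearson's chi-squared statistic
    sign = np.sign(t[0, 0] * t[1, 1] - t[0, 1] * t[1, 0])    # sign of the diagonal product difference
    return sign * np.sqrt(chi2 / n)

print(phi_coefficient([[43, 9], [44, 4]]))   # ~ -0.13 for the assumed counts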
Cramér's "V" and the contingency coefficient "C".
Two alternatives are the "contingency coefficient" "C", and Cramér's V.
The formulae for the "C" and "V" coefficients are:
formula_3 and
formula_4
"k" being the number of rows or the number of columns, whichever is less.
"C" suffers from the disadvantage that it does not reach a maximum of 1.0, notably the highest it can reach in a 2 × 2 table is 0.707 . It can reach values closer to 1.0 in contingency tables with more categories; for example, it can reach a maximum of 0.870 in a 4 × 4 table. It should, therefore, not be used to compare associations in different tables if they have different numbers of categories.
"C" can be adjusted so it reaches a maximum of 1.0 when there is complete association in a table of any number of rows and columns by dividing "C" by formula_5 where "k" is the number of rows or columns, when the table is square , or by formula_6 where "r" is the number of rows and "c" is the number of columns.
Tetrachoric correlation coefficient.
Another choice is the tetrachoric correlation coefficient but it is only applicable to 2 × 2 tables. Polychoric correlation is an extension of the tetrachoric correlation to tables involving variables with more than two levels.
Tetrachoric correlation assumes that the variable underlying each dichotomous measure is normally distributed. The coefficient provides "a convenient measure of [the Pearson product-moment] correlation when graduated measurements have been reduced to two categories."
The tetrachoric correlation coefficient should not be confused with the Pearson correlation coefficient computed by assigning, say, values 0.0 and 1.0 to represent the two levels of each variable (which is mathematically equivalent to the φ coefficient).
Lambda coefficient.
The lambda coefficient is a measure of the strength of association of the cross tabulations when the variables are measured at the nominal level. Values range from 0.0 (no association) to 1.0 (the maximum possible association).
Asymmetric lambda measures the percentage improvement in predicting the dependent variable. Symmetric lambda measures the percentage improvement when prediction is done in both directions.
Uncertainty coefficient.
The uncertainty coefficient, or Theil's U, is another measure for variables at the nominal level. Its values range from −1.0 (100% negative association, or perfect inversion) to +1.0 (100% positive association, or perfect agreement). A value of 0.0 indicates the absence of association.
Also, the uncertainty coefficient is conditional and an asymmetrical measure of association, which can be expressed as
formula_7.
This asymmetrical property can lead to insights not as evident in symmetrical measures of association.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{array}{c|cc}\n& B = 1 & B = 0 \\\\\n\\hline\nA = 1 & p_{11} & p_{10} \\\\\nA = 0 & p_{01} & p_{00}\n\\end{array}\n"
},
{
"math_id": 1,
"text": "OR = \\frac{p_{11}p_{00}}{p_{10}p_{01}}."
},
{
"math_id": 2,
"text": "\\phi=\\pm\\sqrt{\\frac{\\chi^2}{N}},"
},
{
"math_id": 3,
"text": "C=\\sqrt{\\frac{\\chi^2}{N+\\chi^2}}"
},
{
"math_id": 4,
"text": "V=\\sqrt{\\frac{\\chi^2}{N(k-1)}},"
},
{
"math_id": 5,
"text": "\\sqrt{\\frac{k-1}{k}}"
},
{
"math_id": 6,
"text": "\\sqrt[\\scriptstyle 4]{{r - 1 \\over r} \\times {c - 1 \\over c}}"
},
{
"math_id": 7,
"text": "\nU(X|Y) \\neq U(Y|X)\n"
}
] | https://en.wikipedia.org/wiki?curid=935515 |
9356047 | Jackson's inequality | Inequality on approximations of a function by algebraic or trigonometric polynomials
In approximation theory, Jackson's inequality is an inequality bounding the value of function's best approximation by algebraic or trigonometric polynomials in terms of the modulus of continuity or modulus of smoothness of the function or of its derivatives. Informally speaking, the smoother the function is, the better it can be approximated by polynomials.
Statement: trigonometric polynomials.
For trigonometric polynomials, the following was proved by Dunham Jackson:
Theorem 1: If formula_0 is an formula_1 times differentiable periodic function such that
formula_2
then, for every positive integer formula_3, there exists a trigonometric polynomial formula_4 of degree at most formula_5 such that
formula_6
where formula_7 depends only on formula_1.
The Akhiezer–Krein–Favard theorem gives the sharp value of formula_7 (called the Akhiezer–Krein–Favard constant):
formula_8
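The series converges quickly, and the first few Akhiezer–Krein–Favard constants can be checked numerically; the short Python sketch below evaluates partial sums only.
from math import pi

def favard_constant(r, terms=100000):
    # partial sum of the series for C(r) given above
    return 4 / pi * sum((-1) ** (k * (r + 1)) / (2 * k + 1) ** (r + 1) for k in range(terms))

print(favard_constant(1))   # ~1.5708, i.e. pi/2
print(favard_constant(2))   # ~1.2337, i.e. pi**2/8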
Jackson also proved the following generalisation of Theorem 1:
Theorem 2: One can find a trigonometric polynomial formula_9 of degree formula_10 such that
formula_11
where formula_12 denotes the modulus of continuity of function formula_13 with the step formula_14
An even more general result of four authors can be formulated as the following Jackson theorem.
Theorem 3: For every natural number formula_3, if formula_15 is formula_16-periodic continuous function, there exists a trigonometric polynomial formula_9 of degree formula_10 such that
formula_17
where constant formula_18 depends on formula_19 and formula_20 is the formula_21-th order modulus of smoothness.
For formula_22 this result was proved by Dunham Jackson. Antoni Zygmund proved the inequality in the case when formula_23 in 1945. Naum Akhiezer proved the theorem in the case formula_24 in 1956. For formula_25 this result was established by Sergey Stechkin in 1967.
Further remarks.
Generalisations and extensions are called Jackson-type theorems. A converse to Jackson's inequality is given by Bernstein's theorem. See also constructive function theory.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f:[0,2\\pi]\\to \\C"
},
{
"math_id": 1,
"text": "r"
},
{
"math_id": 2,
"text": " \\left |f^{(r)}(x) \\right | \\leq 1, \\qquad x\\in[0,2\\pi],"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "T_{n-1}"
},
{
"math_id": 5,
"text": "n-1"
},
{
"math_id": 6,
"text": "\\left |f(x) - T_{n-1}(x) \\right | \\leq \\frac{C(r)}{n^r}, \\qquad x\\in[0,2\\pi], "
},
{
"math_id": 7,
"text": "C(r)"
},
{
"math_id": 8,
"text": " C(r) = \\frac{4}{\\pi} \\sum_{k=0}^\\infty \\frac{(-1)^{k(r+1)}}{(2k+1)^{r+1}}~."
},
{
"math_id": 9,
"text": "T_n"
},
{
"math_id": 10,
"text": "\\le n"
},
{
"math_id": 11,
"text": "|f(x) - T_n(x)| \\leq \\frac{C(r) \\omega \\left (\\frac{1}{n}, f^{(r)} \\right )}{n^r}, \\qquad x\\in[0,2\\pi],"
},
{
"math_id": 12,
"text": "\\omega(\\delta, g)"
},
{
"math_id": 13,
"text": "g"
},
{
"math_id": 14,
"text": "\\delta."
},
{
"math_id": 15,
"text": "f"
},
{
"math_id": 16,
"text": "2\\pi"
},
{
"math_id": 17,
"text": "|f(x)-T_n(x)|\\leq c(k)\\omega_k\\left(\\tfrac{1}{n},f\\right),\\qquad x\\in[0,2\\pi],"
},
{
"math_id": 18,
"text": "c(k)"
},
{
"math_id": 19,
"text": "k\\in\\N,"
},
{
"math_id": 20,
"text": "\\omega_k"
},
{
"math_id": 21,
"text": "k"
},
{
"math_id": 22,
"text": "k=1"
},
{
"math_id": 23,
"text": "k=2, \\omega_2(t,f)\\le ct, t>0"
},
{
"math_id": 24,
"text": "k=2"
},
{
"math_id": 25,
"text": "k>2"
}
] | https://en.wikipedia.org/wiki?curid=9356047 |
935621 | Harold Hotelling | American statistician and econometrician (1895-1973)
Harold Hotelling (; September 29, 1895 – December 26, 1973) was an American mathematical statistician and an influential economic theorist, known for Hotelling's law, Hotelling's lemma, and Hotelling's rule in economics, as well as Hotelling's T-squared distribution in statistics. He also developed and named the principal component analysis method widely used in finance, statistics and computer science.
He was associate professor of mathematics at Stanford University from 1927 until 1931, a member of the faculty of Columbia University from 1931 until 1946, and a professor of Mathematical Statistics at the University of North Carolina at Chapel Hill from 1946 until his death. A street in Chapel Hill bears his name. In 1972, he received the North Carolina Award for contributions to science.
Statistics.
Hotelling is known to statisticians because of Hotelling's T-squared distribution which is a generalization of the Student's t-distribution in multivariate setting, and its use in statistical hypothesis testing and confidence regions. He also introduced canonical correlation analysis.
At the beginning of his statistical career Hotelling came under the influence of R.A. Fisher, whose "Statistical Methods for Research Workers" had "revolutionary importance", according to Hotelling's review. Hotelling was able to maintain professional relations with Fisher, despite the latter's temper tantrums and polemics. Hotelling suggested that Fisher use the English word "cumulants" for Thiele's Danish "semi-invariants". Fisher's emphasis on the sampling distribution of a statistic was extended by Jerzy Neyman and Egon Pearson with greater precision and wider applications, which Hotelling recognized. Hotelling sponsored refugees from European anti-semitism and Nazism, welcoming Henry Mann and Abraham Wald to his research group at Columbia. While at Hotelling's group, Wald developed sequential analysis and statistical decision theory, which Hotelling described as "pragmatism in action".
In the United States, Hotelling is known for his leadership of the statistics profession, in particular for his vision of a statistics department at a university, which convinced many universities to start statistics departments. Hotelling was known for his leadership of departments at Columbia University and the University of North Carolina.
Economics.
Hotelling has a crucial place in the growth of mathematical economics; several areas of active research were influenced by his economics papers. While at the University of Washington, he was encouraged to switch from pure mathematics toward mathematical economics by the famous mathematician Eric Temple Bell. Later, at Columbia University (where during 1933-34 he taught Milton Friedman statistics) in the '40s, Hotelling in turn encouraged young Kenneth Arrow to switch from mathematics and statistics applied to actuarial studies towards more general applications of mathematics in general economic theory. Hotelling is the eponym of Hotelling's law, Hotelling's lemma, and Hotelling's rule in economics.
Hotelling was influenced by the writing of Henry George and was an editorial adviser for the Georgist journal AJES.
Spatial economics.
One of Hotelling's most important contributions to economics was his conception of "spatial economics" in his 1929 article. Space was not just a barrier to moving goods around, but rather a field upon which competitors jostled to be nearest to their customers.
Hotelling considers a situation in which there are two sellers at points A and B of a line segment of length l. The buyers are distributed uniformly along this line segment and carry the merchandise to their home at cost c per unit distance. Let p1 and p2 be the prices charged by A and B, and let the line segment be divided into three parts of size a, x+y and b, where x+y is the size of the segment between A and B, "a" the portion of the segment to the left of A and "b" the portion of the segment to the right of B. Therefore, a+x+y+b=l. Since the product being sold is a commodity, the point of indifference to buying is given by p1+cx=p2+cy. Solving for x and y yields:
formula_0
formula_1
Let q1 and q2 indicate the quantities sold by A and B. The sellers profit are:
formula_2
formula_3
By imposing profit maximization:
formula_4
formula_5
Hotelling obtains the economic equilibrium. Hotelling argues this equilibrium is stable even though the sellers may try to establish a price cartel.
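The equilibrium prices can be recovered symbolically. The small Python/SymPy sketch below simply reproduces the algebra above (it is an illustration, not part of Hotelling's original presentation):
import sympy as sp

p1, p2, a, b, c, l = sp.symbols('p1 p2 a b c l', positive=True)
pi1 = sp.Rational(1, 2) * (l + a - b) * p1 - p1**2 / (2 * c) + p1 * p2 / (2 * c)   # profit of seller A
pi2 = sp.Rational(1, 2) * (l - a + b) * p2 - p2**2 / (2 * c) + p1 * p2 / (2 * c)   # profit of seller B
solution = sp.solve([sp.diff(pi1, p1), sp.diff(pi2, p2)], [p1, p2])
print(sp.simplify(solution[p1]))   # equivalent to c*(l + (a - b)/3)
print(sp.simplify(solution[p2]))   # equivalent to c*(l + (b - a)/3)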
Hotelling extrapolates from his findings about spatial economics and links it to not just physical distance, but also similarity in products. He describes how, for example, some factories might make shoes for the poor and others for the rich, but they end up alike. He also quips that, "Methodists and Presbyterian churches are too much alike; cider too homogenous."
Market socialism and Georgism.
As an extension of his research in spatial economics, Hotelling realized that it would be possible and socially optimal to finance investment in public goods through a Georgist land value tax and then provide such goods and services to the public at marginal cost (in many cases for free). This is an early expression of the Henry George theorem that Joseph Stiglitz and others expanded upon. Hotelling pointed out that when local public goods like roads and trains become congested, users create an additional marginal cost of excluding others. Hotelling became an early advocate of Georgist congestion pricing and stated that the purpose of this unique type of toll fee was in no way to recoup investment costs, but was instead a way of changing behavior and compensating those who are excluded. Hotelling describes how human attention is also in limited supply at any given time and place, which produces a rental value; he concludes that billboards could be regulated or taxed on similar grounds as other scarcity rents. Hotelling reasoned that rent and taxation were analogous, the public and private versions of a similar thing. Therefore, the social optimum would be to put taxes directly on rent. Kenneth Arrow described this as market socialism, but Mason Gaffney points out that it is actually Georgism. Hotelling added the following comment about the ethics of Georgist value capture: "The proposition that there is no ethical objection to the confiscation of the site value of land by taxation, if and when the nonlandowning classes can get the power to do so, has been ably defended by [the Georgist] H. G. Brown."
Non-convexities.
Hotelling made pioneering studies of non-convexity in economics. In economics, "non-convexity" refers to violations of the convexity assumptions of elementary economics. Basic economics textbooks concentrate on consumers with convex preferences and convex budget sets and on producers with convex production sets; for convex models, the predicted economic behavior is well understood. When convexity assumptions are violated, then many of the good properties of competitive markets need not hold: Thus, non-convexity is associated with market failures, where supply and demand differ or where market equilibria can be inefficient.
Producers with increasing returns to scale: marginal cost pricing.
In "oligopolies" (markets dominated by a few producers), especially in "monopolies" (markets dominated by one producer), non-convexities remain important. Concerns with large producers exploiting market power initiated the literature on non-convex sets, when Piero Sraffa wrote about firms with increasing returns to scale in 1926, after which Hotelling wrote about marginal cost pricing in 1938. Both Sraffa and Hotelling illuminated the market power of producers without competitors, clearly stimulating a literature on the supply-side of the economy.
Consumers with non-convex preferences.
When the consumer's preference set is non-convex, then (for some prices) the consumer's demand is not connected. A disconnected demand implies some discontinuous behavior by the consumer as discussed by Hotelling:
<templatestyles src="Template:Blockquote/styles.css" />
Following Hotelling's pioneering research on non-convexities in economics, research in economics has recognized non-convexity in new areas of economics. In these areas, non-convexity is associated with market failures, where any equilibrium need not be efficient or where no equilibrium exists because supply and demand differ. Non-convex sets arise also with environmental goods and other externalities, and with market failures, and public economics.
Non-convexities occur also with information economics, and with stock markets (and other incomplete markets). Such applications continued to motivate economists to study non-convex sets.
References.
<templatestyles src="Reflist/styles.css" />
External links.
The following have photographs: | [
{
"math_id": 0,
"text": "x=\\frac{1}{2}\\left( l-a-b+\\frac{p_{2}-p_{1}}{c} \\right)"
},
{
"math_id": 1,
"text": "y=\\frac{1}{2}\\left( l-a-b+\\frac{p_{1}-p_{2}}{c} \\right)"
},
{
"math_id": 2,
"text": "\\pi_{1}=p_{1}q_{1}=p_{1}\\left( a+x \\right)=\\frac{1}{2}\\left( l+a-b \\right)p_{1}-\\frac{p_{1}^{2}}{2c}+\\frac{p_{1}p_{2}}{2c}"
},
{
"math_id": 3,
"text": "\\pi_{2}=p_{2}q_{2}=p_{2}\\left( b+y \\right)=\\frac{1}{2}\\left( l-a+b \\right)p_{2}-\\frac{p_{2}^{2}}{2c}+\\frac{p_{1}p_{2}}{2c}"
},
{
"math_id": 4,
"text": "\\frac{\\partial \\pi_{1}}{\\partial p_{1}}=\\frac{1}{2}\\left( l+a-b \\right)-\\frac{p_{1}}{c}+\\frac{p_{2}}{2c}=0"
},
{
"math_id": 5,
"text": "\\frac{\\partial \\pi_{2}}{\\partial p_{2}}=\\frac{1}{2}\\left( l+a-b \\right)-\\frac{p_{1}}{2c}+\\frac{p_{2}}{c}=0"
}
] | https://en.wikipedia.org/wiki?curid=935621 |
935655 | Binomial test | Test of statistical significance
The binomial test is an exact test of the statistical significance of deviations from a theoretically expected distribution of observations into two categories, using sample data.
Usage.
The binomial test is useful to test hypotheses about the probability (formula_0) of success:
formula_1
where formula_2 is a user-defined value between 0 and 1.
If in a sample of size formula_3 there are formula_4 successes, while we expect formula_5, the formula of the binomial distribution gives the probability of finding this value:
formula_6
If the null hypothesis formula_7 were correct, then the expected number of successes would be formula_5. We find our formula_8-value for this test by considering the probability of seeing an outcome as, or more, extreme. For a one-tailed test, this is straightforward to compute. Suppose that we want to test if formula_9. Then our formula_8-value would be,
formula_10
An analogous computation can be done if we're testing if formula_11 using the summation of the range from formula_4 to formula_3 instead.
Calculating a formula_8-value for a two-tailed test is slightly more complicated, since a binomial distribution isn't symmetric if formula_12. This means that we can't just double the formula_8-value from the one-tailed test. Recall that we want to consider events that are as, or more, extreme than the one we've seen, so we should consider the probability that we would see an event that is as, or less, likely than formula_13. Let formula_14 denote all such events. Then the two-tailed formula_8-value is calculated as,
formula_15
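A direct Python sketch of these definitions is given below; it enumerates the binomial probability mass function and sums the relevant terms. The fair-coin call at the end is only an illustrative check.
from math import comb

def binomial_pvalues(k, n, p):
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    lower = sum(pmf[:k + 1])                            # P(X <= k), one-tailed test of pi < pi0
    upper = sum(pmf[k:])                                # P(X >= k), one-tailed test of pi > pi0
    two_tailed = sum(q for q in pmf if q <= pmf[k])     # sum over the set I defined above
    return lower, upper, two_tailed

print(binomial_pvalues(7, 10, 0.5))   # (0.9453125, 0.171875, 0.34375)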
Common use.
One common use of the binomial test is the case where the null hypothesis is that the two categories occur with equal frequency (formula_16), such as a coin toss. Tables are widely available giving the significance of the observed numbers of observations in the categories for this case. However, as the example below shows, the binomial test is not restricted to this case.
When there are more than two categories, and an exact test is required, the multinomial test, based on the multinomial distribution, must be used instead of the binomial test.
Large samples.
For large samples such as the example below, the binomial distribution is well approximated by convenient continuous distributions, and these are used as the basis for alternative tests that are much quicker to compute, such as Pearson's chi-squared test and the G-test. However, for small samples these approximations break down, and there is no alternative to the binomial test.
The most usual (and easiest) approximation is through the standard normal distribution, in which a z-test is performed of the test statistic formula_17, given by
formula_18
where formula_4 is the number of successes observed in a sample of size formula_3 and formula_0 is the probability of success according to the null hypothesis. An improvement on this approximation is possible by introducing a continuity correction:
formula_19
For very large formula_3, this continuity correction will be unimportant, but for intermediate values, where the exact binomial test doesn't work, it will yield a substantially more accurate result.
In notation in terms of a measured sample proportion formula_20, null hypothesis for the proportion formula_21, and sample size formula_3, where formula_22 and formula_23, one may rearrange and write the z-test above as
formula_24
by dividing by formula_3 in both numerator and denominator, which is a form that may be more familiar to some readers.
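A short Python sketch of the normal approximation is shown below, with the ±1/2 correction taken so as to reduce the deviation from formula_5 (the usual convention). The final two lines use the dice-rolling numbers from the example in the next section (51 sixes in 235 rolls).
from math import sqrt

def binomial_z(k, n, p, continuity_correction=True):
    deviation = k - n * p
    if continuity_correction:                 # shrink the deviation by half a count
        if deviation > 0:
            deviation -= 0.5
        elif deviation < 0:
            deviation += 0.5
    return deviation / sqrt(n * p * (1 - p))

print(binomial_z(51, 235, 1/6))               # ~1.98 with the continuity correction
print(binomial_z(51, 235, 1/6, False))        # ~2.07 without it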
Example.
Suppose we have a board game that depends on the roll of one die and attaches special importance to rolling a 6. In a particular game, the die is rolled 235 times, and 6 comes up 51 times. If the die is fair, we would expect 6 to come up
formula_25
times. We have now observed that the number of 6s is higher than what we would expect on average by pure chance had the die been a fair one. But, is the number significantly high enough for us to conclude anything about the fairness of the die? This question can be answered by the binomial test. Our null hypothesis would be that the die is fair (probability of each number coming up on the die is 1/6).
To find an answer to this question using the binomial test, we use the binomial distribution
formula_26 with pmf formula_27.
As we have observed a value greater than the expected value, we could consider the probability of observing 51 6s or higher under the null, which would constitute a one-tailed test (here we are basically testing whether this die is biased towards generating more 6s than expected). In order to calculate the probability of 51 or more 6s in a sample of 235 under the null hypothesis we add up the probabilities of getting exactly 51 6s, exactly 52 6s, and so on up to probability of getting exactly 235 6s:
formula_28
If we have a significance level of 5%, then this result (0.02654 < 5%) indicates that we have evidence that is significant enough to reject the null hypothesis that the die is fair.
Normally, when we are testing for fairness of a die, we are also interested if the die is biased towards generating fewer 6s than expected, and not only more 6s as we considered in the one-tailed test above. In order to consider both the biases, we use a two-tailed test. Note that to do this we cannot simply double the one-tailed p-value unless the probability of the event is 1/2. This is because the binomial distribution becomes asymmetric as that probability deviates from 1/2. There are two methods to define the two-tailed p-value. One method is to sum the probability that the total deviation in numbers of events in either direction from the expected value is either more than or less than the expected value. The probability of that occurring in our example is 0.0437. The second method involves computing the probability that the deviation from the expected value is as unlikely or more unlikely than the observed value, i.e. from a comparison of the probability density functions. This can create a subtle difference, but in this example yields the same probability of 0.0437. In both cases, the two-tailed test reveals significance at the 5% level, indicating that the number of 6s observed was significantly different for this die than the expected number at the 5% level.
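Using the same enumeration as in the sketch above, these values can be reproduced directly with a self-contained Python check:
from math import comb

pmf = [comb(235, i) * (1/6)**i * (5/6)**(235 - i) for i in range(236)]
print(sum(pmf[51:]))                          # ~0.02654, the one-tailed p-value computed above
print(sum(q for q in pmf if q <= pmf[51]))    # ~0.0437, the two-tailed p-value quoted above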
In statistical software packages.
Binomial tests are available in most software used for statistical purposes, e.g. in SAS:
PROC FREQ DATA=DiceRoll ;
TABLES Roll / BINOMIAL (P=0.166667) ALPHA=0.05 ;
EXACT BINOMIAL ;
WEIGHT Freq ;
RUN;
and in SPSS:
npar tests
/binomial (.5) = node1 node2.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\pi"
},
{
"math_id": 1,
"text": "H_0\\colon\\pi=\\pi_0"
},
{
"math_id": 2,
"text": "\\pi_0"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "n\\pi_0"
},
{
"math_id": 6,
"text": "\\Pr(X=k)=\\binom{n}{k}p^k(1-p)^{n-k}"
},
{
"math_id": 7,
"text": "H_0"
},
{
"math_id": 8,
"text": "p"
},
{
"math_id": 9,
"text": "\\pi<\\pi_0"
},
{
"math_id": 10,
"text": "p = \\sum_{i=0}^k\\Pr(X=i)=\\sum_{i=0}^k\\binom{n}{i}\\pi_0^i(1-\\pi_0)^{n-i}"
},
{
"math_id": 11,
"text": "\\pi>\\pi_0"
},
{
"math_id": 12,
"text": "\\pi_0\\neq 0.5"
},
{
"math_id": 13,
"text": "X=k"
},
{
"math_id": 14,
"text": "\\mathcal{I}=\\{i\\colon\\Pr(X=i)\\leq \\Pr(X=k)\\}"
},
{
"math_id": 15,
"text": "p = \\sum_{i\\in\\mathcal{I}}\\Pr(X=i)=\\sum_{i\\in\\mathcal{I}}\\binom{n}{i}\\pi_0^i(1-\\pi_0)^{n-i}"
},
{
"math_id": 16,
"text": "H_0\\colon\\pi=0.5"
},
{
"math_id": 17,
"text": "Z"
},
{
"math_id": 18,
"text": "Z=\\frac{k-n\\pi}{\\sqrt{n\\pi(1-\\pi)}}"
},
{
"math_id": 19,
"text": "Z=\\frac{k-n\\pi\\pm \\frac{1}{2}}{\\sqrt{n\\pi(1-\\pi)}}"
},
{
"math_id": 20,
"text": "\\hat{p}"
},
{
"math_id": 21,
"text": "p_0"
},
{
"math_id": 22,
"text": "\\hat{p}=k/n"
},
{
"math_id": 23,
"text": "p_0=\\pi"
},
{
"math_id": 24,
"text": " Z=\\frac{ \\hat{p}-p_0 } { \\sqrt{ \\frac{p_0(1-p_0)}{n} } }"
},
{
"math_id": 25,
"text": "235\\times1/6 = 39.17"
},
{
"math_id": 26,
"text": "B(N=235, p=1/6)"
},
{
"math_id": 27,
"text": "f(k,n,p) = \\Pr(k;n,p) = \\Pr(X = k) = \\binom{n}{k}p^k(1-p)^{n-k}"
},
{
"math_id": 28,
"text": "\\sum_{i=51}^{235} {235\\choose i}p^i(1-p)^{235-i} = 0.02654"
}
] | https://en.wikipedia.org/wiki?curid=935655 |
9357898 | Constraint (computational chemistry) | Method for satisfying the Newtonian motion of a rigid body which consists of mass points
In computational chemistry, a constraint algorithm is a method for satisfying the Newtonian motion of a rigid body which consists of mass points. A restraint algorithm is used to ensure that the distance between mass points is maintained. The general steps involved are: (i) choose novel unconstrained coordinates (internal coordinates), (ii) introduce explicit constraint forces, (iii) minimize constraint forces implicitly by the technique of Lagrange multipliers or projection methods.
Constraint algorithms are often applied to molecular dynamics simulations. Although such simulations are sometimes performed using internal coordinates that automatically satisfy the bond-length, bond-angle and torsion-angle constraints, simulations may also be performed using explicit or implicit constraint forces for these three constraints. However, explicit constraint forces give rise to inefficiency; more computational power is required to get a trajectory of a given length. Therefore, internal coordinates and implicit-force constraint solvers are generally preferred.
Constraint algorithms achieve computational efficiency by neglecting motion along some degrees of freedom. For instance, in atomistic molecular dynamics, typically the length of covalent bonds to hydrogen are constrained; however, constraint algorithms should not be used if vibrations along these degrees of freedom are important for the phenomenon being studied.
Mathematical background.
The motion of a set of "N" particles can be described by a set of second-order ordinary differential equations, Newton's second law, which can be written in matrix form
formula_0
where M is a "mass matrix" and q is the vector of generalized coordinates that describe the particles' positions. For example, the vector q may be a "3N" Cartesian coordinates of the particle positions r"k", where "k" runs from 1 to "N"; in the absence of constraints, M would be the "3N"x"3N" diagonal square matrix of the particle masses. The vector f represents the generalized forces and the scalar "V"(q) represents the potential energy, both of which are functions of the generalized coordinates q.
If "M" constraints are present, the coordinates must also satisfy "M" time-independent algebraic equations
formula_1
where the index "j" runs from 1 to "M". For brevity, these functions "g""i" are grouped into an "M"-dimensional vector g below. The task is to solve the combined set of differential-algebraic (DAE) equations, instead of just the ordinary differential equations (ODE) of Newton's second law.
This problem was studied in detail by Joseph Louis Lagrange, who laid out most of the methods for solving it. The simplest approach is to define new generalized coordinates that are unconstrained; this approach eliminates the algebraic equations and reduces the problem once again to solving an ordinary differential equation. Such an approach is used, for example, in describing the motion of a rigid body; the position and orientation of a rigid body can be described by six independent, unconstrained coordinates, rather than describing the positions of the particles that make it up and the constraints among them that maintain their relative distances. The drawback of this approach is that the equations may become unwieldy and complex; for example, the mass matrix M may become non-diagonal and depend on the generalized coordinates.
A second approach is to introduce explicit forces that work to maintain the constraint; for example, one could introduce strong spring forces that enforce the distances among mass points within a "rigid" body. The two difficulties of this approach are that the constraints are not satisfied exactly, and the strong forces may require very short time-steps, making simulations inefficient computationally.
A third approach is to use a method such as Lagrange multipliers or projection to the constraint manifold to determine the coordinate adjustments necessary to satisfy the constraints.
Finally, there are various hybrid approaches in which different sets of constraints are satisfied by different methods, e.g., internal coordinates, explicit forces and implicit-force solutions.
Internal coordinate methods.
The simplest approach to satisfying constraints in energy minimization and molecular dynamics is to represent the mechanical system in so-called "internal coordinates" corresponding to unconstrained independent degrees of freedom of the system. For example, the dihedral angles of a protein are an independent set of coordinates that specify the positions of all the atoms without requiring any constraints. The difficulty of such internal-coordinate approaches is twofold: the Newtonian equations of motion become much more complex and the internal coordinates may be difficult to define for cyclic systems of constraints, e.g., in ring puckering or when a protein has a disulfide bond.
The original methods for efficient recursive energy minimization in internal coordinates were developed by Gō and coworkers.
Efficient recursive, internal-coordinate constraint solvers were extended to molecular dynamics. Analogous methods were applied later to other systems.
Lagrange multiplier-based methods.
In most of molecular dynamics simulations that use constraint algorithms, constraints are enforced using the method of Lagrange multipliers. Given a set of "n" linear (holonomic) constraints at the time "t",
formula_2
where formula_3 and formula_4 are the positions of the two particles involved in the "k"th constraint at the time "t" and formula_5 is the prescribed inter-particle distance.
The forces due to these constraints are added in the equations of motion, resulting in, for each of the "N" particles in the system
formula_6
Adding the constraint forces does not change the total energy, as the net work done by the constraint forces (taken over the set of particles that the constraints act on) is zero. Note that the sign on formula_7 is arbitrary and some references have an opposite sign.
From integrating both sides of the equation with respect to the time, the constrained coordinates of particles at the time, formula_8, are given,
formula_9
where formula_10 is the unconstrained (or uncorrected) position of the "i"th particle after integrating the unconstrained equations of motion.
To satisfy the constraints formula_11 in the next timestep, the Lagrange multipliers should be determined as the following equation,
formula_12
This implies solving a system of formula_13 non-linear equations
formula_14
simultaneously for the formula_13 unknown Lagrange multipliers formula_7.
This system of formula_13 non-linear equations in formula_13 unknowns is commonly solved using Newton–Raphson method where the solution vector formula_15 is updated using
formula_16
where formula_17 is the Jacobian of the equations σ"k":
formula_18
Since not all particles contribute to all of constraints, formula_17 is a block matrix and can be solved individually to block-unit of the matrix. In other words, formula_17 can be solved individually for each molecule.
Instead of constantly updating the vector formula_15, the iteration can be started with formula_19, resulting in simpler expressions for formula_20 and formula_21. In this case
formula_22
then formula_23 is updated to
formula_24
After each iteration, the unconstrained particle positions are updated using
formula_25
The vector is then reset to
formula_26
The above procedure is repeated until the solution of constraint equations, formula_27, converges to a prescribed tolerance of a numerical error.
Although there are a number of algorithms to compute the Lagrange multipliers, they differ only in how they solve the resulting system of equations, commonly using quasi-Newton methods.
The SETTLE algorithm.
The SETTLE algorithm solves the system of non-linear equations analytically for formula_28 constraints in constant time. Although it does not scale to larger numbers of constraints, it is very often used to constrain rigid water molecules, which are present in almost all biological simulations and are usually modelled using three constraints (e.g. SPC/E and TIP3P water models).
The SHAKE algorithm.
The SHAKE algorithm was first developed for satisfying a bond geometry constraint during molecular dynamics simulations. The method was then generalised to handle any holonomic constraint, such as those required to maintain constant bond angles, or molecular rigidity.
In SHAKE algorithm, the system of non-linear constraint equations is solved using the Gauss–Seidel method which approximates the solution of the linear system of equations using the Newton–Raphson method;
formula_29
This amounts to assuming that formula_17 is diagonally dominant and solving the formula_30th equation only for the formula_30th unknown. In practice, we compute
formula_31
for all formula_32 iteratively until the constraint equations formula_27 are solved to a given tolerance.
The calculation cost of each iteration is formula_33, and the iterations themselves converge linearly.
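A minimal Python/NumPy sketch of this iteration for bond-length constraints is given below. It is an illustration of the scheme described above, not a reference implementation; the function and variable names are chosen only for readability.
import numpy as np

def shake(x_new, x_old, bonds, masses, tol=1e-10, max_iter=500):
    # x_new: positions after the unconstrained step; x_old: positions at the previous step
    # bonds: list of (i, j, d) triples; masses: NumPy array of particle masses
    x = x_new.copy()
    inv_m = 1.0 / masses
    for _ in range(max_iter):
        converged = True
        for i, j, d in bonds:
            r = x[i] - x[j]                      # current (corrected-so-far) separation
            diff = np.dot(r, r) - d * d          # sigma_k = |r|^2 - d^2
            if abs(diff) > tol:
                converged = False
                r_old = x_old[i] - x_old[j]      # constraint gradient taken at the old positions
                g = diff / (2.0 * (inv_m[i] + inv_m[j]) * np.dot(r, r_old))
                x[i] -= g * inv_m[i] * r_old     # mass-weighted corrections along the old bond
                x[j] += g * inv_m[j] * r_old
        if converged:
            return x
    raise RuntimeError("SHAKE did not converge")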
A noniterative form of SHAKE was developed later on.
Several variants of the SHAKE algorithm exist. Although they differ in how they compute or apply the constraints themselves, the constraints are still modelled using Lagrange multipliers which are computed using the Gauss–Seidel method.
The original SHAKE algorithm is capable of constraining both rigid and flexible molecules (e.g. water, benzene and biphenyl) and introduces negligible error or energy drift into a molecular dynamics simulation. One issue with SHAKE is that the number of iterations required to reach a certain level of convergence rises as molecular geometry becomes more complex. To reach 64-bit computer accuracy (a relative tolerance of formula_34) in a typical molecular dynamics simulation at a temperature of 310 K, a 3-site water model having 3 constraints to maintain molecular geometry requires an average of 9 iterations (which is 3 per site per time-step). A 4-site butane model with 5 constraints needs 17 iterations (22 per site), a 6-site benzene model with 12 constraints needs 36 iterations (72 per site), while a 12-site biphenyl model with 29 constraints requires 92 iterations (229 per site per time-step). Hence the CPU requirements of the SHAKE algorithm can become significant, particularly if a molecular model has a high degree of rigidity.
A later extension of the method, QSHAKE (Quaternion SHAKE) was developed as a faster alternative for molecules composed of rigid units, but it is not as general purpose. It works satisfactorily for "rigid" loops such as aromatic ring systems but QSHAKE fails for flexible loops, such as when a protein has a disulfide bond.
Further extensions include RATTLE, WIGGLE, and MSHAKE.
RATTLE works the same way as SHAKE but uses the velocity Verlet time integration scheme; WIGGLE extends SHAKE and RATTLE by using an initial estimate for the Lagrange multipliers formula_7 based on the particle velocities. MSHAKE computes corrections on the constraint "forces", achieving better convergence.
A final modification to the SHAKE algorithm is the P-SHAKE algorithm that is applied to very rigid or semi-rigid molecules. P-SHAKE computes and updates a pre-conditioner which is applied to the constraint gradients before the SHAKE iteration, causing the Jacobian formula_17 to become diagonal or strongly diagonally dominant. The thus de-coupled constraints converge much faster (quadratically as opposed to linearly) at a cost of formula_35.
The M-SHAKE algorithm.
The M-SHAKE algorithm solves the non-linear system of equations using Newton's method directly. In each iteration, the linear system of equations
formula_36
is solved exactly using an LU decomposition. Each iteration costs formula_37 operations, yet the solution converges quadratically, requiring fewer iterations than SHAKE.
This solution was first proposed in 1986 by Ciccotti and Ryckaert under the title "the matrix method", but differed in the solution of the linear system of equations. Ciccotti and Ryckaert suggest inverting the matrix formula_17 directly, yet doing so only once, in the first iteration. The first iteration then costs formula_37 operations, whereas the following iterations cost only formula_35 operations (for the matrix-vector multiplication). This improvement comes at a cost, though: since the Jacobian is no longer updated, convergence is only linear, albeit at a much faster rate than for the SHAKE algorithm.
Several variants of this approach based on sparse matrix techniques were studied by Barth "et al.".
SHAPE algorithm.
The SHAPE algorithm is a multicenter analog of SHAKE for constraining rigid bodies of three or more centers. Like SHAKE, an unconstrained step is taken and then corrected by directly calculating and applying the rigid body rotation matrix that satisfies:
formula_38
This approach involves a single 3×3 matrix diagonalization followed by three or four rapid Newton iterations to determine the rotation matrix. SHAPE provides a trajectory identical to that of fully converged iterative SHAKE, yet it is found to be more efficient and more accurate than SHAKE when applied to systems involving three or more centers. It extends the ability of SHAKE-like constraints to linear systems with three or more atoms, planar systems with four or more atoms, and to significantly larger rigid structures where SHAKE is intractable. It also allows rigid bodies to be linked with one or two common centers (e.g. peptide planes) by solving rigid body constraints iteratively in the same basic manner that SHAKE is used for atoms involving more than one SHAKE constraint.
LINCS algorithm.
An alternative constraint method, LINCS (Linear Constraint Solver) was developed in 1997 by Hess, Bekker, Berendsen and Fraaije, and was based on the 1986 method of Edberg, Evans and Morriss (EEM), and a modification thereof by Baranyai and Evans (BE).
LINCS applies Lagrange multipliers to the constraint forces and solves for the multipliers by using a series expansion to approximate the inverse of the Jacobian formula_17:
formula_39
in each step of the Newton iteration. This approximation only works for matrices with eigenvalues smaller than 1, making the LINCS algorithm suitable only for molecules with low connectivity.
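A toy numerical check of that statement (illustrative only, not taken from the LINCS paper) is sketched below in Python: for a coupling matrix whose eigenvalues are well below 1, the truncated series already approximates the inverse closely.
import numpy as np

J = np.array([[0.0, 0.2, 0.0],
              [0.2, 0.0, 0.2],
              [0.0, 0.2, 0.0]])            # weak coupling between neighbouring constraints (assumed values)
I = np.eye(3)
exact = np.linalg.inv(I - J)
series = I + J + J @ J + J @ J @ J         # first four terms of the expansion
print(np.max(np.abs(exact - series)))      # ~7e-3, the truncation error for this J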
LINCS has been reported to be 3–4 times faster than SHAKE.
Hybrid methods.
Hybrid methods have also been introduced in which the constraints are divided into two groups; the constraints of the first group are solved using internal coordinates whereas those of the second group are solved using constraint forces, e.g., by a Lagrange multiplier or projection method. This approach was pioneered by Lagrange, and results in "Lagrange equations of the mixed type".
References and footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\mathbf{M} \\cdot \\frac{d^{2}\\mathbf{q}}{dt^{2}} = \\mathbf{f} = -\\frac{\\partial V}{\\partial \\mathbf{q}}\n"
},
{
"math_id": 1,
"text": "\ng_{j}(\\mathbf{q}) = 0 \n"
},
{
"math_id": 2,
"text": "\\sigma_k(t) := \\| \\mathbf x_{k\\alpha}(t) - \\mathbf x_{k\\beta}(t) \\|^2 - d_k^2 = 0, \\quad k=1 \\ldots n"
},
{
"math_id": 3,
"text": "\\scriptstyle \\mathbf x_{k\\alpha}(t)"
},
{
"math_id": 4,
"text": "\\scriptstyle\\mathbf x_{k\\beta}(t)"
},
{
"math_id": 5,
"text": "d_k"
},
{
"math_id": 6,
"text": "\\frac{\\partial^2 \\mathbf x_i(t)}{\\partial t^2} m_i = -\\frac{\\partial}{\\partial \\mathbf x_i} \\left[ V(\\mathbf x_i(t)) - \\sum_{k=1}^n \\lambda_k \\sigma_k(t) \\right], \\quad i=1 \\ldots N."
},
{
"math_id": 7,
"text": "\\lambda_k"
},
{
"math_id": 8,
"text": "t + \\Delta t"
},
{
"math_id": 9,
"text": "\\mathbf x_i(t + \\Delta t) = \\hat{\\mathbf x}_i(t + \\Delta t) + \\sum_{k=1}^n \\lambda_k \\frac{\\partial\\sigma_k(t)}{\\partial \\mathbf x_i}\\left(\\Delta t\\right)^2m_i^{-1}, \\quad i=1 \\ldots N"
},
{
"math_id": 10,
"text": "\\hat{\\mathbf x}_i(t + \\Delta t)"
},
{
"math_id": 11,
"text": "\\sigma_k(t + \\Delta t)"
},
{
"math_id": 12,
"text": "\\sigma_k(t + \\Delta t) := \\left\\| \\mathbf x_{k\\alpha}(t+\\Delta t) - \\mathbf x_{k\\beta}(t+\\Delta t)\\right\\|^2 - d_k^2 = 0."
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "\\sigma_j(t + \\Delta t) := \\left\\| \\hat{\\mathbf x}_{j\\alpha}(t+\\Delta t) - \\hat{\\mathbf x}_{j\\beta}(t+\\Delta t) + \\sum_{k=1}^n \\lambda_k \\left(\\Delta t\\right)^2 \\left[ \\frac{\\partial\\sigma_k(t)}{\\partial \\mathbf x_{j\\alpha}}m_{j\\alpha}^{-1} - \\frac{\\partial\\sigma_k(t)}{\\partial \\mathbf x_{j\\beta}}m_{j\\beta}^{-1}\\right] \\right\\|^2 - d_j^2 = 0, \\quad j = 1 \\ldots n"
},
{
"math_id": 15,
"text": "\\underline{\\lambda}"
},
{
"math_id": 16,
"text": "\\underline{\\lambda}^{(l+1)} \\leftarrow \\underline{\\lambda}^{(l)} - \\mathbf J_\\sigma^{-1} \\underline{\\sigma}(t+\\Delta t)"
},
{
"math_id": 17,
"text": "\\mathbf J_\\sigma"
},
{
"math_id": 18,
"text": "\\mathbf J = \\left( \\begin{array}{cccc}\n \\frac{\\partial\\sigma_1}{\\partial\\lambda_1} & \\frac{\\partial\\sigma_1}{\\partial\\lambda_2} & \\cdots & \\frac{\\partial\\sigma_1}{\\partial\\lambda_n} \\\\[5pt]\n \\frac{\\partial\\sigma_2}{\\partial\\lambda_1} & \\frac{\\partial\\sigma_2}{\\partial\\lambda_2} & \\cdots & \\frac{\\partial\\sigma_2}{\\partial\\lambda_n} \\\\[5pt]\n \\vdots & \\vdots & \\ddots & \\vdots \\\\[5pt]\n \\frac{\\partial\\sigma_n}{\\partial\\lambda_1} & \\frac{\\partial\\sigma_n}{\\partial\\lambda_2} & \\cdots & \\frac{\\partial\\sigma_n}{\\partial\\lambda_n} \\end{array}\\right)."
},
{
"math_id": 19,
"text": "\\underline{\\lambda}^{(0)} = \\mathbf 0"
},
{
"math_id": 20,
"text": "\\sigma_k(t)"
},
{
"math_id": 21,
"text": "\\frac{\\partial \\sigma_k(t)}{\\partial \\lambda_j}"
},
{
"math_id": 22,
"text": " J_{ij} = \\left.\\frac{\\partial\\sigma_j}{\\partial\\lambda_i}\\right|_{\\mathbf \\lambda=0} = 2\\left[\\hat{x}_{j\\alpha} - \\hat{x}_{j\\beta}\\right]\\left[\\frac{\\partial \\sigma_i}{\\partial x_{j\\alpha}} m_{j\\alpha}^{-1} - \\frac{\\partial \\sigma_i}{\\partial x_{j\\beta}} m_{j\\beta}^{-1} \\right]. "
},
{
"math_id": 23,
"text": "\\lambda"
},
{
"math_id": 24,
"text": " \\mathbf \\lambda_j = - \\mathbf J^{-1}\\left[ \\left\\| \\hat{\\mathbf x}_{j\\alpha}(t+\\Delta t) - \\hat{\\mathbf x}_{j\\beta}(t+\\Delta t)\\right\\|^2 - d_j^2\\right]."
},
{
"math_id": 25,
"text": "\\hat{\\mathbf x}_i(t+\\Delta t) \\leftarrow \\hat{\\mathbf x}_i(t+\\Delta t) + \\sum_{k=1}^n \\lambda_k\\frac{\\partial \\sigma_k}{\\partial \\mathbf x_i} \\left(\\Delta t\\right)^2m_i^{-1}."
},
{
"math_id": 26,
"text": "\\underline{\\lambda} = \\mathbf 0."
},
{
"math_id": 27,
"text": "\\sigma_k(t+\\Delta t)"
},
{
"math_id": 28,
"text": "n=3"
},
{
"math_id": 29,
"text": "\\underline{\\lambda} = -\\mathbf J_\\sigma^{-1} \\underline{\\sigma}."
},
{
"math_id": 30,
"text": "k"
},
{
"math_id": 31,
"text": "\n\\begin{align}\n\\lambda_k & \\leftarrow \\frac{\\sigma_k(t)}{\\partial \\sigma_k(t)/\\partial \\lambda_k}, \\\\[5pt]\n\\mathbf x_{k\\alpha} & \\leftarrow \\mathbf x_{k\\alpha} + \\lambda_k \\frac{\\partial \\sigma_k(t)}{\\partial \\mathbf x_{k\\alpha}}, \\\\[5pt]\n\\mathbf x_{k\\beta} & \\leftarrow \\mathbf x_{k\\beta} + \\lambda_k \\frac{\\partial \\sigma_k(t)}{\\partial \\mathbf x_{k\\beta}},\n\\end{align}\n"
},
{
"math_id": 32,
"text": "k=1\\ldots n"
},
{
"math_id": 33,
"text": "\\mathcal O(n)"
},
{
"math_id": 34,
"text": "\\approx 10^{-16}"
},
{
"math_id": 35,
"text": "\\mathcal O(n^2)"
},
{
"math_id": 36,
"text": "\\underline{\\lambda} = -\\mathbf J_\\sigma^{-1} \\underline{\\sigma}"
},
{
"math_id": 37,
"text": "\\mathcal O(n^3)"
},
{
"math_id": 38,
"text": " L^\\text{rigid} \\left( t + \\frac{\\Delta t} 2 \\right) = L^\\text{nonrigid} \\left( t + \\frac{\\Delta t} 2 \\right)"
},
{
"math_id": 39,
"text": "(\\mathbf I - \\mathbf J_\\sigma)^{-1} = \\mathbf I + \\mathbf J_\\sigma + \\mathbf J_\\sigma^2 + \\mathbf J_\\sigma^3 + \\cdots"
}
] | https://en.wikipedia.org/wiki?curid=9357898 |
935830 | Kelvin wave | Type of wave in the ocean or atmosphere
A Kelvin wave is a wave in the ocean, a large lake or the atmosphere that balances the Earth's Coriolis force against a topographic boundary such as a coastline, or a waveguide such as the equator. A feature of a Kelvin wave is that it is non-dispersive, i.e., the phase speed of the wave crests is equal to the group speed of the wave energy for all frequencies. This means that it retains its shape as it moves in the alongshore direction over time.
In fluid dynamics, a Kelvin wave is also a long-scale perturbation mode of a vortex in superfluid dynamics. In terms of the meteorological or oceanographical derivation, one may assume that the meridional velocity component vanishes (i.e. there is no flow in the north–south direction, thus making the momentum and continuity equations much simpler). This wave is named after the discoverer, Lord Kelvin (1879).
Coastal Kelvin wave.
In a stratified ocean of mean depth "H", whose height is perturbed by some amount "η" (a function of position and time), free waves propagate along coastal boundaries (and hence become trapped in the vicinity of the coast itself) in the form of Kelvin waves. These waves are called coastal Kelvin waves. Using the assumption that the cross-shore velocity "v" is zero at the coast, "v" = 0, one may solve a frequency relation for the phase speed of coastal Kelvin waves, which are among the class of waves called boundary waves, edge waves, trapped waves, or surface waves (similar to the Lamb waves). Assuming that the depth "H" is constant, the (linearised) primitive equations then become the following:
formula_0
formula_1
formula_2
in which "f" is the Coriolis coefficient, which depends on the latitude φ:
formula_3
where Ω = 2π / (86164 s) ≈ 7.29 × 10⁻⁵ rad/s is the angular speed of rotation of the earth.
If one assumes that "u", the flow perpendicular to the coast, is zero, then the primitive equations become the following:
formula_4
formula_5
formula_6
The first and third of these equations are solved at constant "x" by waves moving in either the positive or negative "y" direction at a speed formula_7 the speed of so-called shallow-water gravity waves without the effect of Earth's rotation. However, only one of the two solutions is valid, having an amplitude that decreases with distance from the coast, whereas in the other solution the amplitude increases with distance from the coast. For an observer traveling with the wave, the coastal boundary (maximum amplitude) is always to the right in the northern hemisphere and to the left in the southern hemisphere (i.e. these waves move equatorward – negative phase speed – at the western side of an ocean and poleward – positive phase speed – at the eastern boundary; the waves move cyclonically around an ocean basin).
If we assume constant "f", the general solution is an arbitrary wave form formula_8 propagating at speed "c" multiplied by formula_9 with the sign chosen so that the amplitude decreases with distance from the coast.
Equatorial Kelvin wave.
Kelvin waves can also exist going eastward parallel to the equator. Although waves can cross the equator, the Kelvin wave solution does not. The primitive equations are identical to those used to develop the coastal Kelvin wave solution (U-momentum, V-momentum, and continuity equations). Because these waves are equatorial, the Coriolis parameter vanishes at 0 degrees; therefore, it is necessary to use the equatorial beta plane approximation:
formula_10
where "β" is the variation of the Coriolis parameter with latitude. The wave speed is identical to that of coastal Kelvin waves (for the same depth "H"), indicating that the equatorial Kelvin waves propagate toward the east without dispersion (as if the earth were a non-rotating planet). The dependence of the amplitude on "x" (here the north-south direction) though is now formula_11
For a depth of four kilometres, the wave speed, formula_12 is about 200 metres per second, but for the first baroclinic mode in the ocean, a typical phase speed would be about 2.8 m/s, causing an equatorial Kelvin wave to take 2 months to cross the Pacific Ocean between New Guinea and South America; for higher ocean and atmospheric modes, the phase speeds are comparable to fluid flow speeds.
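These figures are easy to reproduce. The short Python check below uses g = 9.81 m/s² and an assumed equatorial Pacific width of roughly 15,000 km; the width is an illustrative value, not taken from the text.
from math import sqrt

g, H = 9.81, 4000.0                     # gravity (m/s^2) and ocean depth (m)
print(sqrt(g * H))                      # ~198 m/s, the barotropic speed quoted above
c_baroclinic = 2.8                      # m/s, typical first baroclinic mode phase speed
width = 15.0e6                          # assumed Pacific width in metres
print(width / c_baroclinic / 86400)     # ~62 days, i.e. about two months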
When the wave at the Equator is moving to the east, a height gradient going downwards toward the north is countered by a force toward the Equator because the water will be moving eastward and the Coriolis force acts to the right of the direction of motion in the Northern Hemisphere, and vice versa in the Southern Hemisphere. Note that for a wave moving toward the west, the Coriolis force would not restore a northward or southward deviation back toward the Equator; thus, equatorial Kelvin waves are only possible for eastward motion (as noted above). Both atmospheric and oceanic equatorial Kelvin waves play an important role in the dynamics of El Nino-Southern Oscillation, by transmitting changes in conditions in the Western Pacific to the Eastern Pacific.
There have been studies that connect equatorial Kelvin waves to coastal Kelvin waves. Moore (1968) found that as an equatorial Kelvin wave strikes an "eastern boundary", part of the energy is reflected in the form of planetary and gravity waves; and the remainder of the energy is carried poleward along the eastern boundary as coastal Kelvin waves. This process indicates that some energy may be lost from the equatorial region and transported to the poleward region.
Equatorial Kelvin waves are often associated with anomalies in surface wind stress. For example, positive (eastward) anomalies in wind stress in the central Pacific excite positive anomalies in 20 °C isotherm depth which propagate to the east as equatorial Kelvin waves.
In 2017, using data from ERA5, equatorial Kelvin waves were shown to be a case of classical topologically protected excitations, similar to those found in a topological insulator.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\partial u}{\\partial x} + \\frac{\\partial v}{\\partial y} = \\frac{-1}{H} \\frac{\\partial \\eta}{\\partial t}"
},
{
"math_id": 1,
"text": "\\frac{\\partial u}{\\partial t} = - g \\frac{\\partial \\eta}{\\partial x} + f v"
},
{
"math_id": 2,
"text": "\\frac{\\partial v}{\\partial t} = - g \\frac{\\partial \\eta}{\\partial y} - f u."
},
{
"math_id": 3,
"text": "f = 2\\,\\Omega\\,\\sin \\phi"
},
{
"math_id": 4,
"text": "\\frac{\\partial v}{\\partial y} = \\frac{-1}{H} \\frac{\\partial \\eta}{\\partial t}"
},
{
"math_id": 5,
"text": "g \\frac{\\partial \\eta}{\\partial x} = f v"
},
{
"math_id": 6,
"text": "\\frac{\\partial v}{\\partial t} = - g \\frac{\\partial \\eta}{\\partial y}"
},
{
"math_id": 7,
"text": "c=\\sqrt{gH},"
},
{
"math_id": 8,
"text": "W(y-ct)"
},
{
"math_id": 9,
"text": "\\exp(\\pm fy/\\sqrt{gH}),"
},
{
"math_id": 10,
"text": "f = \\beta y,"
},
{
"math_id": 11,
"text": "\\exp(-x^2\\beta/(2\\sqrt{gH}))."
},
{
"math_id": 12,
"text": "\\sqrt{gH},"
}
] | https://en.wikipedia.org/wiki?curid=935830 |
935979 | Tesla (unit) | SI unit of magnetic field strength
<templatestyles src="Template:Infobox/styles-images.css" />
The tesla (symbol: T) is the unit of magnetic flux density (also called magnetic B-field strength) in the International System of Units (SI).
One tesla is equal to one weber per square metre. The unit was announced during the General Conference on Weights and Measures in 1960 and is named in honour of Serbian-American electrical and mechanical engineer Nikola Tesla, upon the proposal of the Slovenian electrical engineer France Avčin.
Definition.
A particle, carrying a charge of one coulomb (C), and moving perpendicularly through a magnetic field of one tesla, at a speed of one metre per second (m/s), experiences a force with magnitude one newton (N), according to the Lorentz force law. That is,
formula_0
As an SI derived unit, the tesla can also be expressed in terms of other units. For example, a magnetic flux of 1 weber (Wb) through a surface of one square meter is equal to a magnetic flux density of 1 tesla. That is,
formula_1
Expressed only in SI base units, 1 tesla is:
formula_2
where A is ampere, kg is kilogram, and s is second.
Additional equivalences result from the derivation of coulombs from amperes (A), formula_3:
formula_4
the relationship between newtons and joules (J), formula_5:
formula_6
and the derivation of the weber from volts (V), formula_7:
formula_8
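As a plain numerical illustration of the defining relation (not a general unit-conversion routine), the Lorentz force F = qvB can be evaluated directly; the electron speed used below is an arbitrary illustrative value.
<syntaxhighlight lang="python">
# Lorentz force magnitude on a charge moving perpendicular to a magnetic field: F = q * v * B.
# The defining case of the tesla: a 1 C charge at 1 m/s in a 1 T field experiences 1 N.
q, v, B = 1.0, 1.0, 1.0          # coulombs, metres per second, teslas
print(q * v * B)                 # 1.0 newton

# An electron at an assumed illustrative speed of 10^6 m/s in the same 1 T field:
q_e = 1.602176634e-19            # elementary charge, C
print(q_e * 1.0e6 * B)           # ~1.6e-13 N
</syntaxhighlight>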
The tesla is named after Nikola Tesla. As with every SI unit named for a person, its symbol starts with an upper case letter (T), but when written in full, it follows the rules for capitalisation of a common noun; i.e., "tesla" becomes capitalised at the beginning of a sentence and in titles but is otherwise in lower case.
Electric vs. magnetic field.
In the production of the Lorentz force, the difference between electric fields and magnetic fields is that a force from a magnetic field on a charged particle is generally due to the charged particle's movement, while the force imparted by an electric field on a charged particle is not due to the charged particle's movement. This may be appreciated by looking at the units for each. The unit of electric field in the MKS system of units is newtons per coulomb, N/C, while the magnetic field (in teslas) can be written as N/(C⋅m/s). The dividing factor between the two types of field is metres per second (m/s), which is velocity. This relationship immediately highlights the fact that whether a static electromagnetic field is seen as purely magnetic, or purely electric, or some combination of these, is dependent upon one's reference frame (that is, one's velocity relative to the field).
In ferromagnets, the movement creating the magnetic field is the electron spin (and to a lesser extent electron orbital angular momentum). In a current-carrying wire (electromagnets) the movement is due to electrons moving through the wire (whether the wire is straight or circular).
Conversion to non-SI units.
One tesla is equivalent to:
<templatestyles src="Plainlist/styles.css"/>* 10,000 (or 10⁴) G (gauss), used in the CGS system. Thus, 1 G = 10⁻⁴ T = 100 μT (microtesla).
For the relation to the units of the magnetising field (ampere per metre or Oersted), see the article on permeability.
Examples.
The following examples are listed in the ascending order of the magnetic-field strength.
Notes and references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{T = \\dfrac{N{\\cdot}s}{C{\\cdot}m}}."
},
{
"math_id": 1,
"text": "\\mathrm{T = \\dfrac{Wb}{m^2}}."
},
{
"math_id": 2,
"text": "\\mathrm{T = \\dfrac{kg}{A{\\cdot}s^2}},"
},
{
"math_id": 3,
"text": "\\mathrm{C = A {\\cdot} s}"
},
{
"math_id": 4,
"text": "\\mathrm{T = \\dfrac{N}{A{\\cdot}m}},"
},
{
"math_id": 5,
"text": "\\mathrm{J = N {\\cdot} m}"
},
{
"math_id": 6,
"text": "\\mathrm{T = \\dfrac{J}{A{\\cdot}m^2}},"
},
{
"math_id": 7,
"text": "\\mathrm{Wb = V {\\cdot} s}"
},
{
"math_id": 8,
"text": "\\mathrm{T = \\dfrac{V{\\cdot}{s}}{m^2}}."
}
] | https://en.wikipedia.org/wiki?curid=935979 |
936371 | Jordan algebra | Not-necessarily-associative commutative algebra satisfying (𝑥𝑦)𝑥²=𝑥(𝑦𝑥²)
In abstract algebra, a Jordan algebra is a nonassociative algebra over a field whose multiplication satisfies the following axioms:
The product of two elements "x" and "y" in a Jordan algebra is also denoted "x" ∘ "y", particularly to avoid confusion with the product of a related associative algebra.
The axioms imply that a Jordan algebra is power-associative, meaning that formula_2 is independent of how we parenthesize this expression. They also imply that formula_3 for all positive integers "m" and "n". Thus, we may equivalently define a Jordan algebra to be a commutative, power-associative algebra such that for any element formula_4, the operations of multiplying by powers formula_5 all commute.
Jordan algebras were introduced by Pascual Jordan (1933) in an effort to formalize the notion of an algebra of observables in quantum electrodynamics. It was soon shown that the algebras were not useful in this context, however they have since found many applications in mathematics. The algebras were originally called "r-number systems", but were renamed "Jordan algebras" by Abraham Adrian Albert (1946), who began the systematic study of general Jordan algebras.
Special Jordan algebras.
Notice first that an associative algebra is a Jordan algebra if and only if it is commutative.
Given any associative algebra "A" (not of characteristic 2), one can construct a Jordan algebra "A"+ with the same underlying addition and a new multiplication, the Jordan product, defined by:
formula_6
These Jordan algebras and their subalgebras are called special Jordan algebras, while all others are exceptional Jordan algebras. This construction is analogous to the Lie algebra associated to "A", whose product (Lie bracket) is defined by the commutator formula_7.
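The Jordan product construction can be checked numerically; the following is a minimal sketch (matrix size and random seed are arbitrary choices) verifying commutativity and the Jordan identity for "A"+ when "A" is the associative algebra of real matrices.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def jordan(a, b):
    # the Jordan product x∘y = (xy + yx)/2 built from ordinary matrix multiplication
    return (a @ b + b @ a) / 2

x, y = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
x2 = jordan(x, x)

print(np.allclose(jordan(x, y), jordan(y, x)))                          # commutativity
print(np.allclose(jordan(jordan(x, y), x2), jordan(x, jordan(y, x2))))  # Jordan identity
# Both checks print True, as expected for the special Jordan algebra A+.
</syntaxhighlight>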
The Shirshov–Cohn theorem states that any Jordan algebra with two generators is special. Related to this, Macdonald's theorem states that any polynomial in three variables, having degree one in one of the variables, and which vanishes in every special Jordan algebra, vanishes in every Jordan algebra.
Hermitian Jordan algebras.
If ("A", "σ") is an associative algebra with an involution "σ", then if "σ"("x") = "x" and "σ"("y") = "y" it follows that formula_8 Thus the set of all elements fixed by the involution (sometimes called the "hermitian" elements) form a subalgebra of "A"+, which is sometimes denoted H("A","σ").
Examples.
1. The set of self-adjoint real, complex, or quaternionic matrices with multiplication
formula_9
form a special Jordan algebra.
2. The set of 3×3 self-adjoint matrices over the octonions, again with multiplication
formula_10
is a 27 dimensional, exceptional Jordan algebra (it is exceptional because the octonions are not associative). This was the first example of an Albert algebra. Its automorphism group is the exceptional Lie group F4. Since over the complex numbers this is the only simple exceptional Jordan algebra up to isomorphism, it is often referred to as "the" exceptional Jordan algebra. Over the real numbers there are three isomorphism classes of simple exceptional Jordan algebras.
Derivations and structure algebra.
A derivation of a Jordan algebra "A" is an endomorphism "D" of "A" such that "D"("xy") = "D"("x")"y"+"xD"("y"). The derivations form a Lie algebra der("A"). The Jordan identity implies that if "x" and "y" are elements of "A", then the endomorphism sending "z" to "x"("yz")−"y"("xz") is a derivation. Thus the direct sum of "A" and der("A") can be made into a Lie algebra, called the structure algebra of "A", str("A").
A simple example is provided by the Hermitian Jordan algebras H("A","σ"). In this case any element "x" of "A" with "σ"("x")=−"x" defines a derivation. In many important examples, the structure algebra of H("A","σ") is "A".
Derivation and structure algebras also form part of Tits' construction of the Freudenthal magic square.
Formally real Jordan algebras.
A (possibly nonassociative) algebra over the real numbers is said to be formally real if it satisfies the property that a sum of "n" squares can only vanish if each one vanishes individually. In 1932, Jordan attempted to axiomatize quantum theory by saying that the algebra of observables of any quantum system should be a formally real algebra that is commutative ("xy" = "yx") and power-associative (the associative law holds for products involving only "x", so that powers of any element "x" are unambiguously defined). He proved that any such algebra is a Jordan algebra.
Not every Jordan algebra is formally real, but Jordan, von Neumann & Wigner (1934) classified the finite-dimensional formally real Jordan algebras, also called Euclidean Jordan algebras. Every formally real Jordan algebra can be written as a direct sum of so-called simple ones, which are not themselves direct sums in a nontrivial way. In finite dimensions, the simple formally real Jordan algebras come in four infinite families, together with one exceptional case: the Jordan algebras of "n"×"n" self-adjoint real matrices, of "n"×"n" self-adjoint complex matrices, and of "n"×"n" self-adjoint quaternionic matrices, all with the product ("xy" + "yx")/2; the Jordan algebra freely generated by R"n" subject to the relations
formula_11
where the right-hand side is defined using the usual inner product on R"n" (this is sometimes called a spin factor or a Jordan algebra of Clifford type); and, as the exceptional case, the Jordan algebra of 3×3 self-adjoint matrices over the octonions described above (the Albert algebra).
Of these possibilities, so far it appears that nature makes use only of the "n"×"n" complex matrices as algebras of observables. However, the spin factors play a role in special relativity, and all the formally real Jordan algebras are related to projective geometry.
Peirce decomposition.
If "e" is an idempotent in a Jordan algebra "A" ("e"² = "e") and "R" is the operation of multiplication by "e", then the Jordan identity implies that "R"(2"R" − 1)("R" − 1) = 0,
so the only eigenvalues of "R" are 0, 1/2, 1. If the Jordan algebra "A" is finite-dimensional over a field of characteristic not 2, this implies that it is a direct sum of subspaces "A" = "A"0("e") ⊕ "A"1/2("e") ⊕ "A"1("e") of the three eigenspaces. This decomposition was first considered for totally real Jordan algebras. It was later studied in full generality and called the Peirce decomposition of "A" relative to the idempotent "e".
Special kinds and generalizations.
Infinite-dimensional Jordan algebras.
In 1979, Efim Zelmanov classified infinite-dimensional simple (and prime non-degenerate) Jordan algebras. They are either of Hermitian or Clifford type. In particular, the only exceptional simple Jordan algebras are finite-dimensional Albert algebras, which have dimension 27.
Jordan operator algebras.
The theory of operator algebras has been extended to cover Jordan operator algebras.
The counterparts of C*-algebras are JB algebras, which in finite dimensions are called Euclidean Jordan algebras. The norm on the real Jordan algebra must be complete and satisfy the axioms:
formula_12
These axioms guarantee that the Jordan algebra is formally real, so that, if a sum of squares of terms is zero, those terms must be zero. The complexifications of JB algebras are called Jordan C*-algebras or JB*-algebras. They have been used extensively in complex geometry to extend Koecher's Jordan algebraic treatment of bounded symmetric domains to infinite dimensions. Not all JB algebras can be realized as Jordan algebras of self-adjoint operators on a Hilbert space, exactly as in finite dimensions. The exceptional Albert algebra is the common obstruction.
The Jordan algebra analogue of von Neumann algebras is the class of JBW algebras. These turn out to be JB algebras which, as Banach spaces, are the dual spaces of Banach spaces. Much of the structure theory of von Neumann algebras can be carried over to JBW algebras. In particular the JBW factors (those with center reduced to R) are completely understood in terms of von Neumann algebras. Apart from the exceptional Albert algebra, all JBW factors can be realised as Jordan algebras of self-adjoint operators on a Hilbert space closed in the weak operator topology. Of these the spin factors can be constructed very simply from real Hilbert spaces. All other JBW factors are either the self-adjoint part of a von Neumann factor or its fixed point subalgebra under a period 2 *-antiautomorphism of the von Neumann factor.
Jordan rings.
A Jordan ring is a generalization of Jordan algebras, requiring only that the Jordan ring be over a general ring rather than a field. Alternatively one can define a Jordan ring as a commutative nonassociative ring that respects the Jordan identity.
Jordan superalgebras.
Jordan superalgebras were introduced by Kac, Kantor and Kaplansky; these are formula_13-graded algebras formula_14 where formula_15 is a Jordan algebra and formula_16 has a "Lie-like" product with values in formula_15.
Any formula_13-graded associative algebra formula_17 becomes a Jordan superalgebra with respect to the graded Jordan brace
formula_18
Jordan simple superalgebras over an algebraically closed field of characteristic 0 were classified by . They include several families and some exceptional algebras, notably formula_19 and formula_20.
J-structures.
The concept of J-structure was introduced by to develop a theory of Jordan algebras using linear algebraic groups and axioms taking the Jordan inversion as basic operation and Hua's identity as a basic relation. In characteristic not equal to 2 the theory of J-structures is essentially the same as that of Jordan algebras.
Quadratic Jordan algebras.
Quadratic Jordan algebras are a generalization of (linear) Jordan algebras introduced by Kevin McCrimmon (1966). The fundamental identities of the quadratic representation of a linear Jordan algebra are used as axioms to define a quadratic Jordan algebra over a field of arbitrary characteristic. There is a uniform description of finite-dimensional simple quadratic Jordan algebras, independent of characteristic: in characteristic not equal to 2 the theory of quadratic Jordan algebras reduces to that of linear Jordan algebras.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "xy = yx"
},
{
"math_id": 1,
"text": "(xy)(xx) = x(y(xx))"
},
{
"math_id": 2,
"text": "x^n = x \\cdots x"
},
{
"math_id": 3,
"text": "x^m (x^n y) = x^n(x^m y)"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "x^n"
},
{
"math_id": 6,
"text": "x\\circ y = \\frac{xy+yx}{2}."
},
{
"math_id": 7,
"text": "[x,y] = xy - yx"
},
{
"math_id": 8,
"text": "\\sigma(xy + yx) = xy + yx."
},
{
"math_id": 9,
"text": "(xy + yx)/2"
},
{
"math_id": 10,
"text": "(xy + yx)/2,"
},
{
"math_id": 11,
"text": "x^2 = \\langle x, x\\rangle "
},
{
"math_id": 12,
"text": "\\displaystyle{\\|a\\circ b\\|\\le \\|a\\|\\cdot \\|b\\|,\\,\\,\\, \\|a^2\\|=\\|a\\|^2,\\,\\,\\, \\|a^2\\|\\le \\|a^2 +b^2\\|.}"
},
{
"math_id": 13,
"text": "\\mathbb{Z}/2"
},
{
"math_id": 14,
"text": "J_0 \\oplus J_1"
},
{
"math_id": 15,
"text": "J_0"
},
{
"math_id": 16,
"text": "J_1"
},
{
"math_id": 17,
"text": "A_0 \\oplus A_1"
},
{
"math_id": 18,
"text": "\\{x_i,y_j\\} = x_i y_j + (-1)^{ij} y_j x_i \\ . "
},
{
"math_id": 19,
"text": "K_3"
},
{
"math_id": 20,
"text": "K_{10}"
}
] | https://en.wikipedia.org/wiki?curid=936371 |
936584 | 365 (number) | Natural number
365 (three hundred [and] sixty-five) is the natural number following 364 and preceding 366.
Mathematics.
365 is a semiprime and a centered square number. It is also the fifth 38-gonal number.
It factors as formula_0, where both 5 and 73 are prime numbers.
It is the smallest number that can be expressed in more than one way as a sum of two or more consecutive square numbers:
formula_1
formula_2
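Both representations, and the minimality claim, can be verified by a brute-force search; the sketch below considers runs of at least two consecutive squares up to 365.
<syntaxhighlight lang="python">
from collections import defaultdict

LIMIT = 365
reps = defaultdict(list)            # number -> list of runs (first, last) of consecutive squares
for start in range(1, LIMIT):
    total, k = 0, start
    while total + k * k <= LIMIT:
        total += k * k
        if k > start:               # only count runs of at least two consecutive squares
            reps[total].append((start, k))
        k += 1

print(reps[365])                                        # [(10, 12), (13, 14)]
print(min(n for n, r in reps.items() if len(r) >= 2))   # 365 -- no smaller number has two such representations
</syntaxhighlight>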
There are no known primes with period 365, while at least one prime with each of the periods 1 to 364 is known.
Timekeeping.
There are 365.2422 solar days in the mean tropical year. Several solar calendars have a year containing 365 days. Related to this, in Ontario, the driver's license learner's permit used to be called "365" because it was valid for only 365 days. Financial and scientific calculations often use a 365-day calendar to simplify daily rates.
Religious meanings.
Judaism and Christianity.
In the Jewish faith there are 365 "negative commandments". Also, the Bible states that Enoch lived for 365 years before entering heaven alive.
Gnosticism.
The letters of the deity Abraxas, in the Greek notation, make up the number 365. This number was subsequently viewed as signifying the levels of heaven.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "5 \\times 73"
},
{
"math_id": 1,
"text": "365 = 13^2 + 14^2"
},
{
"math_id": 2,
"text": "365 = 10^2 + 11^2 + 12^2"
}
] | https://en.wikipedia.org/wiki?curid=936584 |
9366097 | Truncation error | Error from taking a finite sum of an infinite series
In numerical analysis and scientific computing, truncation error is an error caused by approximating a mathematical process.
Examples.
Infinite series.
A summation series for formula_0 is given by an infinite series such as
formula_1
In reality, we can only use a finite number of these terms, as it would take an infinite amount of computational time to make use of all of them. So suppose we use only the first three terms of the series; then
formula_2
In this case, the truncation error is formula_3
Example A:
Given the following infinite series, find the truncation error for "x" = 0.75 if only the first three terms of the series are used.
formula_4
Solution
Using only first three terms of the series gives
formula_5
The sum of an infinite geometrical series
formula_6
is given by
formula_7
For our series, "a" = 1 and "r" = 0.75, to give
formula_8
The truncation error hence is
formula_9
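The same numbers can be reproduced programmatically; the following is a direct transcription of this example rather than a general-purpose routine.
<syntaxhighlight lang="python">
x = 0.75
partial_sum = sum(x**k for k in range(3))   # first three terms: 1 + x + x^2
exact_sum = 1.0 / (1.0 - x)                 # sum of the infinite geometric series
print(partial_sum)                          # 2.3125
print(exact_sum)                            # 4.0
print(exact_sum - partial_sum)              # 1.6875
</syntaxhighlight>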
Differentiation.
The definition of the exact first derivative of the function is given by
formula_10
However, if we are calculating the derivative numerically, formula_11 has to be finite. The error caused by choosing formula_11 to be finite is a truncation error in the mathematical process of differentiation.
Example A:
Find the truncation error in calculating the first derivative of formula_12 at formula_13 using a step size of formula_14
Solution:
The first derivative of formula_12 is
formula_15
and at formula_13,
formula_16
The approximate value is given by
formula_17
The truncation error hence is
formula_18
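A short script reproduces these values; it simply restates the forward-difference quotient used above.
<syntaxhighlight lang="python">
def f(x):
    return 5 * x**3

x, h = 7.0, 0.25
exact = 15 * x**2                    # analytic first derivative of 5x^3
approx = (f(x + h) - f(x)) / h       # forward-difference approximation
print(exact, approx)                 # 735.0 761.5625
print(exact - approx)                # -26.5625
</syntaxhighlight>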
Integration.
The definition of the exact integral of a function formula_19 from formula_20 to formula_21 is given as follows.
Let formula_22 be a function defined on a closed interval formula_23 of the real numbers, formula_24, and
formula_25
be a partition of "I", where
formula_26
formula_27
where formula_28 and formula_29.
This implies that we are finding the area under the curve using infinitely many rectangles. However, if we are calculating the integral numerically, we can only use a finite number of rectangles. The error caused by choosing a finite number of rectangles as opposed to infinitely many of them is a truncation error in the mathematical process of integration.
Example A.
For the integral
formula_30
find the truncation error if a two-segment left-hand Riemann sum is used with equal width of segments.
Solution
We have the exact value as
formula_31
Using two rectangles of equal width to approximate the area under the curve, the approximate value of the integral is
formula_32
formula_33
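The two-segment left-hand Riemann sum and its truncation error can be reproduced as follows (a direct transcription of this example):
<syntaxhighlight lang="python">
def f(x):
    return x**2

a, b, n = 3.0, 9.0, 2
width = (b - a) / n                          # two segments of equal width
left_sum = sum(f(a + i * width) * width for i in range(n))
exact = (b**3 - a**3) / 3
print(left_sum, exact)                       # 135.0 234.0
print(exact - left_sum)                      # 99.0 (the truncation error)
</syntaxhighlight>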
Occasionally, by mistake, round-off error (the consequence of using finite-precision floating-point numbers on computers) is also called truncation error, especially if the number is rounded by chopping. That is not the correct use of "truncation error"; however, calling it truncating a number may be acceptable.
Addition.
Truncation error can cause formula_34 within a computer when formula_35 because formula_36 (as it should), while formula_37. Here, formula_38 has a truncation error equal to 1. This truncation error occurs because computers do not store the least significant digits of an extremely large number.
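In double-precision floating-point arithmetic this effect can be demonstrated directly with the values given above:
<syntaxhighlight lang="python">
A, B, C = -1e25, 1e25, 1.0
print((A + B) + C)   # 1.0 -- the large terms cancel first, so adding 1 is exact
print(A + (B + C))   # 0.0 -- the 1 is lost, since 1e25 + 1 rounds back to 1e25 in double precision
</syntaxhighlight>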
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " e^x"
},
{
"math_id": 1,
"text": " e^x=1+ x+ \\frac{x^2}{2!} + \\frac{x^3}{3!}+ \\frac{x^4}{4!}+ \\cdots"
},
{
"math_id": 2,
"text": "e^x\\approx 1+x+ \\frac{x^2}{2!}"
},
{
"math_id": 3,
"text": " \\frac{x^3}{3!}+\\frac{x^4}{4!}+ \\cdots"
},
{
"math_id": 4,
"text": " S = 1 + x + x^2 + x^3 + \\cdots, \\qquad \\left|x\\right|<1. "
},
{
"math_id": 5,
"text": "\\begin{align}\nS_3 &= \\left(1+x+x^2\\right)_{x=0.75} \\\\\n& = 1+0.75+\\left(0.75\\right)^2 \\\\\n&= 2.3125\n\\end{align}"
},
{
"math_id": 6,
"text": " S = a + ar + ar^2 + ar^3 + \\cdots,\\ r<1 "
},
{
"math_id": 7,
"text": " S = \\frac{a}{1-r}"
},
{
"math_id": 8,
"text": " S=\\frac{1}{1-0.75}=4"
},
{
"math_id": 9,
"text": " \\mathrm{TE} = 4 - 2.3125 = 1.6875"
},
{
"math_id": 10,
"text": "f'(x) = \\lim_{h \\to 0} \\frac{f(x+h)-f(x)}{h}"
},
{
"math_id": 11,
"text": "h"
},
{
"math_id": 12,
"text": "f(x)=5x^3"
},
{
"math_id": 13,
"text": "x=7"
},
{
"math_id": 14,
"text": "h=0.25"
},
{
"math_id": 15,
"text": "f'(x) = 15x^2,"
},
{
"math_id": 16,
"text": "f'(7) = 735."
},
{
"math_id": 17,
"text": "f'(7) = \\frac{f(7+0.25)-f(7)}{0.25} = 761.5625"
},
{
"math_id": 18,
"text": " \\mathrm{TE} = 735 - 761.5625 = -26.5625"
},
{
"math_id": 19,
"text": " f(x) "
},
{
"math_id": 20,
"text": " a "
},
{
"math_id": 21,
"text": " b "
},
{
"math_id": 22,
"text": "f: [a,b] \\to \\Reals"
},
{
"math_id": 23,
"text": "[a,b]"
},
{
"math_id": 24,
"text": "\\Reals"
},
{
"math_id": 25,
"text": "P = \\left \\{[x_0,x_1], [x_1,x_2], \\dots,[x_{n-1},x_n] \\right \\},"
},
{
"math_id": 26,
"text": "a = x_0 < x_1 < x_2 < \\cdots < x_n = b."
},
{
"math_id": 27,
"text": " \\int_{a}^b f(x) \\, dx = \\sum_{i=1}^{n} f(x_i^*)\\, \\Delta x_i"
},
{
"math_id": 28,
"text": "\\Delta x_i = x_i - x_{i-1}"
},
{
"math_id": 29,
"text": "x_i^* \\in [x_{i-1}, x_i]"
},
{
"math_id": 30,
"text": " \\int_{3}^{9}x^{2}{dx}"
},
{
"math_id": 31,
"text": " \\begin{align}\n\\int_{3}^{9}{x^{2}{dx}} &= \\left[ \\frac{x^{3}}{3} \\right]_{3}^{9} \\\\\n& = \\left[ \\frac{9^{3} - 3^{3}}{3} \\right] \\\\\n& = 234\n\\end{align}"
},
{
"math_id": 32,
"text": "\\begin{align}\n\\int_3^9 x^2 \\, dx &\\approx \\left. \\left(x^2\\right) \\right|_{x = 3}(6 - 3) + \\left. \\left(x^2\\right) \\right|_{x = 6}(9 - 6) \\\\\n& = (3^2)3 + (6^2)3 \\\\\n&= 27 + 108 \\\\\n&= 135\n\\end{align}"
},
{
"math_id": 33,
"text": "\\begin{align}\n\\text{Truncation Error} &= \\text{Exact Value} - \\text{Approximate Value} \\\\\n&= 234 - 135 \\\\\n&= 99.\n\\end{align}"
},
{
"math_id": 34,
"text": "(A+B)+C \\neq A+(B+C)"
},
{
"math_id": 35,
"text": "A = -10^{25}, B = 10^{25}, C = 1"
},
{
"math_id": 36,
"text": "(A+B)+C = (0)+C = 1"
},
{
"math_id": 37,
"text": "A+(B+C) = A+(B)=0"
},
{
"math_id": 38,
"text": "A+(B+C)"
}
] | https://en.wikipedia.org/wiki?curid=9366097 |
9367435 | Variable-range hopping | Variable-range hopping is a model used to describe carrier transport in a disordered semiconductor or in amorphous solid by hopping in an extended temperature range. It has a characteristic temperature dependence of
formula_0
where formula_1 is the conductivity and formula_2 is a parameter dependent on the model under consideration.
Mott variable-range hopping.
The Mott variable-range hopping describes low-temperature conduction in strongly disordered systems with localized charge-carrier states and has a characteristic temperature dependence of
formula_3
for three-dimensional conductance (with formula_2 = 1/4), and is generalized to "d"-dimensions
formula_4.
Hopping conduction at low temperatures is of great interest because of the savings the semiconductor industry could achieve if they were able to replace single-crystal devices with glass layers.
Derivation.
The original Mott paper introduced a simplifying assumption that the hopping energy depends inversely on the cube of the hopping distance (in the three-dimensional case). Later it was shown that this assumption was unnecessary, and this proof is followed here. In the original paper, the hopping probability at a given temperature was seen to depend on two parameters, "R" the spatial separation of the sites, and "W", their energy separation. Apsley and Hughes noted that in a truly amorphous system, these variables are random and independent and so can be combined into a single parameter, the "range" formula_5 between two sites, which determines the probability of hopping between them.
Mott showed that the probability of hopping between two states of spatial separation formula_6 and energy separation "W" has the form:
formula_7
where α⁻¹ is the attenuation length for a hydrogen-like localised wave-function. This assumes that hopping to a state with a higher energy is the rate-limiting process.
We now define formula_8, the "range" between two states, so formula_9. The states may be regarded as points in a four-dimensional random array (three spatial coordinates and one energy coordinate), with the "distance" between them given by the range formula_5.
Conduction is the result of many series of hops through this four-dimensional array and as short-range hops are favoured, it is the average nearest-neighbour "distance" between states which determines the overall conductivity. Thus the conductivity has the form
formula_10
where formula_11is the average nearest-neighbour range. The problem is therefore to calculate this quantity.
The first step is to obtain formula_12, the total number of states within a range formula_5 of some initial state at the Fermi level. For "d"-dimensions, and under particular assumptions this turns out to be
formula_13
where formula_14.
The particular assumptions are simply that formula_11 is much less than the band-width and comfortably bigger than the interatomic spacing.
Then the probability that a state with range formula_5 is the nearest neighbour in the four-dimensional space (or in general the ("d"+1)-dimensional space) is
formula_15
the nearest-neighbour distribution.
For the "d"-dimensional case then
formula_16.
This can be evaluated by making a simple substitution of formula_17 into the gamma function, formula_18
After some algebra this gives
formula_19
and hence that
formula_20.
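The closed-form expression for the mean nearest-neighbour range can be checked against a direct numerical integration; in the sketch below the values of K and d are arbitrary test inputs, and SciPy is assumed to be available.
<syntaxhighlight lang="python">
import math
from scipy.integrate import quad

def mean_range_numeric(K, d):
    integrand = lambda R: (d + 1) * K * R**(d + 1) * math.exp(-K * R**(d + 1))
    value, _ = quad(integrand, 0, math.inf)
    return value

def mean_range_closed_form(K, d):
    return math.gamma((d + 2) / (d + 1)) / K**(1 / (d + 1))

K, d = 0.37, 3                          # arbitrary test values; d = 3 is the usual Mott case
print(mean_range_numeric(K, d))         # ~1.16
print(mean_range_closed_form(K, d))     # agrees to within the integration tolerance
</syntaxhighlight>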
Non-constant density of states.
When the density of states is not constant (odd power law N(E)), the Mott conductivity is also recovered, as shown in this article.
Efros–Shklovskii variable-range hopping.
The Efros–Shklovskii (ES) variable-range hopping is a conduction model which accounts for the Coulomb gap, a small jump in the density of states near the Fermi level due to interactions between localized electrons. It was named after Alexei L. Efros and Boris Shklovskii who proposed it in 1975.
The consideration of the Coulomb gap changes the temperature dependence to
formula_21
for all dimensions (i.e. formula_2 = 1/2).
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma= \\sigma_0e^{-(T_0/T)^\\beta}"
},
{
"math_id": 1,
"text": "\\sigma"
},
{
"math_id": 2,
"text": "\\beta"
},
{
"math_id": 3,
"text": "\\sigma= \\sigma_0e^{-(T_0/T)^{1/4}}"
},
{
"math_id": 4,
"text": "\\sigma= \\sigma_0e^{-(T_0/T)^{1/(d+1)}}"
},
{
"math_id": 5,
"text": "\\textstyle\\mathcal{R}"
},
{
"math_id": 6,
"text": "\\textstyle R"
},
{
"math_id": 7,
"text": "P\\sim \\exp \\left[-2\\alpha R-\\frac{W}{kT}\\right]"
},
{
"math_id": 8,
"text": "\\textstyle\\mathcal{R} = 2\\alpha R+W/kT"
},
{
"math_id": 9,
"text": "\\textstyle P\\sim \\exp (-\\mathcal{R})"
},
{
"math_id": 10,
"text": "\\sigma \\sim \\exp (-\\overline{\\mathcal{R}}_{nn})"
},
{
"math_id": 11,
"text": "\\textstyle\\overline{\\mathcal{R}}_{nn}"
},
{
"math_id": 12,
"text": "\\textstyle\\mathcal{N}(\\mathcal{R})"
},
{
"math_id": 13,
"text": "\\mathcal{N}(\\mathcal{R}) = K \\mathcal{R}^{d+1}"
},
{
"math_id": 14,
"text": "\\textstyle K = \\frac{N\\pi kT}{3\\times 2^d \\alpha^d}"
},
{
"math_id": 15,
"text": "P_{nn}(\\mathcal{R}) = \\frac{\\partial \\mathcal{N}(\\mathcal{R})}{\\partial \\mathcal{R}} \\exp [-\\mathcal{N}(\\mathcal{R})]"
},
{
"math_id": 16,
"text": "\\overline{\\mathcal{R}}_{nn} = \\int_0^\\infty (d+1)K\\mathcal{R}^{d+1}\\exp (-K\\mathcal{R}^{d+1})d\\mathcal{R}"
},
{
"math_id": 17,
"text": "\\textstyle t=K\\mathcal{R}^{d+1}"
},
{
"math_id": 18,
"text": "\\textstyle \\Gamma(z) = \\int_0^\\infty t^{z-1} e^{-t}\\,\\mathrm{d}t"
},
{
"math_id": 19,
"text": "\\overline{\\mathcal{R}}_{nn} = \\frac{\\Gamma(\\frac{d+2}{d+1})}{K^{\\frac{1}{d+1}}}"
},
{
"math_id": 20,
"text": "\\sigma \\propto \\exp \\left(-T^{-\\frac{1}{d+1}}\\right)"
},
{
"math_id": 21,
"text": "\\sigma= \\sigma_0e^{-(T_0/T)^{1/2}}"
}
] | https://en.wikipedia.org/wiki?curid=9367435 |
9368062 | Magnetic tweezers | Trap using a magnetic field to trap micrometre-seized ferromagnetic beads
Magnetic tweezers (MT) are scientific instruments for the manipulation and characterization of biomolecules or polymers. They exert forces and torques on individual molecules or groups of molecules, and can be used to measure the tensile strength of molecules or the force generated by them.
Most commonly magnetic tweezers are used to study mechanical properties of biological macromolecules like DNA or proteins in single-molecule experiments. Other applications are the rheology of soft matter, and studies of force-regulated processes in living cells. Forces are typically on the order of pico- to nanonewtons (pN to nN). Due to their simple architecture, magnetic tweezers are a popular biophysical tool.
In experiments, the molecule of interest is attached to a magnetic microparticle. The magnetic tweezer is equipped with magnets that are used to manipulate the magnetic particles whose position is measured with the help of video microscopy.
Construction principle and physics of magnetic tweezers.
A magnetic tweezers apparatus consists of magnetic micro-particles, which can be manipulated with the help of an external magnetic field. The position of the magnetic particles is then determined by a microscopic objective with a camera.
Magnetic particles.
Magnetic particles for the operation in magnetic tweezers come with a wide range of properties and have to be chosen according to the intended application. Two basic types of magnetic particles are described in the following paragraphs; however there are also others like magnetic nanoparticles in ferrofluids, which allow experiments inside a cell.
Superparamagnetic beads are commercially available with a number of different characteristics. The most common is the use of spherical particles of a diameter in the micrometer range. They consist of a porous latex matrix in which magnetic nanoparticles have been embedded. Latex is auto-fluorescent and may therefore be advantageous for the imaging of their position. Irregular shaped particles present a larger surface and hence a higher probability to bind to the molecules to be studied. The coating of the microbeads may also contain ligands able to attach the molecules of interest. For example, the coating may contain streptavidin which couples strongly to biotin, which itself may be bound to the molecules of interest.
When exposed to an external magnetic field, these microbeads become magnetized. The induced magnetic moment formula_0 is proportional to a weak external magnetic field formula_1:
formula_2
where formula_3 is the vacuum permeability. It is also proportional to the volume formula_4 of the microspheres, which stems from the fact that the number of magnetic nanoparticles scales with the size of the bead. The magnetic susceptibility formula_5 is assumed to be scalar in this first estimation and may be calculated by formula_6, where formula_7 is the relative permeability. In a strong external field, the induced magnetic moment saturates at a material dependent value formula_8. The force formula_9 experienced by a microbead can be derived from the potential formula_10 of this magnetic moment in an outer magnetic field:
formula_11
The outer magnetic field can be evaluated numerically with the help of finite element analysis or by simply measuring the magnetic field with the help of a Hall effect sensor. Theoretically it would be possible to calculate the force on the beads with these formulae; however the results are not very reliable due to uncertainties of the involved variables, but they allow estimating the order of magnitude and help to better understand the system. More accurate numerical values can be obtained considering the Brownian motion of the beads.
Due to anisotropies in the stochastic distribution of the nanoparticles within the microbead, the magnetic moment is not perfectly aligned with the outer magnetic field, i.e. the magnetic susceptibility tensor cannot be reduced to a scalar. For this reason, the beads are also subjected to a torque formula_12 which tries to align formula_13 and formula_1:
formula_14
The torques generated by this method are typically much greater than formula_15, which is more than necessary to twist the molecules of interest.
The use of ferromagnetic nanowires for the operation of magnetic tweezers enlarges their experimental application range. The length of these wires typically is in the order of tens of nanometers up to tens of micrometers, which is much larger than their diameter. In comparison with superparamagnetic beads, they allow the application of much larger forces and torques. In addition to that, they present a remnant magnetic moment. This allows the operation in weak magnetic field strengths. It is possible to produce nanowires with surface segments that present different chemical properties, which allows controlling the position where the studied molecules can bind to the wire.
Magnets.
To be able to exert torques on the microbeads, at least two magnets are necessary, but many other configurations have been realized, ranging from only one magnet that only pulls the magnetic microbeads to a system of six electromagnets that allows fully controlling the 3-dimensional position and rotation via a digital feedback loop. The magnetic field strength decreases roughly exponentially with the distance from the axis linking the two magnets, on a typical scale of about the width of the gap between the magnets. Since this scale is rather large in comparison to the distances the microbead moves in an experiment, the force acting on it may be treated as constant. Therefore, magnetic tweezers are passive force clamps by the nature of their construction, in contrast to optical tweezers, although they may also be used as position clamps when combined with a feedback loop. The field strength may be increased by sharpening the pole face of the magnet, which, however, also diminishes the area where the field may be considered constant. An iron ring connecting the outer poles of the magnets may help to reduce stray fields. Magnetic tweezers can be operated with both permanent magnets and electromagnets. The two techniques have their specific advantages.
Permanent magnets of magnetic tweezers are usually made of rare-earth materials, like neodymium, and can reach field strengths exceeding 1.3 tesla. The force on the beads may be controlled by moving the magnets along the vertical axis. Moving them up decreases the field strength at the position of the bead and vice versa. Torques on the magnetic beads may be exerted by turning the magnets around the vertical axis to change the direction of the field. The size of the magnets, as well as their spacing, is on the order of millimeters.
The use of electromagnets in magnetic tweezers has the advantage that the field strength and direction can be changed just by adjusting the amplitude and the phase of the current for the magnets. For this reason, the magnets do not need to be moved which allows a faster control of the system and reduces mechanical noise. In order to increase the maximum field strength, a core of a soft paramagnetic material with high saturation and low remanence may be added to the solenoid. In any case, however, the typical field strengths are much lower compared to those of permanent magnets of comparable size. Additionally, using electromagnets requires high currents that produce heat that may necessitate a cooling system.
Bead tracking system.
The displacement of the magnetic beads corresponds to the response of the system to the imposed magnetic field and hence needs to be precisely measured: In a typical set-up, the experimental volume is illuminated from the top so that the beads produce diffraction rings in the focal plane of an objective which is placed under the tethering surface. The diffraction pattern is then recorded by a CCD-camera. The image can be analyzed in real time by a computer. The detection of the position in the plane of the tethering surface is not complicated since it corresponds to the center of the diffraction rings. The precision can be up to a few nanometers. For the position along the vertical axis, the diffraction pattern needs to be compared to reference images, which show the diffraction pattern of the considered bead at a number of known distances from the focal plane. These calibration images are obtained by keeping a bead fixed while displacing the objective, i.e. the focal plane, with the help of piezoelectric elements by known distances. With the help of interpolation, the resolution can reach a precision of up to 10 nm along this axis. The obtained coordinates may be used as input for a digital feedback loop that controls the magnetic field strength, for example, in order to keep the bead at a certain position.
Non-magnetic beads are usually also added to the sample as a reference to provide a background displacement vector. They have a different diameter from the magnetic beads so that they are optically distinguishable. This is necessary to detect potential drift of the fluid. For example, if the density of magnetic particles is too high, they may drag the surrounding viscous fluid with them. The displacement vector of a magnetic bead can be determined by subtracting its initial position vector and this background displacement vector from its current position.
Force Calibration.
The force that is exerted by the magnetic field on the magnetic beads can be determined by considering the thermal fluctuations of the bead in the horizontal plane: The problem is rotationally symmetric with respect to the vertical axis; hereafter one arbitrarily picked direction in the symmetry plane is called formula_16. The analysis is the same for the direction orthogonal to the x-direction and may be used to increase precision. If the bead leaves its equilibrium position on the formula_17-axis by formula_18 due to thermal fluctuations, it will be subjected to a restoring force formula_19 that increases linearly with formula_18 in the first order approximation. Considering only absolute values of the involved vectors, it is geometrically clear that the proportionality constant is the force exerted by the magnets formula_20 over the length formula_21 of the molecule that keeps the bead anchored to the tethering surface:
formula_22.
The equipartition theorem states that the mean energy that is stored in this "spring" is equal to formula_23 per degree of freedom. Since only one direction is considered here, the potential energy of the system reads:
formula_24.
From this, a first estimate for the force acting on the bead can be deduced:
formula_25.
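This first estimate amounts to a one-line computation on the tracked bead positions; in the sketch below the tether length, temperature and fluctuation amplitude are made-up values used only to illustrate that the result falls in the piconewton range quoted earlier.
<syntaxhighlight lang="python">
import numpy as np

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 298.0               # assumed temperature, K
l = 1.0e-6              # assumed length of the tethering molecule, m

# Hypothetical tracked x-positions of the bead (metres); here they are simulated
# with a 50 nm r.m.s. fluctuation purely for illustration.
rng = np.random.default_rng(1)
delta_x = 50e-9 * rng.standard_normal(10_000)

F = l * k_B * T / np.var(delta_x)   # force estimate from the equipartition theorem
print(F)                            # on the order of 1e-12 N, i.e. a few piconewtons
</syntaxhighlight>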
For a more accurate calibration, however, an analysis in Fourier space is necessary. The power spectrum density formula_26 of the position of the bead is experimentally available. A theoretical expression for this spectrum is derived in the following, which can then be fitted to the experimental curve in order to obtain the force exerted by the magnets on the bead as a fitting parameter. By definition this spectrum is the squared modulus of the Fourier transform of the position formula_27 over the spectral bandwidth formula_28:
formula_29
formula_30 can be obtained considering the equation of motion for a bead of mass formula_31:
formula_32
The term formula_33 corresponds to the Stokes friction force for a spherical particle of radius formula_34 in a medium of viscosity formula_35 and formula_36 is the restoring force which is opposed to the stochastic force formula_37 due to the Brownian motion. Here, one may neglect the inertial term formula_38, because the system is in a regime of very low Reynolds number formula_39.
The equation of motion can be Fourier transformed inserting the driving force and the position in Fourier space:
formula_40
This leads to:
formula_41.
The power spectral density of the stochastic force formula_42 can be derived by using the equipartition theorem and the fact that Brownian collisions are completely uncorrelated:
formula_43
This corresponds to the Fluctuation-dissipation theorem. With that expression, it is possible to give a theoretical expression for the power spectrum:
formula_44
The only unknown in this expression, formula_45, can be determined by fitting this expression to the experimental power spectrum. For more accurate results, one may subtract the effect due to finite camera integration time from the experimental spectrum before doing the fit.
Another force calibration method is to use the viscous drag of the microbeads: the microbeads are pulled through the viscous medium while their position is recorded. Since the Reynolds number for the system is very low, it is possible to apply Stokes' law to calculate the friction force, which is in equilibrium with the force exerted by the magnets:
formula_46.
The velocity formula_47 can be determined from the recorded position values. The force obtained via this formula can then be related to a given configuration of the magnets, which may serve as a calibration.
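A minimal sketch of this calibration, with assumed values for the viscosity, bead radius and extracted velocity (none taken from a specific experiment):
<syntaxhighlight lang="python">
from math import pi

eta = 1.0e-3    # assumed viscosity of the medium (roughly that of water), Pa*s
R = 0.5e-6      # assumed bead radius, m
v = 20e-6       # assumed bead velocity extracted from the recorded positions, m/s

F = 6 * pi * eta * R * v    # Stokes drag, balancing the force exerted by the magnets
print(F)                    # ~1.9e-13 N
</syntaxhighlight>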
Typical experimental set-up.
This section gives an example of an experiment carried out by Strick, Allemand and Croquette with the help of magnetic tweezers. A double-stranded DNA molecule is fixed with multiple binding sites on one end to a glass surface and on the other to a magnetic micro bead, which can be manipulated in a magnetic tweezers apparatus. By turning the magnets, torsional stress can be applied to the DNA molecule. Rotations in the sense of the DNA helix are counted positively and vice versa. While twisting, the magnetic tweezers also allow stretching the DNA molecule. This way, torsion-extension curves may be recorded at different stretching forces. For low forces (less than about 0.5 pN), the DNA forms supercoils, so-called plectonemes, which decrease the extension of the DNA molecule quite symmetrically for positive and negative twists. Augmenting the pulling force already increases the extension for zero imposed torsion. Positive twists lead again to plectoneme formation that reduces the extension. Negative twist, however, does not change the extension of the DNA molecule a lot. This can be interpreted as the separation of the two strands, which corresponds to the denaturation of the molecule. In the high force regime, the extension is nearly independent of the applied torsional stress. The interpretation is the appearance of local regions of highly overwound DNA. An important parameter of this experiment is also the ionic strength of the solution, which affects the critical values of the applied pulling force that separate the three force regimes.
History and development.
Applying magnetic theory to the study of biology is a biophysical technique that started to appear in Germany in the early 1920s. Possibly the first demonstration was published by Alfred Heilbronn in 1922; his work looked at viscosity of protoplasts. The following year, Freundlich and Seifriz explored rheology in echinoderm eggs. Both studies included insertion of magnetic particles into cells and resulting movement observations in a magnetic field gradient.
In 1949 at Cambridge University, Francis Crick and Arthur Hughes demonstrated a novel use of the technique, calling it "The Magnetic Particle Method." The idea, which originally came from Dr. Honor Fell, was that tiny magnetic beads, phagocytosed by whole cells grown in culture, could be manipulated by an external magnetic field. The tissue culture was allowed to grow in the presence of the magnetic material, and cells that contained a magnetic particle could be seen with a high power microscope. As the magnetic particle was moved through the cell by a magnetic field, measurements about the physical properties of the cytoplasm were made. Although some of their methods and measurements were self-admittedly crude, their work demonstrated the usefulness of magnetic field particle manipulation and paved the way for further developments of this technique. The magnetic particle phagocytosis method continued to be used for many years to research cytoplasm rheology and other physical properties in whole cells.
An innovation in the 1990s led to an expansion of the technique's usefulness in a way that was similar to the then-emerging optical tweezers method. Chemically linking an individual DNA molecule between a magnetic bead and a glass slide allowed researchers to manipulate a single DNA molecule with an external magnetic field. Upon application of torsional forces to the molecule, deviations from free-form movement could be measured against theoretical standard force curves or Brownian motion analysis. This provided insight into structural and mechanical properties of DNA, such as elasticity.
Magnetic tweezers as an experimental technique has become exceptionally diverse in use and application. More recently, even more novel methods have been introduced or proposed. Since 2002, the potential for experiments involving many tethering molecules and parallel magnetic beads has been explored, shedding light on interaction mechanics, especially in the case of DNA-binding proteins. A technique was published in 2005 that involved coating a magnetic bead with a molecular receptor and the glass slide with its ligand. This allows for a unique look at receptor-ligand dissociation force. In 2007, a new method for magnetically manipulating whole cells was developed by Kollmannsberger and Fabry. The technique involves attaching beads to the extracellular matrix and manipulating the cell from the outside of the membrane to look at structural elasticity. This method continues to be used as a means of studying rheology, as well as cellular structural proteins. A study appeared in 2013 that used magnetic tweezers to mechanically measure the unwinding and rewinding of a single neuronal SNARE complex by tethering the entire complex between a magnetic bead and the slide, and then using the applied magnetic field force to pull the complex apart.
Biological applications.
Magnetic tweezer rheology.
Magnetic tweezers can be used to measure mechanical properties such as rheology, the study of matter flow and elasticity, in whole cells. The phagocytosis method previously described is useful for capturing a magnetic bead inside a cell. Measuring the movement of the beads inside the cell in response to manipulation from the external magnetic field yields information on the physical environment inside the cell and internal media rheology: viscosity of the cytoplasm, rigidity of internal structure, and ease of particle flow.
A whole cell may also be magnetically manipulated by attaching fibronectin-coated magnetic beads to the extracellular matrix. Fibronectin is a protein that will bind to extracellular membrane proteins. This technique allows for measurements of cell stiffness and provides insights into the functioning of structural proteins. One such experimental setup, devised by Bonakdar and Schilling et al. (2015) for studying the structural protein plectin in mouse cells, measured stiffness as proportional to bead position in response to external magnetic manipulation.
Single-molecule experiments.
Magnetic tweezers as a single-molecule method is decidedly the most common use in recent years. Through the single-molecule method, magnetic tweezers provide a close look into the physical and mechanical properties of biological macromolecules. Similar to other single-molecule methods, such as optical tweezers, this method provides a way to isolate and manipulate an individual molecule free from the influences of surrounding molecules. Here, the magnetic bead is attached to a tethering surface by the molecule of interest. DNA or RNA may be tethered in either single-stranded or double-stranded form, or entire structural motifs can be tethered, such as DNA Holliday junctions, DNA hairpins, or entire nucleosomes and chromatin. By acting upon the magnetic bead with the magnetic field, different types of torsional force can be applied to study intra-DNA interactions, as well as interactions with topoisomerases or histones in chromosomes.
Single-complex studies.
Magnetic tweezers go beyond the capabilities of other single-molecule methods, however, in that interactions between and within complexes can also be observed. This has allowed recent advances in understanding more about DNA-binding proteins, receptor-ligand interactions, and restriction enzyme cleavage. A more recent application of magnetic tweezers is seen in single-complex studies. With the help of DNA as the tethering agent, an entire molecular complex may be attached between the bead and the tethering surface. In exactly the same way as with pulling a DNA hairpin apart by applying a force to the magnetic bead, an entire complex can be pulled apart and force required for the dissociation can be measured. This is also similar to the method of pulling apart receptor-ligand interactions with magnetic tweezers to measure dissociation force.
Comparison to other techniques.
This section compares the features of magnetic tweezers with those of the most important other single-molecule experimental methods: optical tweezers and atomic force microscopy. The magnetic interaction is highly specific to the superparamagnetic microbeads used. The magnetic field has practically no effect on the sample. Optical tweezers have the problem that the laser beam may also interact with other particles of the biological sample due to contrasts in the refractive index. In addition to that, the laser may cause photodamage and sample heating. In the case of atomic force microscopy, it may also be hard to discriminate the interaction of the tip with the studied molecule from other nonspecific interactions.
Thanks to the low trap stiffness, the range of forces accessible with magnetic tweezers is lower in comparison with the two other techniques. The possibility to exert torque with magnetic tweezers is not unique: optical tweezers may also offer this feature when operated with birefringent microbeads in combination with a circularly polarized laser beam.
Another advantage of magnetic tweezers is that it is easy to carry out in parallel many single molecule measurements.
An important drawback of magnetic tweezers is the low temporal and spatial resolution due to the data acquisition via video-microscopy. However, with the addition of a high-speed camera, the temporal and spatial resolution has been demonstrated to reach the Angstrom-level.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\overrightarrow{m}(\\overrightarrow{B}) "
},
{
"math_id": 1,
"text": " \\overrightarrow{B} "
},
{
"math_id": 2,
"text": " \\overrightarrow{m}(\\overrightarrow{B}) = \\frac{V \\chi \\overrightarrow{B}}{\\mu_0} "
},
{
"math_id": 3,
"text": "\\mu_0 "
},
{
"math_id": 4,
"text": "V "
},
{
"math_id": 5,
"text": "\\chi "
},
{
"math_id": 6,
"text": " \\chi = 3 \\frac{\\mu_r-1}{\\mu_r+2} "
},
{
"math_id": 7,
"text": " \\mu_r "
},
{
"math_id": 8,
"text": " \\overrightarrow{m}_{sat} "
},
{
"math_id": 9,
"text": " \\overrightarrow{F} "
},
{
"math_id": 10,
"text": "U =- \\frac{1}{2} \\overrightarrow{m}(\\overrightarrow{B}) \\cdot \\overrightarrow{B} "
},
{
"math_id": 11,
"text": " \\overrightarrow{F}= - \\overrightarrow{\\nabla}U= \\begin{cases} \\frac{V \\chi}{2 \\mu_0} \\overrightarrow{\\nabla}\\left| \\overrightarrow{B} \\right|^2 & \\qquad \\text{in a weak magnetic field} \\\\ \\frac{1}{2} \\overrightarrow{\\nabla} \\left( \\overrightarrow{m}_{sat} \\cdot \\overrightarrow{B} \\right) & \\qquad \\text{in a strong magnetic field} \\end{cases} "
},
{
"math_id": 12,
"text": " \\overrightarrow{\\Gamma} "
},
{
"math_id": 13,
"text": " \\overrightarrow{m} "
},
{
"math_id": 14,
"text": " \\overrightarrow{\\Gamma} = \\overrightarrow{m} \\times \\overrightarrow{B} "
},
{
"math_id": 15,
"text": " 10^3 \\mathrm{pNnm}"
},
{
"math_id": 16,
"text": "x"
},
{
"math_id": 17,
"text": "x "
},
{
"math_id": 18,
"text": "\\delta x "
},
{
"math_id": 19,
"text": "F_{\\chi} "
},
{
"math_id": 20,
"text": "F "
},
{
"math_id": 21,
"text": "l "
},
{
"math_id": 22,
"text": "F_{\\chi} = \\frac{F}{l} \\delta x "
},
{
"math_id": 23,
"text": " \\frac{1}{2} k_BT "
},
{
"math_id": 24,
"text": " \\langle E_p \\rangle= \\frac{1}{2} \\frac{F}{l} \\langle \\delta x^2 \\rangle = \\frac{1}{2} k_BT "
},
{
"math_id": 25,
"text": " F = \\frac{lk_BT}{ \\langle \\delta x^2 \\rangle} "
},
{
"math_id": 26,
"text": "P(\\omega)"
},
{
"math_id": 27,
"text": " X(\\omega) "
},
{
"math_id": 28,
"text": " \\Delta f "
},
{
"math_id": 29,
"text": " P(\\omega) = \\frac{\\left| X(\\omega)\\right|^2}{\\Delta f} "
},
{
"math_id": 30,
"text": " X(\\omega)"
},
{
"math_id": 31,
"text": "m "
},
{
"math_id": 32,
"text": "m \\frac{\\partial^2 x(t)}{\\partial t^2} = -6 \\pi R \\eta \\frac{\\partial x(t)}{\\partial t}-\\frac{F}{l}x(t)+f(t) "
},
{
"math_id": 33,
"text": " 6 \\pi R \\eta \\frac{\\partial x(t)}{\\partial t} "
},
{
"math_id": 34,
"text": " R "
},
{
"math_id": 35,
"text": " \\eta "
},
{
"math_id": 36,
"text": " \\frac{F}{l}x(t) "
},
{
"math_id": 37,
"text": " f(t) "
},
{
"math_id": 38,
"text": "m \\frac{\\partial^2 x(t)}{\\partial t^2}"
},
{
"math_id": 39,
"text": "\\left(\\mathrm{Re}<10^{-5}\\right)"
},
{
"math_id": 40,
"text": "\n\\begin{align}\n f(t) = & \\frac{1}{2\\pi} \\int F(\\omega) \\mathrm{e}^{i\\omega t} \\mathrm{d}t \\\\\n x(t) = & \\frac{1}{2\\pi} \\int X(\\omega) \\mathrm{e}^{i\\omega t} \\mathrm{d}t.\n\\end{align}\n"
},
{
"math_id": 41,
"text": " X(\\omega) = \\frac{F(\\omega)}{6\\pi i R \\eta \\omega + \\frac{F}{l}} "
},
{
"math_id": 42,
"text": " F(\\omega) "
},
{
"math_id": 43,
"text": " \\frac{\\left|F(\\omega)\\right|^2}{\\Delta f} = 4k_BT \\cdot 6 \\pi \\eta R "
},
{
"math_id": 44,
"text": " P(\\omega) = \\frac{24 \\pi k_BT\\eta R}{36\\pi^2 R^2 \\eta^2 \\omega^2 + \\left(\\frac{F}{l}\\right)^2} "
},
{
"math_id": 45,
"text": " F "
},
{
"math_id": 46,
"text": " F=6\\pi \\eta R v "
},
{
"math_id": 47,
"text": " v "
}
] | https://en.wikipedia.org/wiki?curid=9368062 |
9368110 | Logarithmically concave measure | In mathematics, a Borel measure "μ" on "n"-dimensional Euclidean space formula_0 is called logarithmically concave (or log-concave for short) if, for any compact subsets "A" and "B" of formula_0 and 0 < "λ" < 1, one has
formula_1
where "λ" "A" + (1 − "λ") "B" denotes the Minkowski sum of "λ" "A" and (1 − "λ") "B".
Examples.
The Brunn–Minkowski inequality asserts that the Lebesgue measure is log-concave. The restriction of the Lebesgue measure to any convex set is also log-concave.
By a theorem of Borell, a probability measure on formula_0 is log-concave if and only if it has a density with respect to the Lebesgue measure on some affine hyperplane, and this density is a logarithmically concave function. Thus, any Gaussian measure is log-concave.
The Prékopa–Leindler inequality shows that a convolution of log-concave measures is log-concave.
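The defining inequality can be spot-checked numerically for the one-dimensional standard Gaussian measure and intervals; the endpoints and the value of "λ" below are arbitrary test values, and SciPy is assumed to be available for the normal cumulative distribution function.
<syntaxhighlight lang="python">
from scipy.stats import norm

def gaussian_measure(interval):
    a, b = interval
    return norm.cdf(b) - norm.cdf(a)

A, B, lam = (-1.0, 0.5), (0.2, 2.0), 0.3   # arbitrary compact intervals and 0 < lam < 1
# The Minkowski sum lam*A + (1 - lam)*B of two intervals is again an interval:
mix = (lam * A[0] + (1 - lam) * B[0], lam * A[1] + (1 - lam) * B[1])

lhs = gaussian_measure(mix)
rhs = gaussian_measure(A)**lam * gaussian_measure(B)**(1 - lam)
print(lhs >= rhs, lhs, rhs)                # True ..., consistent with log-concavity
</syntaxhighlight>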
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{R}^{n}"
},
{
"math_id": 1,
"text": " \\mu(\\lambda A + (1-\\lambda) B) \\geq \\mu(A)^\\lambda \\mu(B)^{1-\\lambda}, "
}
] | https://en.wikipedia.org/wiki?curid=9368110 |
9368946 | Neutral axis | The neutral axis is an axis in the cross section of a beam (a member resisting bending) or shaft along which there are no longitudinal stresses or strains.
Theory.
If the section is symmetric, isotropic and is not curved before a bend occurs, then the neutral axis is at the geometric centroid of a beam or shaft. All fibers on one side of the neutral axis are in a state of tension, while those on the opposite side are in compression.
Since the beam is undergoing uniform bending, a plane section of the beam remains plane. That is:
formula_0
where formula_1 is the shear strain and formula_2 is the shear stress.
There is a compressive (negative) strain at the top of the beam, and a tensile (positive) strain at the bottom of the beam. Therefore by the Intermediate Value Theorem, there must be some point in between the top and the bottom that has no strain, since the strain in a beam is a continuous function.
Let L be the original length of the beam (span)<br>
ε(y) is the strain as a function of coordinate on the face of the beam.<br>
σ(y) is the stress as a function of coordinate on the face of the beam.<br>
ρ is the radius of curvature of the beam at its neutral axis.<br>
θ is the bend angle
Since the bending is uniform and pure, the strain at a distance y from the neutral axis, the surface which has the inherent property of having no strain, is:
formula_3
Therefore the longitudinal normal strain formula_4 varies linearly with the distance y from the neutral surface. Denoting formula_5 as the maximum strain in the beam (at a distance c from the neutral axis), it becomes clear that:
formula_6
Therefore, we can solve for ρ, and find that:
formula_7
Substituting this back into the original expression, we find that:
formula_8
Due to Hooke's Law, the stress in the beam is proportional to the strain by E, the modulus of elasticity:
formula_9
Therefore:
formula_10
formula_11
From statics, a moment (i.e. pure bending) consists of equal and opposite forces, so the net axial force across the cross section must be zero:
formula_12
Therefore:
formula_13
The factor −σm/c is a nonzero constant, so it can be taken outside the integral, and the condition reduces to:
formula_14
Hence the first moment of area of the cross section about its neutral axis must be zero, which means the neutral axis passes through the centroid of the cross section.
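For illustration (the dimensions below are invented, not taken from the article), the neutral axis of a composite T-section can be located by computing the centroid from the first moments of area of its parts:

```python
# Sketch: locate the neutral axis of a T-section as the centroid height ybar,
# i.e. the height at which the first moment of area of the section vanishes.
# Flange: 100 mm x 20 mm on top of a 20 mm x 80 mm web (assumed dimensions).
parts = [
    # (area in mm^2, centroid height above the base in mm)
    (100 * 20, 80 + 20 / 2),   # flange
    (20 * 80, 80 / 2),         # web
]

A_total = sum(a for a, _ in parts)
ybar = sum(a * y for a, y in parts) / A_total   # centroid = neutral axis height
print(f"neutral axis at {ybar:.1f} mm above the base")

# Check: the first moment of area about the neutral axis is (numerically) zero.
Q = sum(a * (y - ybar) for a, y in parts)
print(f"first moment about the neutral axis: {Q:.2e} mm^3")
```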
Note that the neutral axis does not change in length under bending. This may seem counterintuitive at first, but it is because there are no bending stresses at the neutral axis. However, there are shear stresses (τ) at the neutral axis, which are zero in the middle of the span but increase towards the supports, as given by Jourawski's formula:
formula_15
where<br>
T = shear force<br>
Q = first moment of area of the section above/below the neutral axis<br>
w = width of the beam<br>
I = second moment of area of the beam
This analysis applies to so-called long beams, i.e. beams whose length is much larger than their other two dimensions.
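As an illustration of Jourawski's formula above (all numbers are assumed for the sketch), the shear stress over a rectangular cross section varies parabolically, peaking at the neutral axis at 1.5 times the average shear stress:

```python
# Sketch: shear stress tau(y) = T*Q/(I*w) over a rectangular section of width w
# and depth d, with made-up values for the shear force and dimensions.
import numpy as np

T = 10e3            # shear force, N (assumed)
w = 0.10            # width, m (assumed)
d = 0.20            # depth, m (assumed)
I = w * d**3 / 12   # second moment of area of the rectangle

y = np.linspace(-d / 2, d / 2, 5)
# First moment of the area above height y about the neutral axis:
# Q(y) = w/2 * (d**2/4 - y**2)
Q = w / 2 * (d**2 / 4 - y**2)
tau = T * Q / (I * w)
for yi, ti in zip(y, tau):
    print(f"y = {yi:+.3f} m   tau = {ti / 1e6:.3f} MPa")
# The peak at y = 0 equals 1.5 * T / (w * d), the classic 3/2 factor.
```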
Arches.
Arches also have a neutral axis if they are made of stone; stone is an inelastic medium with little strength in tension. Therefore, as the loading on the arch changes, the neutral axis moves; if the neutral axis leaves the stonework, the arch will fail.
This theory (also known as the thrust line method) was proposed by Thomas Young and developed by Isambard Kingdom Brunel.
Practical applications.
Building trades workers should have at least a basic understanding of the concept of neutral axis, to avoid cutting openings to route wires, pipes, or ducts in locations which may dangerously compromise the strength of structural elements of a building. Building codes usually specify rules and guidelines which may be followed for routine work, but special situations and designs may need the services of a structural engineer to assure safety.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\gamma_{xy}=\\gamma_{zx}=\\tau_{xy}=\\tau_{xz}=0"
},
{
"math_id": 1,
"text": "\\gamma"
},
{
"math_id": 2,
"text": "\\tau"
},
{
"math_id": 3,
"text": "\\epsilon_x(y)=\\frac{L(y)-L}{L} = \\frac{\\theta\\,(\\rho\\, - y) - \\theta \\rho \\,}{\\theta \\rho \\,} = \\frac{-y\\theta}{\\rho \\theta} = \\frac{-y}{\\rho}"
},
{
"math_id": 4,
"text": "\\epsilon_x"
},
{
"math_id": 5,
"text": "\\epsilon_m"
},
{
"math_id": 6,
"text": "\\epsilon_m = \\frac{c}{\\rho}"
},
{
"math_id": 7,
"text": "\\rho = \\frac{c}{\\epsilon_m}"
},
{
"math_id": 8,
"text": "\\epsilon_x(y) = \\frac {-\\epsilon_my}{c}"
},
{
"math_id": 9,
"text": "\\sigma_x = E\\epsilon_x\\,"
},
{
"math_id": 10,
"text": "E\\epsilon_x(y) = \\frac {-E\\epsilon_my}{c}"
},
{
"math_id": 11,
"text": "\\sigma_x(y) = \\frac {-\\sigma_my}{c}"
},
{
"math_id": 12,
"text": "\\int \\sigma_x dA = 0 "
},
{
"math_id": 13,
"text": "\\int \\frac {-\\sigma_my}{c} dA = 0 "
},
{
"math_id": 14,
"text": "\\int y dA = 0 "
},
{
"math_id": 15,
"text": "\\tau = \\frac{T Q}{w I}"
}
] | https://en.wikipedia.org/wiki?curid=9368946 |
9369234 | Eric Bach | American computer scientist
Eric Bach is an American computer scientist who has made contributions to computational number theory.
Bach completed his undergraduate studies at the University of Michigan, Ann Arbor, and got his Ph.D. in computer science from the University of California, Berkeley, in 1984 under the supervision of Manuel Blum. He is currently a professor at the Computer Science Department, University of Wisconsin–Madison.
Among other work, he gave explicit bounds for the Chebotarev density theorem, which imply that if one assumes the generalized Riemann hypothesis then formula_0 is generated by its elements smaller than 2(log "n")^2. This result shows that the generalized Riemann hypothesis implies tight bounds for the necessary run-time of the deterministic version of the Miller–Rabin primality test. Bach also did some of the first work on pinning down the actual expected run-time of the Pollard rho method where previous work relied on heuristic estimates and empirical data. He is the namesake of Bach's algorithm for generating random factored numbers.
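A minimal sketch of how this bound is used in practice (illustrative only, not taken from Bach's papers): assuming the generalized Riemann hypothesis, testing every base below 2(ln "n")^2 turns Miller–Rabin into a deterministic primality test.

```python
# Sketch: GRH-conditional deterministic Miller-Rabin, testing all bases
# a < 2*(ln n)^2.  Correct only if the generalized Riemann hypothesis holds.
from math import log

def is_strong_probable_prime(n, a):
    """Miller-Rabin test of odd n > 2 to base a."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def is_prime_grh(n):
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    bound = min(n - 1, int(2 * log(n) ** 2) + 1)
    return all(is_strong_probable_prime(n, a) for a in range(2, bound + 1))

print([p for p in range(2, 60) if is_prime_grh(p)])
```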
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left(\\mathbb{Z}/n\\mathbb{Z}\\right)^*"
}
] | https://en.wikipedia.org/wiki?curid=9369234 |
936932 | Soul theorem | Complete manifolds of non-negative sectional curvature largely reduce to the compact case
In mathematics, the soul theorem is a theorem of Riemannian geometry that largely reduces the study of complete manifolds of non-negative sectional curvature to that of the compact case. Jeff Cheeger and Detlef Gromoll proved the theorem in 1972 by generalizing a 1969 result of Gromoll and Wolfgang Meyer. The related soul conjecture, formulated by Cheeger and Gromoll at that time, was proved twenty years later by Grigori Perelman.
Soul theorem.
Cheeger and Gromoll's soul theorem states:
If ("M", "g") is a complete connected Riemannian manifold with nonnegative sectional curvature, then there exists a closed totally convex, totally geodesic embedded submanifold whose normal bundle is diffeomorphic to "M".
Such a submanifold is called a soul of ("M", "g"). By the Gauss equation and total geodesicity, the induced Riemannian metric on the soul automatically has nonnegative sectional curvature. Gromoll and Meyer had earlier studied the case of positive sectional curvature, where they showed that a soul is given by a single point, and hence that M is diffeomorphic to Euclidean space.
Very simple examples, as below, show that the soul is not uniquely determined by ("M", "g") in general. However, Vladimir Sharafutdinov constructed a 1-Lipschitz retraction from M to any of its souls, thereby showing that any two souls are isometric. This mapping is known as the Sharafutdinov's retraction.
Cheeger and Gromoll also posed a converse question of whether there is a complete Riemannian metric of nonnegative sectional curvature on the total space of any vector bundle over closed manifolds of positive sectional curvature.
Examples.
{("x", "y", "z") : "z"
"x"2 + "y"2}, with the metric "g" being the ordinary Euclidean distance coming from the embedding of the paraboloid in Euclidean space R3. Here the sectional curvature is positive everywhere, though not constant. The origin (0, 0, 0) is a soul of "M". Not every point "x" of "M" is a soul of "M", since there may be geodesic loops based at "x", in which case formula_0 wouldn't be totally convex.
{("x", "y", "z") : "x"2 + "y"2
1}, again with the induced Euclidean metric. The sectional curvature is 0 everywhere. Any "horizontal" circle {("x", "y", "z") : "x"2 + "y"2
1} with fixed "z" is a soul of "M". Non-horizontal cross sections of the cylinder are not souls since they are neither totally convex nor totally geodesic.
Soul conjecture.
As mentioned above, Gromoll and Meyer proved that if g has positive sectional curvature then the soul is a point. Cheeger and Gromoll conjectured that this would hold even if g had nonnegative sectional curvature, with positivity only required of all sectional curvatures at a single point. This soul conjecture was proved by Grigori Perelman, who established the more powerful fact that Sharafutdinov's retraction is a Riemannian submersion, and even a submetry.
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\{x\\}"
}
] | https://en.wikipedia.org/wiki?curid=936932 |
9373204 | General set theory | System of mathematical set theory
General set theory (GST) is George Boolos's (1998) name for a fragment of the axiomatic set theory Z. GST is sufficient for all mathematics not requiring infinite sets, and is the weakest known set theory whose theorems include the Peano axioms.
Ontology.
The ontology of GST is identical to that of ZFC, and hence is thoroughly canonical. GST features a single primitive ontological notion, that of set, and a single ontological assumption, namely that all individuals in the universe of discourse (hence all mathematical objects) are sets. There is a single primitive binary relation, set membership; that set "a" is a member of set "b" is written "a ∈ b" (usually read ""a" is an element of "b"").
Axioms.
The symbolic axioms below are from Boolos (1998: 196), and govern how sets behave and interact.
As with Z, the background logic for GST is first order logic with identity. Indeed, GST is the fragment of Z obtained by omitting the axioms Union, Power Set, Elementary Sets (essentially Pairing) and Infinity and then taking a theorem of Z, Adjunction, as an axiom.
The natural language versions of the axioms are intended to aid the intuition.
1) Axiom of Extensionality: The sets "x" and "y" are the same set if they have the same members.
formula_0
The converse of this axiom follows from the substitution property of equality.
2) Axiom Schema of Specification (or "Separation" or "Restricted Comprehension"): If "z" is a set and formula_1 is any property which may be satisfied by all, some, or no elements of "z", then there exists a subset "y" of "z" containing just those elements "x" in "z" which satisfy the property formula_1. The restriction to "z" is necessary to avoid Russell's paradox and its variants. More formally, let formula_2 be any formula in the language of GST in which "x" may occur freely and "y" does not. Then all instances of the following schema are axioms:
formula_3
3) Axiom of Adjunction: If "x" and "y" are sets, then there exists a set "w", the "adjunction" of "x" and "y", whose members are just "y" and the members of "x".
formula_4
"Adjunction" refers to an elementary operation on two sets, and has no bearing on the use of that term elsewhere in mathematics, including in category theory.
ST is GST with the axiom schema of specification replaced by the axiom of empty set.
Discussion.
Metamathematics.
Note that Specification is an axiom schema. The theory given by these axioms is not finitely axiomatizable. Montague (1961) showed that ZFC is not finitely axiomatizable, and his argument carries over to GST. Hence any axiomatization of GST must include at least one axiom schema.
With its simple axioms, GST is also immune to the three great antinomies of naïve set theory: Russell's, Burali-Forti's, and Cantor's.
GST is interpretable in relation algebra because no part of any GST axiom lies in the scope of more than three quantifiers. This is the necessary and sufficient condition given in Tarski and Givant (1987).
Peano arithmetic.
Setting φ("x") in "Separation" to "x"≠"x", and assuming that the domain is nonempty, assures the existence of the empty set. "Adjunction" implies that if "x" is a set, then so is formula_5. Given "Adjunction", the usual construction of the successor ordinals from the empty set can proceed, one in which the natural numbers are defined as formula_6. See Peano's axioms.
GST is mutually interpretable with Peano arithmetic (thus it has the same proof-theoretic strength as PA).
The most remarkable fact about ST (and hence GST) is that these tiny fragments of set theory give rise to such rich metamathematics. While ST is a small fragment of the well-known canonical set theories ZFC and NBG, ST interprets Robinson arithmetic (Q), so that ST inherits the nontrivial metamathematics of Q. For example, ST is essentially undecidable because Q is, and every consistent theory whose theorems include the ST axioms is also essentially undecidable. This includes GST and every axiomatic set theory worth thinking about, assuming these are consistent. In fact, the undecidability of ST implies the undecidability of first-order logic with a single binary predicate letter.
Q is also incomplete in the sense of Gödel's incompleteness theorem. Any axiomatizable theory, such as ST and GST, whose theorems include the Q axioms is likewise incomplete. Moreover, the consistency of GST cannot be proved within GST itself, unless GST is in fact inconsistent.
Infinite sets.
Given any model "M" of ZFC, the collection of hereditarily finite sets in "M" will satisfy the GST axioms. Therefore, GST cannot prove the existence of even a countable infinite set, that is, of a set whose cardinality is formula_7. Even if GST did afford a countably infinite set, GST could not prove the existence of a set whose cardinality is formula_8, because GST lacks the axiom of power set. Hence GST cannot ground analysis and geometry, and is too weak to serve as a foundation for mathematics.
History.
Boolos was interested in GST only as a fragment of Z that is just powerful enough to interpret Peano arithmetic. He never lingered over GST, only mentioning it briefly in several papers discussing the systems of Frege's "Grundlagen" and "Grundgesetze", and how they could be modified to eliminate Russell's paradox. The system Aξ'[δ0] in Tarski and Givant (1987: 223) is essentially GST with an axiom schema of induction replacing Specification, and with the existence of an empty set explicitly assumed.
GST is called STZ in Burgess (2005), p. 223. Burgess's theory ST is GST with Empty Set replacing the axiom schema of specification. That the letters "ST" also appear in "GST" is a coincidence.
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\forall x \\forall y [\\forall z [z \\in x \\leftrightarrow z \\in y] \\rightarrow x = y]."
},
{
"math_id": 1,
"text": "\\phi"
},
{
"math_id": 2,
"text": "\\phi(x)"
},
{
"math_id": 3,
"text": "\\forall z \\exists y \\forall x [x \\in y \\leftrightarrow ( x \\in z \\land \\phi(x))]."
},
{
"math_id": 4,
"text": "\\forall x \\forall y \\exist w \\forall z [ z \\in w \\leftrightarrow (z \\in x \\lor z=y)]."
},
{
"math_id": 5,
"text": "S(x) = x \\cup \\{x\\}"
},
{
"math_id": 6,
"text": "\\varnothing,\\,S(\\varnothing),\\,S(S(\\varnothing)),\\,\\ldots,"
},
{
"math_id": 7,
"text": "\\aleph_0"
},
{
"math_id": 8,
"text": "\\aleph_1"
}
] | https://en.wikipedia.org/wiki?curid=9373204 |
937408 | Picture plane | In painting, photography, graphical perspective and descriptive geometry, a picture plane is an image plane located between the "eye point" (or "oculus") and the object being viewed and is usually coextensive to the material surface of the work. It is ordinarily a vertical plane perpendicular to the sightline to the object of interest.
Features.
In the technique of graphical perspective the picture plane has several features:
Given are an eye point O (from "oculus"), a horizontal plane of reference called the "ground plane" γ and a picture plane π... The line of intersection of π and γ is called the "ground line" and denoted "GR". ... the orthogonal projection of O upon π is called the "principal vanishing point P"...The line through "P" parallel to the ground line is called the "horizon" HZ
The horizon frequently features vanishing points of lines appearing parallel in the foreground.
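A minimal sketch (not from the source) of central projection onto a picture plane: with the eye point at the origin looking along the z-axis and the picture plane at z = d, a point (x, y, z) maps to (dx/z, dy/z). The example projects two parallel receding lines, whose images converge towards a vanishing point on the horizon.

```python
# Sketch: central (perspective) projection onto the picture plane z = d, with
# the eye point at the origin.  All coordinates are illustrative.
def project(point, d=1.0):
    x, y, z = point
    return (d * x / z, d * y / z)

# Two parallel horizontal "rails" receding from the viewer: as z grows, their
# projected points approach the same vanishing point on the horizon.
for z in (2.0, 4.0, 8.0, 100.0):
    left = project((-1.0, -1.0, z))
    right = project((1.0, -1.0, z))
    print(z, left, right)
```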
Position.
The orientation of the picture plane is always perpendicular to the axis that comes straight out of your eyes. For example, if you are looking at a building in front of you and your sightline is entirely horizontal, then the picture plane is perpendicular to the ground and to the axis of your sight.
If you are looking up or down, the picture plane remains perpendicular to your sightline but is no longer at a 90-degree angle to the ground. When this happens, a third vanishing point will appear in most cases, depending on what you are seeing (or drawing).
Cut of an eject.
G. B. Halsted included the picture plane in his book "Synthetic Projective Geometry":
"To 'project' from a fixed point "M" (the 'projection vertex') a figure, the 'original', composed of points "B, C, D" etc. and straights "b, c, d" etc., is to construct the 'projecting straights' formula_0 and the 'projecting planes' formula_1 Thus is obtained a new figure composed of straights and planes, all on M, and called an 'eject' of the original."
"To 'cut' by a fixed plane μ (the picture-plane) a figure, the 'subject' made up of planes β, γ, δ, etc., and straights "b, c, d", etc., is to construct the meets formula_2 and passes formula_3 Thus is obtained a new figure composed of straights and points, all on μ, and called a 'cut' of the subject. If the subject is an eject of an original, the cut of the subject is an 'image' of the original.
Integrity of the picture plane.
A well-known phrase has accompanied many discussions of painting during the period of modernism. Coined by the influential art critic Clement Greenberg in his essay called "Modernist Painting", the phrase "integrity of the picture plane" has come to denote how the flat surface of the physical painting functions in older as opposed to more recent works. That phrase is found in the following sentence in his essay:
"The Old Masters had sensed that it was necessary to preserve what is called the integrity of the picture plane: that is, to signify the enduring presence of flatness underneath and above the most vivid illusion of three-dimensional space."
Greenberg seems to be referring to the way painting relates to the picture plane in both the modern period and the "Old Master" period.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\overline{MB},\\ \\overline{MC},\\ \\overline{MD},"
},
{
"math_id": 1,
"text": "\\overline{Mb},\\ \\overline{Mc},\\ \\overline{Md}."
},
{
"math_id": 2,
"text": "\\overline{\\mu \\beta},\\ \\overline{\\mu \\gamma},\\ \\overline{\\mu \\delta}"
},
{
"math_id": 3,
"text": "\\dot{\\mu b},\\ \\dot{\\mu c},\\ \\dot{\\mu d}."
}
] | https://en.wikipedia.org/wiki?curid=937408 |
9374505 | Schild equation | A straight line graph fitted to hypothetical points. The Schild plot of a reversible competitive antagonist should be a straight line, with linear gradient, whose y-intercept relates to the strength of the antagonist.
In pharmacology, Schild regression analysis, based upon the Schild equation, both named for Heinz Otto Schild, are tools for studying the effects of agonists and antagonists on the response caused by the receptor or on ligand-receptor binding.
Concept.
Dose-response curves can be constructed to describe response or ligand-receptor complex formation as a function of the ligand concentration. Antagonists make it harder to form these complexes by inhibiting interactions of the ligand with its receptor. This is seen as a change in the dose response curve: typically a rightward shift or a lowered maximum. A reversible competitive antagonist should cause a rightward shift in the dose response curve, such that the new curve is parallel to the old one and the maximum is unchanged. This is because reversible competitive antagonists are surmountable antagonists. The magnitude of the rightward shift can be quantified with the dose ratio, r. The dose ratio r is the dose of agonist required for half-maximal response with the antagonist formula_0 present divided by the dose required for half-maximal response without antagonist ("control"). In other words, it is the ratio of the EC50s of the inhibited and un-inhibited curves. Thus, r represents both the strength of an antagonist and the concentration of the antagonist that was applied. An equation derived from the Gaddum equation can be used to relate r to formula_1, as follows:
formula_2
where
A Schild plot is a double logarithmic plot, typically with formula_4 as the ordinate and formula_5 as the abscissa. This is done by taking the base-10 logarithm of both sides of the previous equation after subtracting 1:
formula_6
This equation is linear with respect to formula_5, allowing for easy construction of graphs without computations. This was particularly valuable before the use of computers in pharmacology became widespread. The y-intercept of the equation represents the negative logarithm of formula_3 and can be used to quantify the strength of the antagonist.
These experiments must be carried out over a very wide concentration range (hence the logarithmic scale), as the underlying mechanisms can differ across that range, for example at high concentrations of drug.
The fitting of the Schild plot to observed data points can be done with regression analysis.
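An illustrative sketch of the regression step (the antagonist concentrations and dose ratios below are invented): ordinary least squares on log10(r − 1) against log10[B] should return a slope near 1 for simple competitive antagonism, with a y-intercept equal to the negative logarithm of KB.

```python
# Sketch: fit a Schild plot by least squares.  x = log10[B], y = log10(r - 1);
# the slope should be ~1 and the y-intercept estimates -log10(KB), i.e. pKB.
import numpy as np

B = np.array([1e-8, 3e-8, 1e-7, 3e-7, 1e-6])   # antagonist concentrations, M (assumed)
r = np.array([2.1, 4.1, 10.5, 31.0, 98.0])     # measured dose ratios (assumed)

x = np.log10(B)
y = np.log10(r - 1)

slope, intercept = np.polyfit(x, y, 1)
pKB = intercept                    # since log10(r - 1) = log10[B] - log10(KB)
KB = 10 ** (-pKB)
print(f"slope = {slope:.2f}, pKB = {pKB:.2f}, KB = {KB:.1e} M")
```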
Schild regression for ligand binding.
Although most experiments use cellular response as a measure of the effect, the effect is, in essence, a result of the binding kinetics; so, in order to illustrate the mechanism, ligand binding is used. A ligand A will bind to a receptor R according to an equilibrium constant :
formula_7
Although the equilibrium constant is more meaningful, texts often mention its inverse, the affinity constant (Kaff = k1/k−1): better binding means a higher binding affinity.
The equation for simple ligand binding to a single homogeneous receptor is
formula_8
This is the Hill-Langmuir equation, which is practically the Hill equation described for the agonist binding. In chemistry, this relationship is called the Langmuir equation, which describes the adsorption of molecules onto sites of a surface (see adsorption).
formula_9 is the total number of binding sites, and when the equation is plotted it is the horizontal asymptote to which the plot tends; more binding sites will be occupied as the ligand concentration increases, but there will never be 100% occupancy. The binding affinity is the concentration needed to occupy 50% of the sites; the lower this value is the easier it is for the ligand to occupy the binding site.
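A small sketch of the binding equation (the values are assumed): fractional occupancy rises with ligand concentration and is exactly one half when [A] equals the dissociation constant.

```python
# Sketch: fractional occupancy [AR]/R_t = [A] / ([A] + K_d) from the
# Hill-Langmuir (Langmuir) binding equation, with illustrative values.
import numpy as np

R_t = 1.0                     # total binding sites (normalised)
K_d = 1e-7                    # dissociation constant, M (assumed)
A = np.logspace(-10, -4, 7)   # ligand concentrations, M

AR = R_t * A / (A + K_d)
for a, occ in zip(A, AR):
    print(f"[A] = {a:.0e} M   occupancy = {occ:.3f}")
# occupancy = 0.5 exactly when [A] == K_d
```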
The binding of the ligand to the receptor at equilibrium follows the same kinetics as an enzyme at steady-state (Michaelis–Menten equation) without the conversion of the bound substrate to product.
Agonists and antagonists can have various effects on ligand binding. They can change the maximum number of binding sites, the affinity of the ligand to the receptor, both effects together, or even more bizarre effects when the system being studied is more intact, such as in tissue samples. (Tissue absorption, desensitization, and other non-equilibrium steady-state effects can be a problem.)
A surmountable drug changes the binding affinity:
A nonsurmountable drug changes the maximum binding:
The Schild regression also can reveal if there are more than one type of receptor and it can show if the experiment was done wrong as the system has not reached equilibrium.
Radioligand binding assays.
The first radio-receptor assay (RRA) was done in 1970 by Lefkowitz et al., using a radiolabeled hormone to determine the binding affinity for its receptor.
A radio-receptor assay requires the separation of the bound from the free ligand. This is done by filtration, centrifugation or dialysis.
A method that does not require separation is the scintillation proximity assay, which relies on the fact that β-rays from 3H travel extremely short distances. The receptors are bound to beads coated with a polyhydroxy scintillator, which allows only the bound ligands to be detected.
Today, the fluorescence method is preferred to radioactive materials due to a much lower cost, lower hazard, and the possibility of multiplexing the reactions in a high-throughput manner. One problem is that fluorescent-labeled ligands have to bear a bulky fluorophore that may hinder the ligand binding. Therefore, the fluorophore used, the length of the linker, and its position must be carefully selected.
One example uses FRET, where the ligand's fluorophore transfers its energy to the fluorophore of an antibody raised against the receptor.
Other detection methods such as surface plasmon resonance do not even require fluorophores.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ce{B}"
},
{
"math_id": 1,
"text": "[\\ce{B}]"
},
{
"math_id": 2,
"text": "r=1+\\frac{[\\ce{B}]}{K_B}"
},
{
"math_id": 3,
"text": "K_B"
},
{
"math_id": 4,
"text": "\\log_{10}(r-1)"
},
{
"math_id": 5,
"text": "\\log_{10}[\\ce{B}]"
},
{
"math_id": 6,
"text": "\\log_{10}(r-1)=\\log_{10}[\\ce{B}]-\\log_{10}(K_B)"
},
{
"math_id": 7,
"text": "K_d = \\frac{ k_{-1} }{k_1}"
},
{
"math_id": 8,
"text": "[AR]=\\frac{[R]_t \\, [A]}{[A]+K_d}"
},
{
"math_id": 9,
"text": "[R]_t"
},
{
"math_id": 10,
"text": "K_d'= K_d \\frac{1+[B]}{K_b}"
},
{
"math_id": 11,
"text": "K_d'= K_d \\frac{K_B + [B]}{K_B + \\frac{[B]}{\\alpha} }"
},
{
"math_id": 12,
"text": "[R]'_t = \\frac{[R]_t}{1 + \\frac{[B]}{K_b} }"
}
] | https://en.wikipedia.org/wiki?curid=9374505 |
937535 | Gravity anomaly | Difference between ideal and observed gravitational acceleration at a location
The gravity anomaly at a location on the Earth's surface is the difference between the observed value of gravity and the value predicted by a theoretical model. If the Earth were an ideal oblate spheroid of uniform density, then the gravity measured at every point on its surface would be given precisely by a simple algebraic expression. However, the Earth has a rugged surface and non-uniform composition, which distorts its gravitational field. The theoretical value of gravity can be corrected for altitude and the effects of nearby terrain, but it usually still differs slightly from the measured value. This gravity anomaly can reveal the presence of subsurface structures of unusual density. For example, a mass of dense ore below the surface will give a positive anomaly due to the increased gravitational attraction of the ore.
Different theoretical models will predict different values of gravity, and so a gravity anomaly is always specified with reference to a particular model. The Bouguer, free-air, and isostatic gravity anomalies are each based on different theoretical corrections to the value of gravity.
A gravity survey is conducted by measuring the gravity anomaly at many locations in a region of interest, using a portable instrument called a gravimeter. Careful analysis of the gravity data allows geologists to make inferences about the subsurface geology.
Definition.
The gravity anomaly is the difference between the observed acceleration of an object in free fall (gravity) near a planet's surface, and the corresponding value predicted by a model of the planet's gravitational field. Typically the model is based on simplifying assumptions, such as that, under its self-gravitation and rotational motion, the planet assumes the figure of an ellipsoid of revolution. Gravity on the surface of this reference ellipsoid is then given by a simple formula which only contains the latitude. For Earth, the reference ellipsoid is the International Reference Ellipsoid, and the value of gravity predicted for points on the ellipsoid is the "normal gravity", "g"n.
Gravity anomalies were first discovered in 1672, when the French astronomer Jean Richer established an observatory on the island of Cayenne. Richer was equipped with a highly precise pendulum clock which had been carefully calibrated at Paris before his departure. However, he found that the clock ran too slowly in Cayenne, compared with the apparent motion of the stars. Fifteen years later, Isaac Newton used his newly formulated universal theory of gravitation to explain the anomaly. Newton showed that the measured value of gravity was affected by the rotation of the Earth, which caused the Earth's equator to bulge out slightly relative to its poles. Cayenne, being nearer the equator than Paris, would be both further from the center of Earth (reducing the Earth's bulk gravitational attraction slightly) and subject to stronger centrifugal acceleration from the Earth's rotation. Both these effects reduce the value of gravity, explaining why Richer's pendulum clock, which depended on the value of gravity, ran too slowly. Correcting for these effects removed most of this anomaly.
To understand the nature of the gravity anomaly due to the subsurface, a number of corrections must be made to the measured gravity value. Different theoretical models will include different corrections to the value of gravity, and so a gravity anomaly is always specified with reference to a particular model. The Bouguer, free-air, and isostatic gravity anomalies are each based on different theoretical corrections to the value of gravity.
The model field and corrections.
The starting point for the model field is the International Reference Ellipsoid, which gives the normal gravity "g"n for every point on the Earth's idealized shape. Further refinements of the model field are usually expressed as corrections added to the measured gravity or (equivalently) subtracted from the normal gravity. At a minimum, these include the tidal correction △"g"tid, the terrain correction △"g"T, and the free air correction △"g"FA. Other corrections are added for various gravitational models. The difference between the corrected measured gravity and the normal gravity is the gravity anomaly.
The normal gravity.
The normal gravity accounts for the bulk gravitation of the entire Earth, corrected for its idealized shape and rotation. It is given by the formula:
formula_0
where formula_1 = ; formula_2 = ; and formula_3 = . This is accurate to 0.1 mgal at any latitude formula_4. When greater precision is needed, a more elaborate formula gives the normal gravity with an accuracy of 0.0001 mgal.
The tidal correction.
The Sun and Moon create time-dependent tidal forces that affect the measured value of gravity by about 0.3 mgal. Two-thirds of this is from the Moon. This effect is very well understood and can be calculated precisely for a given time and location using astrophysical data and formulas, to yield the tidal correction △"g"tid.
The terrain correction.
The local topography of the land surface affects the gravity measurement. Both terrain higher than the measurement point and valleys lower than the measurement point reduce the measured value of gravity. This is taken into account by the terrain correction △"g"T. The terrain correction is calculated from knowledge of the local topography and estimates of the density of the rock making up the high ground. In effect, the terrain correction levels the terrain around the measurement point.
The terrain correction must be calculated for every point at which gravity is measured, taking into account every hill or valley whose difference in elevation from the measurement point is greater than about 5% of its distance from the measurement point. This is tedious and time-consuming but necessary for obtaining a meaningful gravity anomaly.
The free-air correction.
The next correction is the free-air correction. This takes into account the fact that the measurement is usually at a different elevation than the reference ellipsoid at the measurement latitude and longitude. For a measurement point above the reference ellipsoid, this means that the gravitational attraction of the bulk mass of the earth is slightly reduced. The free-air correction is simply 0.3086 mgal m−1 times the elevation above the reference ellipsoid.
The remaining gravity anomaly at this point in the reduction is called the "free-air anomaly". That is, the free-air anomaly is:
formula_5
Bouguer plate correction.
The free-air anomaly does not take into account the layer of material (after terrain leveling) outside the reference ellipsoid. The gravitational attraction of this layer or plate is taken into account by the Bouguer plate correction of −2π"G"ρ"h", the attraction of an infinite slab of density ρ and thickness "h". The density of crustal rock, ρ, is usually taken to be 2670 kg m−3, so the Bouguer plate correction is usually taken as −0.1119 mgal m−1 "h". Here "h" is the elevation above the reference ellipsoid.
The remaining gravity anomaly at this point in the reduction is called the "Bouguer anomaly". That is, the Bouguer anomaly is:
formula_6
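A sketch of the arithmetic using the correction factors quoted above; the station elevation and the gravity values are invented, and the tidal and terrain terms are omitted for brevity.

```python
# Sketch: free-air and Bouguer anomalies for one station (made-up numbers).
# Free-air correction: +0.3086 mgal per metre of elevation.
# Bouguer plate correction: -0.1119 mgal per metre for rho = 2670 kg/m^3.
h = 1500.0          # elevation above the reference ellipsoid, m (assumed)
g_meas = 979123.4   # measured gravity, mgal (assumed)
g_normal = 979567.8 # normal gravity at the station latitude, mgal (assumed)

free_air_corr = 0.3086 * h
bouguer_plate_corr = -0.1119 * h

free_air_anomaly = g_meas + free_air_corr - g_normal
bouguer_anomaly = g_meas + free_air_corr + bouguer_plate_corr - g_normal
print(f"free-air anomaly: {free_air_anomaly:.1f} mgal")
print(f"Bouguer anomaly:  {bouguer_anomaly:.1f} mgal")
```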
Isostatic correction.
The Bouguer anomaly is positive over ocean basins and negative over high continental areas. This shows that the low elevation of ocean basins and high elevation of continents is compensated by the thickness of the crust at depth. The higher terrain is held up by the buoyancy of thicker crust "floating" on the mantle.
The isostatic anomaly is defined as the Bouger anomaly minus the gravity anomaly due to the subsurface compensation, and is a measure of the local departure from isostatic equilibrium, due to dynamic processes in the viscous mantle. At the center of a level plateau, it is approximately equal to the free air anomaly. The isostatic correction is dependent on the isostatic model used to calculate isostatic balance, and so is slightly different for the Airy-Heiskanen model (which assumes that the crust and mantle are uniform in density and isostatic balance is provided by changes in crust thickness), the Pratt-Hayford model (which assumes that the bottom of the crust is at the same depth everywhere and isostatic balance is provided by lateral changes in crust density), and the Vening Meinesz elastic plate model (which assumes the crust acts like an elastic sheet).
"Forward modelling" is the process of computing the detailed shape of the compensation required by a theoretical model and using this to correct the Bouguer anomaly to yield an isostatic anomaly.
Causes.
Lateral variations in gravity anomalies are related to anomalous density distributions within the Earth. Local measurements of the gravity of Earth help us to understand the planet's internal structure.
Regional causes.
The Bouguer anomaly over continents is generally negative, especially over mountain ranges. For example, typical Bouguer anomalies in the Central Alps are −150 milligals. By contrast, the Bouguer anomaly is positive over oceans. These anomalies reflect the varying thickness of the Earth's crust. The higher continental terrain is supported by thick, low-density crust that "floats" on the denser mantle, while the ocean basins are floored by much thinner oceanic crust. The free-air and isostatic anomalies are small near the centers of ocean basins or continental plateaus, showing that these are approximately in isostatic equilibrium. The gravitational attraction of the high terrain is balanced by the reduced gravitational attraction of its underlying low-density roots. This brings the free-air anomaly, which omits the correction terms for either, close to zero. The isostatic anomaly includes correction terms for both effects, which reduces it nearly to zero as well. The Bouguer anomaly includes only the negative correction for the high terrain and so is strongly negative.
More generally, the Airy isostatic anomaly is zero over regions where there is complete isostatic compensation. The free-air anomaly is also close to zero except near boundaries of crustal blocks. The Bouguer anomaly is very negative over elevated terrain. The opposite is true for the theoretical case of terrain that is completely uncompensated: The Bouguer anomaly is zero while the free-air and Airy isostatic anomalies are very positive.
The Bouguer anomaly map of the Alps shows additional features besides the expected deep mountain roots. A positive anomaly is associated with the Ivrea body, a wedge of dense mantle rock caught up by an ancient continental collision. The low-density sediments of the Molasse basin produce a negative anomaly. Larger surveys across the region provide evidence of a relict subduction zone. Negative isostatic anomalies in Switzerland correlate with areas of active uplift, while positive anomalies are associated with subsidence.
Over mid-ocean ridges, the free-air anomalies are small and correlate with the ocean bottom topography. The ridge and its flanks appear to be fully isostatically compensated. There is a large positive Bouguer anomaly, of over 350 mgal, far from the ridge axis, which drops to 200 mgal over the axis. This is consistent with seismic data and suggests the presence of a low-density magma chamber under the ridge axis.
There are intense isostatic and free-air anomalies along island arcs. These are indications of strong dynamic effects in subduction zones. The free-air anomaly is around +70 mgal along the Andes coast, and this is attributed to the subducting dense slab. The trench itself is very negative, with values more negative than −250 mgal. This arises from the low-density ocean water and sediments filling the trench.
Gravity anomalies provide clues on other processes taking place deep in the lithosphere. For example, the formation and sinking of a lithospheric root may explain negative isostatic anomalies in eastern Tien Shan. The Hawaiian gravity anomaly appears to be fully compensated within the lithosphere, not within the underlying asthenosphere, contradicting the explanation of the Hawaiian rise as a product of asthenosphere flow associated with the underlying mantle plume. The rise may instead be a result of lithosphere thinning: The underlying asthenosphere is less dense than the lithosphere and it rises to produce the swell. Subsequent cooling thickens the lithosphere again and subsidence takes place.
Local anomalies.
Local anomalies are used in applied geophysics. For example, a local positive anomaly may indicate a body of metallic ores. Salt domes are typically expressed in gravity maps as lows, because salt has a low density compared to the rocks the dome intrudes.
At scales between entire mountain ranges and ore bodies, Bouguer anomalies may indicate rock types. For example, the northeast-southwest trending high across central New Jersey represents a graben of Triassic age largely filled with dense basalts.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "g_n = g_e(1 + \\beta_1 \\sin^2\\lambda + \\beta_2 \\sin^2 2\\lambda)"
},
{
"math_id": 1,
"text": "g_e"
},
{
"math_id": 2,
"text": "\\beta_1"
},
{
"math_id": 3,
"text": "\\beta_2"
},
{
"math_id": 4,
"text": "\\lambda"
},
{
"math_id": 5,
"text": "\\Delta g_F = g_m + (\\Delta g_{FA} + \\Delta g_T + \\Delta g_\\text{tide}) - g_n"
},
{
"math_id": 6,
"text": "\\Delta g_B = g_m + (\\Delta g_{BP} + \\Delta g_{FA} + \\Delta g_T + \\Delta g_\\text{tide}) - g_n"
}
] | https://en.wikipedia.org/wiki?curid=937535 |
937598 | Rearrangement inequality | In mathematics, the rearrangement inequality states that for every choice of real numbers
formula_0
and every permutation formula_1 of the numbers formula_2 we have
Informally, this means that in these types of sums, the largest sum is achieved by pairing large formula_3 values with large formula_4 values, and the smallest sum is achieved by pairing small values with large values. This can be formalised in the case that the formula_5 are distinct, meaning that
formula_6 then:
Note that the rearrangement inequality (1) makes no assumptions on the signs of the real numbers, unlike inequalities such as the arithmetic-geometric mean inequality.
<templatestyles src="Template:TOC_right/styles.css" />
Applications.
Many important inequalities can be proved by the rearrangement inequality, such as the arithmetic mean – geometric mean inequality, the Cauchy–Schwarz inequality, and Chebyshev's sum inequality.
As a simple example, consider real numbers formula_16: By applying (1) with formula_17 for all formula_18 it follows that
formula_19
for every permutation formula_1 of formula_20
Intuition.
The rearrangement inequality can be regarded as intuitive in the following way. Imagine there is a heap of $10 bills, a heap of $20 bills and one more heap of $100 bills. You are allowed to take 7 bills from a heap of your choice and then the heap disappears. In the second round you are allowed to take 5 bills from another heap and the heap disappears. In the last round you may take 3 bills from the last heap. In what order do you want to choose the heaps to maximize your profit? Obviously, the best you can do is to gain formula_21 dollars. This is exactly what the upper bound of the rearrangement inequality (1) says for the sequences formula_22 and formula_23 In this sense, it can be considered as an example of a greedy algorithm.
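The bills example can be verified by brute force over all pairings; the small script below (not from the source) confirms that the sorted pairing maximises the total and the reversed pairing minimises it.

```python
# Sketch: brute-force check of the rearrangement inequality for the example.
from itertools import permutations

x = [3, 5, 7]        # numbers of bills taken
y = [10, 20, 100]    # denominations

totals = [sum(a * b for a, b in zip(x, p)) for p in permutations(y)]
same_order = sum(a * b for a, b in zip(sorted(x), sorted(y)))
opposite_order = sum(a * b for a, b in zip(sorted(x), sorted(y, reverse=True)))

print(max(totals) == same_order)       # True: 3*10 + 5*20 + 7*100 = 830
print(min(totals) == opposite_order)   # True: 3*100 + 5*20 + 7*10 = 470
```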
Geometric interpretation.
Assume that formula_24 and formula_25 Consider a rectangle of width formula_26 and height formula_27 subdivided into formula_28 columns of widths formula_29 and the same number of rows of heights formula_30 so there are formula_31 small rectangles. You are supposed to take formula_28 of these, one from each column and one from each row. The rearrangement inequality (1) says that you optimize the total area of your selection by taking the rectangles on the diagonal or the antidiagonal.
Proofs.
Proof by contradiction.
The lower bound and the corresponding discussion of equality follow by applying the results for the upper bound to
formula_32
Therefore, it suffices to prove the upper bound in (1) and discuss when equality holds.
Since there are only finitely many permutations of formula_33 there exists at least one formula_1 for which the middle term in (1)
formula_34
is maximal. In case there are several permutations with this property, let σ denote one with the highest number of integers formula_35 from formula_36 satisfying formula_37
We will now prove by contradiction, that formula_1 has to keep the order of formula_38 (then we are done with the upper bound in (1), because the identity has that property). Assume that there exists a formula_39 such that formula_40 for all formula_41 and formula_42 Hence formula_43 and there has to exist a formula_44 with formula_45 to fill the gap. Therefore,
which implies that
Expanding this product and rearranging gives
which is equivalent to (3). Hence the permutation
formula_46
which arises from formula_1 by exchanging the values formula_47 and formula_48 has at least one additional point which keeps the order compared to formula_49 namely at formula_50 satisfying formula_51 and also attains the maximum in (1) due to (4). This contradicts the choice of formula_52
If formula_6 then we have strict inequalities in (2), (3), and (4), hence the maximum can only be attained by permutations keeping the order of formula_53 and every other permutation formula_1 cannot be optimal.
Proof by induction.
As above, it suffices to treat the upper bound in (1). For a proof by mathematical induction, we start with formula_54 Observe that
formula_55
implies that
which is equivalent to
hence the upper bound in (1) is true for formula_56
If formula_57 then we get strict inequality in (5) and (6) if and only if formula_58 Hence only the identity, which is the only permutation here keeping the order of formula_59 gives the maximum.
As an induction hypothesis assume that the upper bound in the rearrangement inequality (1) is true for formula_60 with formula_61 and that in the case formula_62 there is equality only when the permutation formula_1 of formula_63 keeps the order of formula_64
Consider now formula_16 and formula_65 Take a formula_1 from the finite number of permutations of formula_66 such that the rearrangement in the middle of (1) gives the maximal result. There are two cases:
Generalizations.
Three or more sequences.
A straightforward generalization takes into account more sequences. Assume we have finite ordered sequences of nonnegative real numbers
formula_84
and a permutation formula_85 of formula_86 and another permutation formula_87 of formula_88 Then
formula_89
Note that, unlike the standard rearrangement inequality (1), this statement requires the numbers to be nonnegative. A similar statement is true for any number of sequences with all numbers nonnegative.
Functions instead of factors.
Another generalization of the rearrangement inequality states that for all real numbers formula_16 and every choice of continuously differentiable functions formula_90 for formula_91 such that their derivatives formula_92 satisfy
formula_93
the inequality
formula_94
holds for every permutation formula_95 of formula_96
Taking real numbers formula_97 and the linear functions formula_98 for real formula_3 and formula_18 the standard rearrangement inequality (1) is recovered.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x_1 \\le \\cdots \\le x_n \\quad \\text{ and } \\quad y_1 \\le \\cdots \\le y_n"
},
{
"math_id": 1,
"text": "\\sigma"
},
{
"math_id": 2,
"text": "1, 2, \\ldots n"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "y"
},
{
"math_id": 5,
"text": "x_1, \\ldots, x_n"
},
{
"math_id": 6,
"text": "x_1 < \\cdots < x_n,"
},
{
"math_id": 7,
"text": "y_1, \\ldots, y_n,"
},
{
"math_id": 8,
"text": "y_{\\sigma(1)} \\le \\cdots \\le y_{\\sigma(n)},"
},
{
"math_id": 9,
"text": "(y_1,\\ldots,y_n) = (y_{\\sigma(1)},\\ldots,y_{\\sigma(n)})."
},
{
"math_id": 10,
"text": "y_1 = \\cdots = y_n"
},
{
"math_id": 11,
"text": "y_1, \\ldots, y_n."
},
{
"math_id": 12,
"text": "y_1 < \\cdots < y_n,"
},
{
"math_id": 13,
"text": "y_{\\sigma(1)} \\ge \\cdots \\ge y_{\\sigma(n)}."
},
{
"math_id": 14,
"text": "\\sigma(i) = n - i + 1"
},
{
"math_id": 15,
"text": "i = 1, \\ldots, n,"
},
{
"math_id": 16,
"text": "x_1 \\le \\cdots \\le x_n"
},
{
"math_id": 17,
"text": "y_i := x_i"
},
{
"math_id": 18,
"text": "i = 1,\\ldots,n,"
},
{
"math_id": 19,
"text": "x_1 x_n + \\cdots + x_n x_1\n\\le x_1 x_{\\sigma(1)} + \\cdots + x_n x_{\\sigma(n)}\n\\le x_1^2 + \\cdots + x_n^2"
},
{
"math_id": 20,
"text": "1, \\ldots, n."
},
{
"math_id": 21,
"text": "7\\cdot100 + 5\\cdot20 + 3\\cdot10"
},
{
"math_id": 22,
"text": "3 < 5 < 7"
},
{
"math_id": 23,
"text": "10 < 20 < 100."
},
{
"math_id": 24,
"text": "0 < x_1 < \\cdots < x_n"
},
{
"math_id": 25,
"text": " 0< y_1 < \\cdots < y_n."
},
{
"math_id": 26,
"text": "x_1 + \\cdots + x_n"
},
{
"math_id": 27,
"text": "y_1 + \\cdots + y_n,"
},
{
"math_id": 28,
"text": "n"
},
{
"math_id": 29,
"text": "x_1,\\ldots,x_n"
},
{
"math_id": 30,
"text": "y_1,\\ldots,y_n,"
},
{
"math_id": 31,
"text": "\\textstyle n^2"
},
{
"math_id": 32,
"text": "- y_n \\le \\cdots \\le - y_1."
},
{
"math_id": 33,
"text": "1,\\ldots,n,"
},
{
"math_id": 34,
"text": "x_1 y_{\\sigma(1)} + \\cdots + x_n y_{\\sigma(n)}"
},
{
"math_id": 35,
"text": "i"
},
{
"math_id": 36,
"text": "\\{1,\\ldots,n\\}"
},
{
"math_id": 37,
"text": "y_i = y_{\\sigma(i)}."
},
{
"math_id": 38,
"text": "y_1,\\ldots,y_n"
},
{
"math_id": 39,
"text": "j\\in\\{1,\\ldots,n-1\\}"
},
{
"math_id": 40,
"text": "y_i = y_{\\sigma(i)}"
},
{
"math_id": 41,
"text": "i\\in\\{1,\\ldots,j-1\\}"
},
{
"math_id": 42,
"text": "y_j \\neq y_{\\sigma(j)}."
},
{
"math_id": 43,
"text": "y_j < y_{\\sigma(j)}"
},
{
"math_id": 44,
"text": "k\\in\\{j+1,\\ldots,n\\}"
},
{
"math_id": 45,
"text": "y_j = y_{\\sigma(k)}"
},
{
"math_id": 46,
"text": "\\tau(i):=\\begin{cases}\\sigma(i)&\\text{for }i \\in \\{1,\\ldots,n\\}\\setminus\\{j,k\\},\\\\\n\\sigma(k)&\\text{for }i = j,\\\\\n\\sigma(j)&\\text{for }i = k,\\end{cases}"
},
{
"math_id": 47,
"text": "\\sigma(j)"
},
{
"math_id": 48,
"text": "\\sigma(k),"
},
{
"math_id": 49,
"text": "\\sigma,"
},
{
"math_id": 50,
"text": "j"
},
{
"math_id": 51,
"text": "y_j=y_{\\tau(j)},"
},
{
"math_id": 52,
"text": "\\sigma."
},
{
"math_id": 53,
"text": "y_1 \\le \\cdots \\le y_n,"
},
{
"math_id": 54,
"text": "n=2."
},
{
"math_id": 55,
"text": "x_1 \\le x_2 \\quad \\text{ and } \\quad y_1 \\le y_2"
},
{
"math_id": 56,
"text": "n = 2."
},
{
"math_id": 57,
"text": "x_1 < x_2,"
},
{
"math_id": 58,
"text": "y_1 < y_2."
},
{
"math_id": 59,
"text": "y_1 < y_2,"
},
{
"math_id": 60,
"text": "n-1"
},
{
"math_id": 61,
"text": "n\\ge3"
},
{
"math_id": 62,
"text": "x_1 <\\cdots <x_{n-1}"
},
{
"math_id": 63,
"text": "1,\\ldots,n-1"
},
{
"math_id": 64,
"text": "y_1,\\ldots,y_{n-1}."
},
{
"math_id": 65,
"text": "y_1 \\le \\cdots \\le y_n."
},
{
"math_id": 66,
"text": "1,\\ldots,n"
},
{
"math_id": 67,
"text": "\\sigma(n)=n,"
},
{
"math_id": 68,
"text": "y_n=y_{\\sigma(n)}"
},
{
"math_id": 69,
"text": "y_1,\\ldots,y_{n-1},y_n"
},
{
"math_id": 70,
"text": "x_1 < \\cdots < x_n."
},
{
"math_id": 71,
"text": "k := \\sigma(n) < n,"
},
{
"math_id": 72,
"text": "j\\in\\{1,\\dots,n-1\\}"
},
{
"math_id": 73,
"text": "\\sigma(j)=n."
},
{
"math_id": 74,
"text": "\\tau(i)=\\begin{cases}\\sigma(i)&\\text{for }i \\in \\{1,\\ldots,n\\}\\setminus\\{j,n\\},\\\\\nk&\\text{for }i = j,\\\\\nn&\\text{for }i = n,\\end{cases}"
},
{
"math_id": 75,
"text": "n."
},
{
"math_id": 76,
"text": "x_k = x_n"
},
{
"math_id": 77,
"text": "y_k = y_n,"
},
{
"math_id": 78,
"text": "\\tau"
},
{
"math_id": 79,
"text": "\\tau."
},
{
"math_id": 80,
"text": "x_k < x_n"
},
{
"math_id": 81,
"text": "y_k < y_n,"
},
{
"math_id": 82,
"text": "0 < (x_n - x_k) (y_n - y_k),"
},
{
"math_id": 83,
"text": "x_k y_n + x_n y_k < x_k y_k + x_n y_n"
},
{
"math_id": 84,
"text": "0 \\le x_1\\le\\cdots\\le x_n\\quad\\text{and}\\quad 0\\le y_1\\le\\cdots\\le y_n\\quad\\text{and}\\quad 0\\le z_1\\le\\cdots\\le z_n"
},
{
"math_id": 85,
"text": "y_{\\sigma(1)},\\ldots,y_{\\sigma(n)}"
},
{
"math_id": 86,
"text": "y_1,\\dots,y_n"
},
{
"math_id": 87,
"text": "z_{\\tau(1)},\\dots,z_{\\tau(n)}"
},
{
"math_id": 88,
"text": "z_1,\\dots,z_n."
},
{
"math_id": 89,
"text": " x_1 y_{\\sigma(1)} z_{\\tau(1)} + \\cdots + x_n y_{\\sigma(n)} z_{\\tau(n)} \\le x_1 y_1 z_1 + \\cdots + x_n y_n z_n."
},
{
"math_id": 90,
"text": "f_i: [x_1,x_n] \\to \\R"
},
{
"math_id": 91,
"text": "i = 1, 2, \\ldots, n"
},
{
"math_id": 92,
"text": "f'_1,\\ldots,f'_n"
},
{
"math_id": 93,
"text": "f'_1(x) \\le f'_2(x) \\le \\cdots \\le f'_n(x) \\quad \\text{ for all } x \\in [x_1,x_n],"
},
{
"math_id": 94,
"text": "\\sum_{i=1}^n f_{n-i+1}(x_i) \\le \\sum_{i=1}^n f_{\\sigma(i)}(x_i) \\le \\sum_{i=1}^n f_i(x_i)"
},
{
"math_id": 95,
"text": "f_{\\sigma(1)}, \\ldots, f_{\\sigma(n)}"
},
{
"math_id": 96,
"text": "f_1, \\ldots, f_n."
},
{
"math_id": 97,
"text": "y_1 \\le \\cdots \\le y_n"
},
{
"math_id": 98,
"text": "f_i(x) := x y_i"
}
] | https://en.wikipedia.org/wiki?curid=937598 |
937664 | Chebyshev's sum inequality | In mathematics, Chebyshev's sum inequality, named after Pafnuty Chebyshev, states that if
formula_0 and formula_1
then
formula_2
Similarly, if
formula_3 and formula_1
then
formula_4
Proof.
Consider the sum
formula_5
The two sequences are non-increasing, therefore "a""j" − "a""k" and "b""j" − "b""k" have the same sign for any "j", "k". Hence "S" ≥ 0.
Opening the brackets, we deduce:
formula_6
hence
formula_7
An alternative proof is simply obtained with the rearrangement inequality, writing that
formula_8
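A quick numerical sanity check (not from the source): for two similarly ordered sequences, the average of the products is at least the product of the averages.

```python
# Sketch: verify Chebyshev's sum inequality on two non-increasing sequences.
a = [9, 7, 4, 1]
b = [8, 6, 6, 2]
n = len(a)

lhs = sum(ai * bi for ai, bi in zip(a, b)) / n
rhs = (sum(a) / n) * (sum(b) / n)
print(lhs, rhs, lhs >= rhs)   # 35.0  28.875  True
```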
Continuous version.
There is also a continuous version of Chebyshev's sum inequality:
If "f" and "g" are real-valued, integrable functions over ["a", "b"], both non-increasing or both non-decreasing, then
formula_9
with the inequality reversed if one is non-increasing and the other is non-decreasing.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a_1 \\geq a_2 \\geq \\cdots \\geq a_n \\quad"
},
{
"math_id": 1,
"text": "\\quad b_1 \\geq b_2 \\geq \\cdots \\geq b_n,"
},
{
"math_id": 2,
"text": "{1 \\over n} \\sum_{k=1}^n a_k b_k \\geq \\left({1 \\over n}\\sum_{k=1}^n a_k\\right)\\!\\!\\left({1 \\over n}\\sum_{k=1}^n b_k\\right)\\!."
},
{
"math_id": 3,
"text": "a_1 \\leq a_2 \\leq \\cdots \\leq a_n \\quad"
},
{
"math_id": 4,
"text": "{1 \\over n} \\sum_{k=1}^n a_k b_k \\leq \\left({1 \\over n}\\sum_{k=1}^n a_k\\right)\\!\\!\\left({1 \\over n}\\sum_{k=1}^n b_k\\right)\\!."
},
{
"math_id": 5,
"text": "S = \\sum_{j=1}^n \\sum_{k=1}^n (a_j - a_k) (b_j - b_k)."
},
{
"math_id": 6,
"text": "0 \\leq 2 n \\sum_{j=1}^n a_j b_j - 2 \\sum_{j=1}^n a_j \\, \\sum_{j=1}^n b_j,"
},
{
"math_id": 7,
"text": "\\frac{1}{n} \\sum_{j=1}^n a_j b_j \\geq \\left( \\frac{1}{n} \\sum_{j=1}^n a_j\\right)\\!\\!\\left(\\frac{1}{n} \\sum_{j=1}^n b_j\\right)\\!."
},
{
"math_id": 8,
"text": "\\sum_{i=0}^{n-1} a_i \\sum_{j=0}^{n-1} b_j = \\sum_{i=0}^{n-1} \\sum_{j=0}^{n-1} a_i b_j =\\sum_{i=0}^{n-1}\\sum_{k=0}^{n-1} a_i b_{i+k~\\text{mod}~n} = \\sum_{k=0}^{n-1} \\sum_{i=0}^{n-1} a_i b_{i+k~\\text{mod}~n}\n\\leq \\sum_{k=0}^{n-1} \\sum_{i=0}^{n-1} a_ib_i = n \\sum_i a_ib_i."
},
{
"math_id": 9,
"text": "\\frac{1}{b-a} \\int_a^b f(x)g(x) \\,dx \\geq\\! \\left(\\frac{1}{b-a} \\int_a^b f(x) \\,dx\\right)\\!\\!\\left(\\frac{1}{b-a}\\int_a^b g(x) \\,dx\\right)"
}
] | https://en.wikipedia.org/wiki?curid=937664 |
937739 | Time evolution | Change of state over time, especially in physics
Time evolution is the change of state brought about by the passage of time, applicable to systems with internal state (also called "stateful systems"). In this formulation, "time" is not required to be a continuous parameter, but may be discrete or even finite. In classical physics, time evolution of a collection of rigid bodies is governed by the principles of classical mechanics. In their most rudimentary form, these principles express the relationship between forces acting on the bodies and their acceleration given by Newton's laws of motion. These principles can be equivalently expressed more abstractly by Hamiltonian mechanics or Lagrangian mechanics.
The concept of time evolution may be applicable to other stateful systems as well. For instance, the operation of a Turing machine can be regarded as the time evolution of the machine's control state together with the state of the tape (or possibly multiple tapes) including the position of the machine's read-write head (or heads). In this case, time is considered to be discrete steps.
Stateful systems often have dual descriptions in terms of states or in terms of observable values. In such systems, time evolution can also refer to the change in observable values. This is particularly relevant in quantum mechanics where the Schrödinger picture and Heisenberg picture are (mostly) equivalent descriptions of time evolution.
Time evolution operators.
Consider a system with state space "X" for which evolution is deterministic and reversible. For concreteness let us also suppose time is a parameter that ranges over the set of real numbers R. Then time evolution is given by a family of bijective state transformations
formula_0.
F"t", "s"("x") is the state of the system at time "t", whose state at time "s" is "x". The following identity holds
formula_1
To see why this is true, suppose "x" ∈ "X" is the state at time "s". Then by the definition of F, F"t", "s"("x") is the state of the system at time "t" and consequently applying the definition once more, F"u", "t"(F"t", "s"("x")) is the state at time "u". But this is also F"u", "s"("x").
In some contexts in mathematical physics, the mappings F"t", "s" are called "propagation operators" or simply propagators. In classical mechanics, the propagators are functions that operate on the phase space of a physical system. In quantum mechanics, the propagators are usually unitary operators on a Hilbert space. The propagators can be expressed as time-ordered exponentials of the integrated Hamiltonian. The asymptotic properties of time evolution are given by the scattering matrix.
A state space with a distinguished propagator is also called a dynamical system.
To say time evolution is homogeneous means that
formula_2 for all formula_3.
In the case of a homogeneous system, the mappings G"t" = F"t",0 form a one-parameter group of transformations of "X", that is
formula_4
For non-reversible systems, the propagation operators F"t", "s" are defined whenever "t" ≥ "s" and satisfy the propagation identity
formula_5 for any formula_6.
In the homogeneous case the propagators are exponentials of the Hamiltonian.
In quantum mechanics.
In the Schrödinger picture, the Hamiltonian operator generates the time evolution of quantum states. If formula_7 is the state of the system at time formula_8, then
formula_9
This is the Schrödinger equation. Given the state at some initial time (formula_10), if formula_11 is independent of time, then the unitary time evolution operator formula_12 is the exponential operator as shown in the equation
formula_13
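A minimal numerical sketch (with ħ set to 1 and an arbitrary two-level Hamiltonian, both assumptions): the matrix exponential yields a unitary propagator that preserves the norm of the state.

```python
# Sketch: U(t) = exp(-i H t) for a two-level system (hbar = 1, H arbitrary).
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])        # Hermitian Hamiltonian (assumed units)
t = 0.7
U = expm(-1j * H * t)              # time evolution operator

psi0 = np.array([1.0, 0.0])        # initial state |psi(0)>
psi_t = U @ psi0                   # |psi(t)> = U(t) |psi(0)>

print(np.allclose(U.conj().T @ U, np.eye(2)))   # U is unitary
print(np.vdot(psi_t, psi_t).real)               # norm preserved (= 1.0)
```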
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(\\operatorname{F}_{t, s} \\colon X \\rightarrow X)_{s, t \\in \\mathbb{R}}"
},
{
"math_id": 1,
"text": " \\operatorname{F}_{u, t} (\\operatorname{F}_{t, s} (x)) = \\operatorname{F}_{u, s}(x). "
},
{
"math_id": 2,
"text": " \\operatorname{F}_{u, t} = \\operatorname{F}_{u - t,0}"
},
{
"math_id": 3,
"text": "u,t \\in \\mathbb{R}"
},
{
"math_id": 4,
"text": " \\operatorname{G}_{t+s} = \\operatorname{G}_{t}\\operatorname{G}_{s}."
},
{
"math_id": 5,
"text": " \\operatorname{F}_{u, t} (\\operatorname{F}_{t, s} (x)) = \\operatorname{F}_{u, s}(x)"
},
{
"math_id": 6,
"text": "u \\geq t \\geq s"
},
{
"math_id": 7,
"text": " \\left| \\psi (t) \\right\\rangle"
},
{
"math_id": 8,
"text": "t"
},
{
"math_id": 9,
"text": " H \\left| \\psi (t) \\right\\rangle = i \\hbar {\\partial\\over\\partial t} \\left| \\psi (t) \\right\\rangle."
},
{
"math_id": 10,
"text": "t = 0"
},
{
"math_id": 11,
"text": "H"
},
{
"math_id": 12,
"text": "U(t)"
},
{
"math_id": 13,
"text": " \\left| \\psi (t) \\right\\rangle = U(t)\\left| \\psi (0) \\right\\rangle = e^{-iHt/\\hbar} \\left| \\psi (0) \\right\\rangle."
}
] | https://en.wikipedia.org/wiki?curid=937739 |
937751 | Robert Manning (engineer) | Irish hydraulic engineer
Robert Manning (22 October 1816 – 9 December 1897) was an Irish hydraulic engineer best known for creation of the Manning formula.
Manning was born in Normandy, France, the son of a soldier who had fought the previous year at the Battle of Waterloo. In 1826 he moved to Waterford, Ireland and in time worked as an accountant.
In 1846, during the year of the great famine, Manning was recruited into the Arterial Drainage Division of the Irish Office of Public Works. After working as a draughtsman for a while, he was appointed an assistant engineer to Samuel Roberts later that year. In 1848, he became district engineer, a position he held until 1855. As a district engineer, he read "Traité d'Hydraulique" by d'Aubisson des Voissons, after which he developed a great interest in hydraulics.
From 1855 to 1869, Manning was employed by the Marquis of Downshire, while he supervised the construction of the Dundrum Bay Harbour in Ireland and designed a water supply system for Belfast. After the Marquis’ death in 1869, Manning returned to the Irish Office of Public Works as assistant to the chief engineer. He became chief engineer himself in 1874, a position he held until his retirement in 1891. In 1866 he was awarded the Telford Medal by the Institution of Civil Engineers for his paper "On… the flow of water off the ground in the Woodburn District near Carrickfergus".
He died at his Dublin home on 9 December 1897 and was buried at Mount Jerome Cemetery. He had married his second cousin Susanna Gibson in 1848, by whom he had seven surviving children, one of whom, William Manning, was also an engineer. His second daughter was the painter Mary Ruth Manning (1853-1930).
Manning Formula.
Manning did not receive any education or formal training in fluid mechanics or engineering. His accounting background and pragmatism influenced his work and drove him to reduce problems to their simplest form. He compared and evaluated seven best known formulae of the time for the flow of water in a channel: Du Buat (1786), Eyelwein (1814), Weisbach (1845), St. Venant (1851), Neville (1860), Darcy and Bazin (1865), and Ganguillet and Kutter (1869). He calculated the velocity obtained from each formula for a given slope and for hydraulic radii varying from 0.25 m to 30 m. Then, for each condition, he found the mean value of the seven velocities and developed a formula that best fitted the data.
The first best-fit formula was the following:
formula_0
He then simplified this formula to:
formula_1
In 1885, Manning gave formula_2 the value of 2/3 and wrote his formula as follows:
formula_3
In a letter to Flamant, Manning stated: "The reciprocal of C corresponds closely with that of n, as determined by Ganguillet and Kutter; both C and n being constant for the same channel."
On 4 December 1889, at the age of 73, Manning first proposed his formula to the Institution of Civil Engineers (Ireland). This formula saw the light in 1891, in a paper written by him entitled "On the flow of water in open channels and pipes," published in the Transactions of the Institution of Civil Engineers (Ireland).
Manning did not like his own equation for two reasons: First, it was difficult in those days to determine the cube root of a number and then square it to arrive at a number to the 2/3 power. In addition, the equation was dimensionally incorrect, and so to obtain dimensional correctness he developed the following equation:
formula_4
where formula_5 = "height of a column of mercury which balances the atmosphere," and formula_6 was a dimensionless number "which varies with the nature of the surface."
However, in some late 19th century textbooks, the Manning formula was written as follows:
formula_7
Through his "Handbook of Hydraulics," King (1918) led to the widespread use of the Manning formula as we know it today, as well as to the acceptance that the Manning's coefficient formula_6 should be the reciprocal of Kutter's formula_8.
In the United States, formula_8 is referred to as Manning's friction factor, or Manning's constant. In Europe, the Strickler formula_9 is the same as Manning's formula_6, i.e., the reciprocal of formula_8.
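As a worked illustration of the modern form of the formula, the short Python sketch below evaluates V = (1/n) R^(2/3) S^(1/2); the roughness coefficient, hydraulic radius and slope used here are hypothetical values chosen only for the example, in SI units.

def manning_velocity(n, R, S):
    """Mean velocity (m/s) from Manning's formula in SI units: V = (1/n) R^(2/3) S^(1/2)."""
    return (1.0 / n) * R ** (2.0 / 3.0) * S ** 0.5

# assumed example: n = 0.03, hydraulic radius R = 2 m, slope S = 0.001
print(round(manning_velocity(0.03, 2.0, 0.001), 3))   # about 1.673 m/s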
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " V = 32 \\left[ RS \\left( 1 + R^{1/3} \\right)\\right]^{1/2}"
},
{
"math_id": 1,
"text": " V = C R^{x} S^{1/2} "
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": " V = C R^{2/3} S^{1/2}"
},
{
"math_id": 4,
"text": " V = C (gS)^{1/2} \\left[ R^{1/2} +\\left( \\dfrac{0.22}{m^{1/2}} \\right)\\left( R - 0.15 m \\right) \\right] "
},
{
"math_id": 5,
"text": "m"
},
{
"math_id": 6,
"text": "C"
},
{
"math_id": 7,
"text": " V = \\left(\\dfrac{1}{n}\\right) R^{2/3} S^{1/2} "
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "K"
}
] | https://en.wikipedia.org/wiki?curid=937751 |
9377661 | Support function | In mathematics, the support function "h""A" of a non-empty closed convex set "A" in formula_0
describes the (signed) distances of supporting hyperplanes of "A" from the origin. The support function is a convex function on formula_0.
Any non-empty closed convex set "A" is uniquely determined by "h""A". Furthermore, the support function, as a function of the set "A", is compatible with many natural geometric operations, like scaling, translation, rotation and Minkowski addition.
Due to these properties, the support function is one of the most central basic concepts in convex geometry.
Definition.
The support function formula_1
of a non-empty closed convex set "A" in formula_0 is given by
formula_2
formula_3. Its interpretation is most intuitive when "x" is a unit vector:
by definition, "A" is contained in the closed half space
formula_4
and there is at least one point of "A" in the boundary
formula_5
of this half space. The hyperplane "H"("x") is therefore called a "supporting hyperplane"
with "exterior" (or "outer") unit normal vector "x".
The word "exterior" is important here, as
the orientation of "x" plays a role, the set "H"("x") is in general different from "H"(−"x").
Now "h""A"("x") is the (signed) distance of "H"("x") from the origin.
Examples.
The support function of a singleton "A" = {"a"} is formula_6.
The support function of the Euclidean unit ball formula_7 is formula_8 where formula_9 is the 2-norm.
If "A" is a line segment through the origin with endpoints −"a" and "a", then formula_10.
Properties.
As a function of "x".
The support function of a "compact" nonempty convex set is real valued and continuous, but if the
set is closed and unbounded, its support function is extended real valued (it takes the value
formula_11). As any nonempty closed convex set is the intersection of
its supporting half spaces, the function "h""A" determines "A" uniquely.
This can be used to describe certain geometric properties of convex sets analytically.
For instance, a set "A" is point symmetric with respect to the origin if and only if "h""A"
is an even function.
In general, the support function is not differentiable.
However, directional derivatives exist and yield support functions of support sets. If "A" is "compact" and convex,
and "h""A"'("u";"x") denotes the directional derivative of
"h""A" at "u" ≠ "0" in direction "x",
we have
formula_12
Here "H"("u") is the supporting hyperplane of "A" with exterior normal vector "u", defined
above. If "A" ∩ "H"("u") is a singleton {"y"}, say, it follows that the support function is differentiable at
"u" and its gradient coincides with "y". Conversely, if "h""A" is differentiable at "u", then "A" ∩ "H"("u") is a singleton. Hence "h""A" is differentiable at all points "u" ≠ "0"
if and only if "A" is "strictly convex" (the boundary of "A" does not contain any line segments).
More generally, when formula_13 is convex and closed then for any formula_14,
formula_15
where formula_16 denotes the set of subgradients of formula_17 at formula_18.
It follows directly from its definition that the support function is positive homogeneous:
formula_19
and subadditive:
formula_20
It follows that "h""A" is a convex function.
It is crucial in convex geometry that these properties characterize support functions:
Any positive homogeneous, convex, real valued function on formula_0 is the
support function of a nonempty compact convex set. Several proofs are known,
one is using the fact that the Legendre transform of a positive homogeneous, convex, real valued function
is the (convex) indicator function of a compact convex set.
Many authors restrict the support function to the Euclidean unit sphere
and consider it as a function on "S""n"-1.
The homogeneity property shows that this restriction determines the
support function on formula_0, as defined above.
As a function of "A".
The support functions of a dilated or translated set are closely related to the original set "A":
formula_21
and
formula_22
The latter generalises to
formula_23
where "A" + "B" denotes the Minkowski sum:
formula_24
The Hausdorff distance "d" H("A", "B")
of two nonempty compact convex sets "A" and "B" can be expressed in terms of support functions,
formula_25
where, on the right hand side, the uniform norm on the unit sphere is used.
The properties of the support function as a function of the set "A" are sometimes summarized in saying
that formula_26:"A" formula_27 "h" "A" maps the family of non-empty
compact convex sets to the cone of all real-valued continuous functions on the sphere whose positive
homogeneous extension is convex. Abusing terminology slightly, formula_26
is sometimes called "linear", as it respects Minkowski addition, although it is not
defined on a linear space, but rather on an (abstract) convex cone of nonempty compact convex sets.
The mapping formula_26 is an isometry between this cone, endowed with the Hausdorff metric, and
a subcone of the family of continuous functions on "S""n"-1 with the uniform norm.
Variants.
In contrast to the above, support functions are sometimes defined on the boundary of "A" rather than on
"S""n"-1, under the assumption that there exists a unique exterior unit normal at each boundary point.
Convexity is not needed for the definition.
For an oriented regular surface, "M", with a unit normal vector, "N", defined everywhere on its surface, the support function
is then defined by
formula_28.
In other words, for any formula_29, this support function gives the
signed distance of the unique hyperplane that touches "M" in "x".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{R}^n"
},
{
"math_id": 1,
"text": "h_A\\colon\\mathbb{R}^n\\to\\mathbb{R}"
},
{
"math_id": 2,
"text": " h_A(x)=\\sup\\{ x\\cdot a: a\\in A\\},"
},
{
"math_id": 3,
"text": "x\\in\\mathbb{R}^n"
},
{
"math_id": 4,
"text": " \\{y\\in\\mathbb{R}^n: y\\cdot x \\leqslant h_A(x) \\}"
},
{
"math_id": 5,
"text": " H(x)= \\{y\\in\\mathbb{R}^n: y\\cdot x = h_A(x) \\}"
},
{
"math_id": 6,
"text": "h_{A}(x)=x \\cdot a"
},
{
"math_id": 7,
"text": "B = \\{ y\\in \\mathbb{R}^n\\,:\\, \\|y\\|_2 \\le 1\\} "
},
{
"math_id": 8,
"text": "h_{B}(x)=\\|x\\|_2"
},
{
"math_id": 9,
"text": "\\|\\cdot\\|_2"
},
{
"math_id": 10,
"text": "h_A(x)=|x\\cdot a|"
},
{
"math_id": 11,
"text": "\\infty"
},
{
"math_id": 12,
"text": " h_A'(u;x)= h_{A \\cap H(u)}(x) \\qquad x \\in \\mathbb{R}^n."
},
{
"math_id": 13,
"text": "A"
},
{
"math_id": 14,
"text": "u\\in \\mathbb{R}^n\\setminus\\{0\\}"
},
{
"math_id": 15,
"text": "\\partial h_A(u) = H(u)\\cap A\\,, "
},
{
"math_id": 16,
"text": "\\partial h_A(u) "
},
{
"math_id": 17,
"text": "h_A"
},
{
"math_id": 18,
"text": "u"
},
{
"math_id": 19,
"text": " h_A(\\alpha x)=\\alpha h_A(x), \\qquad \\alpha \\ge 0, x\\in \\mathbb{R}^n,"
},
{
"math_id": 20,
"text": " h_A(x+y)\\le h_A(x)+ h_A(y), \\qquad x,y\\in \\mathbb{R}^n."
},
{
"math_id": 21,
"text": " h_{\\alpha A}(x)=\\alpha h_A(x), \\qquad \\alpha \\ge 0, x\\in \\mathbb{R}^n"
},
{
"math_id": 22,
"text": " h_{A+b}(x)=h_A(x)+x\\cdot b, \\qquad x,b\\in \\mathbb{R}^n."
},
{
"math_id": 23,
"text": " h_{A+B}(x)=h_A(x)+h_B(x), \\qquad x\\in \\mathbb{R}^n,"
},
{
"math_id": 24,
"text": "A + B := \\{\\, a + b \\in \\mathbb{R}^{n} \\mid a \\in A,\\ b \\in B \\,\\}."
},
{
"math_id": 25,
"text": " d_{\\mathrm H}(A,B) = \\| h_A-h_B\\|_\\infty"
},
{
"math_id": 26,
"text": "\\tau"
},
{
"math_id": 27,
"text": "\\mapsto"
},
{
"math_id": 28,
"text": "{x}\\mapsto{x}\\cdot N({x})"
},
{
"math_id": 29,
"text": "{x}\\in M"
}
] | https://en.wikipedia.org/wiki?curid=9377661 |
937967 | Endemic (epidemiology) | Disease which is constantly present in an area
In epidemiology, an infection is said to be endemic in a specific population or populated place when that infection is constantly present, or maintained at a baseline level, without extra infections being brought into the group as a result of travel or similar means. The term describes the distribution (spread) of an infectious disease among a group of people or within a populated area. An endemic disease always has a steady, predictable number of people getting sick, but that number can be high ("hyperendemic") or low ("hypoendemic"), and the disease can be severe or mild. Also, a disease that is usually endemic can become epidemic.
For example, chickenpox is endemic (steady state) in the United Kingdom, but malaria is not. Every year, there are a few cases of malaria reported in the UK, but these do not lead to sustained transmission in the population due to the lack of a suitable vector (mosquitoes of the genus "Anopheles"). Consequently, the number of people infected by malaria in the UK is too variable to be called endemic. However, the number of people who get chickenpox in the UK varies little from year to year, so chickenpox is considered endemic in the UK.
Mathematical determination.
For an infection that relies on person-to-person transmission, to be endemic, each person who becomes infected with the disease must pass it on to one other person on average. Assuming a completely susceptible population, that means that the basic reproduction number (R0) of the infection must equal one. In a population with some immune individuals, the basic reproduction number multiplied by the proportion of susceptible individuals in the population ("S") must be one. This takes account of the probability of each individual to whom the disease may be transmitted being susceptible to it, effectively discounting the immune sector of the population. So, for a disease to be in an endemic steady state or endemic equilibrium, it holds that
formula_0
In this way, the infection neither dies out nor does the number of infected people increase exponentially but the infection is said to be in an endemic steady state. An infection that starts as an epidemic will eventually either die out (with the possibility of it resurging in a theoretically predictable cyclical manner) or reach the endemic steady state, depending on a number of factors, including the virulence of the disease and its mode of transmission.
If a disease is in an endemic steady state in a population, the relation above allows us to estimate the R0 (an important parameter) of a particular infection. This in turn can be fed into a mathematical model for the epidemic. The reproduction number can also be used to delineate successive epidemic waves, such as the first and second waves of COVID-19 in different regions and countries.
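As a small numerical illustration of the relation above (a sketch only, using arbitrary example values of R0), the endemic steady state requires the susceptible proportion to equal 1/R0, so the immune proportion must equal 1 − 1/R0.

def endemic_susceptible_fraction(R0):
    """Susceptible proportion S at the endemic steady state, from R0 * S = 1."""
    return 1.0 / R0

for R0 in (1.5, 4.0, 12.0):
    S = endemic_susceptible_fraction(R0)
    print(R0, round(S, 3), round(1.0 - S, 3))   # R0, susceptible fraction, immune fraction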
Misuse.
While it might be common to say that AIDS is endemic in some countries, meaning that it is regularly found in an area, this is a use of the word in its etymological, rather than epidemiological or ecological, form.
Some in the public wrongly assume that endemic COVID-19 means the disease severity would necessarily be mild. Endemic COVID-19 could be mild if previously acquired immunity reduces the risk of death and disability during future infections, but in itself endemicity only means that there will be a steady, predictable number of sick people.
Examples.
This is a short, incomplete list of some infections that are usually considered endemic:
Smallpox was an endemic disease until it was eradicated through vaccination.
Etymology.
The word "endemic" comes from the Greek: , , "in, within" and , , "people". | [
{
"math_id": 0,
"text": "R_0 \\times S = 1"
}
] | https://en.wikipedia.org/wiki?curid=937967 |
9379691 | Radical of a module | In mathematics, in the theory of modules, the radical of a module is a component in the theory of structure and classification. It is a generalization of the Jacobson radical for rings. In many ways, it is the dual notion to that of the socle soc("M") of "M".
Definition.
Let "R" be a ring and "M" a left "R"-module. A submodule "N" of "M" is called maximal or cosimple if the quotient "M"/"N" is a simple module. The radical of the module "M" is the intersection of all maximal submodules of "M",
formula_0
Equivalently,
formula_1
These definitions have direct dual analogues for soc("M").
Properties.
If "M" is finitely generated over a ring, then rad("M") is itself a superfluous submodule of "M". This is because any proper submodule of "M" is contained in a maximal submodule of "M" when "M" is finitely generated.
{
"math_id": 0,
"text": "\\mathrm{rad}(M) = \\bigcap\\, \\{N \\mid N \\mbox{ is a maximal submodule of } M\\}"
},
{
"math_id": 1,
"text": "\\mathrm{rad}(M) = \\sum\\, \\{S \\mid S \\mbox{ is a superfluous submodule of } M\\}"
}
] | https://en.wikipedia.org/wiki?curid=9379691 |
9380238 | Fortune's algorithm | Voronoi diagram generation algorithm
Fortune's algorithm is a sweep line algorithm for generating a Voronoi diagram from a set of points in a plane using O("n" log "n") time and O("n") space. It was originally published by Steven Fortune in 1986 in his paper "A sweepline algorithm for Voronoi diagrams."
Algorithm description.
The algorithm maintains both a sweep line and a "beach line", which both move through the plane as the algorithm progresses. The sweep line is a straight line, which we may by convention assume to be vertical and moving left to right across the plane. At any time during the algorithm, the input points left of the sweep line will have been incorporated into the Voronoi diagram, while the points right of the sweep line will not have been considered yet. The beach line is not a straight line, but a complicated, piecewise curve to the left of the sweep line, composed of pieces of parabolas; it divides the portion of the plane within which the Voronoi diagram can be known, regardless of what other points might be right of the sweep line, from the rest of the plane. For each point left of the sweep line, one can define a parabola of points equidistant from that point and from the sweep line; the beach line is the boundary of the union of these parabolas. As the sweep line progresses, the vertices of the beach line, at which two parabolas cross, trace out the edges of the Voronoi diagram. The beach line progresses by keeping each parabola base exactly halfway between the points initially swept over with the sweep line, and the new position of the sweep line. Mathematically, this means each parabola is formed by using the sweep line as the directrix and the input point as the focus.
The algorithm maintains as data structures a binary search tree describing the combinatorial structure of the beach line, and a priority queue listing potential future events that could change the beach line structure. These events include the addition of another parabola to the beach line (when the sweep line crosses another input point) and the removal of a curve from the beach line (when the sweep line becomes tangent to a circle through some three input points whose parabolas form consecutive segments of the beach line). Each such event may be prioritized by the "x"-coordinate of the sweep line at the point the event occurs. The algorithm itself then consists of repeatedly removing the next event from the priority queue, finding the changes the event causes in the beach line, and updating the data structures.
As there are O("n") events to process (each being associated with some feature of the Voronoi diagram) and O(log "n") time to process an event (each consisting of a constant number of binary search tree and priority queue operations) the total time is O("n" log "n").
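The defining relation of a beach-line arc (equal distance to its site and to the sweep line) can be written out directly. The short Python sketch below is a minimal illustration following the article's convention of a vertical sweep line; the function name and sample numbers are only for the example. It solves that relation for the x-coordinate of the arc at a given y; at the height of the site it returns the halfway point mentioned above.

def beachline_x(site, y, sweep_x):
    """x-coordinate of the beach-line arc of `site` at height y, for a vertical
    sweep line at x = sweep_x (the site must lie strictly left of the sweep line).
    Derived from (x - px)^2 + (y - py)^2 = (sweep_x - x)^2."""
    px, py = site
    return (sweep_x ** 2 - px ** 2 - (y - py) ** 2) / (2.0 * (sweep_x - px))

# at y = py the arc sits exactly halfway between the site and the sweep line
print(beachline_x((1.0, 2.0), 2.0, 5.0))   # 3.0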
Pseudocode.
Pseudocode description of the algorithm.
let formula_0 be the transformation formula_1,
where formula_2 is the Euclidean distance between z and the nearest site
let T be the "beach line"
let formula_3 be the region covered by site p.
let formula_4 be the boundary ray between sites p and q.
let formula_5 be a set of sites on which this algorithm is to be applied.
let formula_6 be the sites extracted from S with minimal y-coordinate, ordered by x-coordinate
let DeleteMin(X) be the act of removing the lowest and leftmost site of X (sort by y unless they're identical, in which case sort by x)
let V be the Voronoi map of S which is to be constructed by this algorithm
formula_7
create initial vertical boundary rays formula_8
formula_9
while not IsEmpty(Q) do
p ← DeleteMin(Q)
case p of
p is a site in formula_10:
find the occurrence of a region formula_11 in T containing p,
bracketed by formula_12 on the left and formula_13 on the right
create new boundary rays formula_14 and formula_15 with bases p
replace formula_11 with formula_16 in T
delete from Q any intersection between formula_12 and formula_13
insert into Q any intersection between formula_12 and formula_14
insert into Q any intersection between formula_15 and formula_13
p is a Voronoi vertex in formula_10:
let p be the intersection of formula_17 on the left and formula_18 on the right
let formula_19 be the left neighbor of formula_17 and
let formula_20 be the right neighbor of formula_18 in T
if formula_21,
create a new boundary ray formula_22
else if p is right of the higher of q and s,
create formula_23
else
create formula_24
endif
replace formula_25 with newly created formula_13 in T
delete from Q any intersection between formula_19 and formula_17
delete from Q any intersection between formula_18 and formula_20
insert into Q any intersection between formula_19 and formula_13
insert into Q any intersection between formula_13 and formula_20
record p as the summit of formula_17 and formula_18 and the base of formula_13
output the boundary segments formula_17 and formula_18
endcase
endwhile
output the remaining boundary rays in T
Weighted sites and disks.
Additively weighted sites.
As Fortune describes, a modified version of the sweep line algorithm can be used to construct an additively weighted Voronoi diagram, in which the distance to each site is offset by the weight of the site; this may equivalently be viewed as a Voronoi diagram of a set of disks, centered at the sites with radius equal to the weight of the site. The algorithm has been found to have formula_26 time complexity, with n being the number of sites.
Weighted sites may be used to control the areas of the Voronoi cells when using Voronoi diagrams to construct treemaps. In an additively weighted Voronoi diagram, the bisector between sites is in general a hyperbola, in contrast to unweighted Voronoi diagrams and power diagrams of disks for which it is a straight line.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\scriptstyle *(z)"
},
{
"math_id": 1,
"text": "\\scriptstyle *(z)=(z_x,z_y+d(z))"
},
{
"math_id": 2,
"text": "\\scriptstyle d(z)"
},
{
"math_id": 3,
"text": "\\scriptstyle R_p"
},
{
"math_id": 4,
"text": "\\scriptstyle C_{pq}"
},
{
"math_id": 5,
"text": "\\scriptstyle S"
},
{
"math_id": 6,
"text": "\\scriptstyle p_1,p_2,...,p_m"
},
{
"math_id": 7,
"text": "Q \\gets {p_1,p_2,\\dots,p_m,S}"
},
{
"math_id": 8,
"text": "\\scriptstyle C_{p_1,p_2}^0,C_{p_2,p_3}^0,\\dots,C_{p_{m-1},p_m}^0"
},
{
"math_id": 9,
"text": "T \\gets *(R_{p_1}),C_{p_1,p_2}^0,*(R_{p_2}),C_{p_2,p_3}^0,\\dots,*(R_{p_{m-1}}),C_{p_{m-1},p_m}^0,*(R_{p_m})"
},
{
"math_id": 10,
"text": "\\scriptstyle *(V)"
},
{
"math_id": 11,
"text": "\\scriptstyle *(R_q)"
},
{
"math_id": 12,
"text": "\\scriptstyle C_{rq}"
},
{
"math_id": 13,
"text": "\\scriptstyle C_{qs}"
},
{
"math_id": 14,
"text": "\\scriptstyle C_{pq}^-"
},
{
"math_id": 15,
"text": "\\scriptstyle C_{pq}^+"
},
{
"math_id": 16,
"text": "\\scriptstyle *(R_q),C_{pq}^-,*(R_p),C_{pq}^+,*(R_q)"
},
{
"math_id": 17,
"text": "\\scriptstyle C_{qr}"
},
{
"math_id": 18,
"text": "\\scriptstyle C_{rs}"
},
{
"math_id": 19,
"text": "\\scriptstyle C_{uq}"
},
{
"math_id": 20,
"text": "\\scriptstyle C_{sv}"
},
{
"math_id": 21,
"text": "\\scriptstyle q_y = s_y"
},
{
"math_id": 22,
"text": "\\scriptstyle C_{qs}^0"
},
{
"math_id": 23,
"text": "\\scriptstyle C_{qs}^+"
},
{
"math_id": 24,
"text": "\\scriptstyle C_{qs}^-"
},
{
"math_id": 25,
"text": "\\scriptstyle C_{qr},*(R_r),C_{rs}"
},
{
"math_id": 26,
"text": "O(n\\log(n))"
}
] | https://en.wikipedia.org/wiki?curid=9380238 |
9380313 | Cayley's mousetrap | Game in combinatorics
Mousetrap is the name of a game introduced by the English mathematician Arthur Cayley. In the game, cards numbered formula_0 through formula_1 ("say thirteen" in Cayley's original article) are shuffled to place them in some random permutation and are arranged in a circle with their faces up. Then, starting with the first card, the player begins counting formula_2 and moving to the next card as the count is incremented. If at any point the player's current count matches the number on the card currently being pointed to, that card is removed from the circle and the player starts all over at formula_0 on the next card. If the player ever removes all of the cards from the permutation in this manner, then the player wins. If the player reaches the count formula_3 and cards still remain, then the game is lost.
In order for at least one card to be removed, the initial permutation of the cards must not be a derangement. However, this is not a sufficient condition for winning, because it does not take into account subsequent removals. The number of ways the cards can be arranged such that the entire game is won, for "n" = 1, 2, ..., are
1, 1, 2, 6, 15, 84, 330, 1812, 9978, 65503, ... (sequence in the OEIS).
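The counts above can be checked by brute force for small n. The following Python sketch is an illustrative enumeration (not part of Cayley's treatment) that simulates the game over all permutations and reproduces the first few terms of the sequence.

from itertools import permutations

def wins(deck):
    """Play one game of Mousetrap; return True if every card gets removed."""
    n = len(deck)
    cards = list(deck)
    pos, count = 0, 1
    while cards:
        if count > n:                     # counted to n+1 with cards left: loss
            return False
        if cards[pos] == count:           # hit: remove the card, restart the count
            del cards[pos]
            if cards:
                pos %= len(cards)         # continue from the card after the removed one
            count = 1
        else:                             # miss: step to the next card
            pos = (pos + 1) % len(cards)
            count += 1
    return True

for n in range(1, 7):                     # expected: 1 1 2 6 15 84
    print(n, sum(wins(p) for p in permutations(range(1, n + 1))))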
For example with four cards, the probability of winning is 0.25, but this reduces as the number of cards increases, and with thirteen cards it is about 0.0046. | [
{
"math_id": 0,
"text": "1"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "1, 2, 3, ..."
},
{
"math_id": 3,
"text": "n+1"
}
] | https://en.wikipedia.org/wiki?curid=9380313 |
93807 | Synchronous dynamic random-access memory | Type of computer memory
Synchronous dynamic random-access memory (synchronous dynamic RAM or SDRAM) is any DRAM where the operation of its external pin interface is coordinated by an externally supplied clock signal.
DRAM integrated circuits (ICs) produced from the early 1970s to the early 1990s used an "asynchronous" interface, in which input control signals have a direct effect on internal functions delayed only by the trip across its semiconductor pathways. SDRAM has a "synchronous" interface, whereby changes on control inputs are recognised after a rising edge of its clock input. In SDRAM families standardized by JEDEC, the clock signal controls the stepping of an internal finite-state machine that responds to incoming commands. These commands can be pipelined to improve performance, with previously started operations completing while new commands are received. The memory is divided into several equally sized but independent sections called "banks", allowing the device to operate on a memory access command in each bank simultaneously and speed up access in an interleaved fashion. This allows SDRAMs to achieve greater concurrency and higher data transfer rates than asynchronous DRAMs could.
Pipelining means that the chip can accept a new command before it has finished processing the previous one. For a pipelined write, the write command can be immediately followed by another command without waiting for the data to be written into the memory array. For a pipelined read, the requested data appears a fixed number of clock cycles (latency) after the read command, during which additional commands can be sent.
History.
The earliest DRAMs were often synchronized with the CPU clock (clocked) and were used with early microprocessors. In the mid-1970s, DRAMs moved to the asynchronous design, but in the 1990s returned to synchronous operation.
The first commercial SDRAM was the Samsung KM48SL2000 memory chip, which had a capacity of 16Mbit. It was manufactured by Samsung Electronics using a CMOS (complementary metal–oxide–semiconductor) fabrication process in 1992, and mass-produced in 1993. By 2000, SDRAM had replaced virtually all other types of DRAM in modern computers, because of its greater performance.
SDRAM latency is not inherently lower (faster access times) than asynchronous DRAM. Indeed, early SDRAM was somewhat slower than contemporaneous burst EDO DRAM due to the additional logic. The benefits of SDRAM's internal buffering come from its ability to interleave operations to multiple banks of memory, thereby increasing effective bandwidth.
Today, virtually all SDRAM is manufactured in compliance with standards established by JEDEC, an electronics industry association that adopts open standards to facilitate interoperability of electronic components. JEDEC formally adopted its first SDRAM standard in 1993 and subsequently adopted other SDRAM standards, including those for DDR, DDR2 and DDR3 SDRAM.
Double data rate SDRAM, known as DDR SDRAM, was first demonstrated by Samsung in 1997. Samsung released the first commercial DDR SDRAM chip (64Mbit) in June 1998, followed soon after by Hyundai Electronics (now SK Hynix) the same year.
SDRAM is also available in registered varieties, for systems that require greater scalability such as servers and workstations.
Today, the world's largest manufacturers of SDRAM include Samsung Electronics, SK Hynix, Micron Technology, and Nanya Technology.
Timing.
There are several limits on DRAM performance. Most noted is the read cycle time, the time between successive read operations to an open row. This time decreased from 10 ns for 100 MHz SDRAM (1 MHz = formula_0 Hz) to 5 ns for DDR-400, but remained relatively unchanged through DDR2-800 and DDR3-1600 generations. However, by operating the interface circuitry at increasingly higher multiples of the fundamental read rate, the achievable bandwidth has increased rapidly.
Another limit is the CAS latency, the time between supplying a column address and receiving the corresponding data. Again, this has remained relatively constant at 10–15 ns through the last few generations of DDR SDRAM.
In operation, CAS latency is a specific number of clock cycles programmed into the SDRAM's mode register and expected by the DRAM controller. Any value may be programmed, but the SDRAM will not operate correctly if it is too low. At higher clock rates, the useful CAS latency in clock cycles naturally increases. 10–15 ns is 2–3 cycles (CL2–3) of the 200 MHz clock of DDR-400 SDRAM, CL4-6 for DDR2-800, and CL8-12 for DDR3-1600. Slower clock cycles will naturally allow lower numbers of CAS latency cycles.
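Since the programmed CAS latency is a whole number of clock cycles, the cycle count needed to cover a given analog latency follows directly from the clock period. A minimal Python sketch, illustrative only, whose figures match the examples in this paragraph:

import math

def cas_cycles(latency_ns, clock_mhz):
    """Smallest whole number of clock cycles covering the required latency."""
    period_ns = 1000.0 / clock_mhz
    return math.ceil(latency_ns / period_ns)

print(cas_cycles(15, 200))   # DDR-400 clock: 3 cycles
print(cas_cycles(15, 400))   # DDR2-800 clock: 6 cycles
print(cas_cycles(15, 800))   # DDR3-1600 clock: 12 cycles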
SDRAM modules have their own timing specifications, which may be slower than those of the chips on the module. When 100 MHz SDRAM chips first appeared, some manufacturers sold "100 MHz" modules that could not reliably operate at that clock rate. In response, Intel published the PC100 standard, which outlines requirements and guidelines for producing a memory module that can operate reliably at 100 MHz. This standard was widely influential, and the term "PC100" quickly became a common identifier for 100 MHz SDRAM modules, and modules are now commonly designated with "PC"-prefixed numbers (PC66, PC100 or PC133 - although the actual meaning of the numbers has changed).
Control signals.
All commands are timed relative to the rising edge of a clock signal. In addition to the clock, there are six control signals, mostly active low, which are sampled on the rising edge of the clock: CKE (clock enable), /CS (chip select), /RAS (row address strobe), /CAS (column address strobe), /WE (write enable), and DQM (data mask).
Bank selection (BAn).
SDRAM devices are internally divided into either two, four or eight independent internal data banks. One to three bank address inputs (BA0, BA1 and BA2) are used to select which bank a command is directed toward.
Addressing (A10/An).
Many commands also use an address presented on the address input pins. Some commands, which either do not use an address, or present a column address, also use A10 to select variants.
Commands.
The SDR SDRAM commands are defined as follows:
All SDRAM generations (SDR and DDRx) use essentially the same commands, with the changes being:
Construction and operation.
As an example, a 512 MB SDRAM DIMM (which contains 512 MB) might be made of eight or nine SDRAM chips, each containing 512 Mbit of storage, and each one contributing 8 bits to the DIMM's 64- or 72-bit width. A typical 512 Mbit SDRAM "chip" internally contains four independent 16 MB memory banks. Each bank is an array of 8,192 rows of 16,384 bits each (2,048 eight-bit columns). A bank is either idle, active, or changing from one to the other.
The "active" command activates an idle bank. It presents a two-bit bank address (BA0–BA1) and a 13-bit row address (A0–A12), and causes a read of that row into the bank's array of all 16,384 column sense amplifiers. This is also known as "opening" the row. This operation has the side effect of refreshing the dynamic (capacitive) memory storage cells of that row.
Once the row has been activated or "opened", "read" and "write" commands are possible to that row. Activation requires a minimum amount of time, called the row-to-column delay (tRCD), before reads or writes to it may occur. This time, rounded up to the next multiple of the clock period, specifies the minimum number of wait cycles between an "active" command and a "read" or "write" command. During these wait cycles, additional commands may be sent to other banks, because each bank operates completely independently.
Both "read" and "write" commands require a column address. Because each chip accesses eight bits of data at a time, there are 2,048 possible column addresses thus requiring only 11 address lines (A0–A9, A11).
When a "read" command is issued, the SDRAM will produce the corresponding output data on the DQ lines in time for the rising edge of the clock a few clock cycles later, depending on the configured CAS latency. Subsequent words of the burst will be produced in time for subsequent rising clock edges.
A "write" command is accompanied by the data to be written driven on to the DQ lines during the same rising clock edge. It is the duty of the memory controller to ensure that the SDRAM is not driving read data on to the DQ lines at the same time that it needs to drive write data on to those lines. This can be done by waiting until a read burst has finished, by terminating a read burst, or by using the DQM control line.
When the memory controller needs to access a different row, it must first return that bank's sense amplifiers to an idle state, ready to sense the next row. This is known as a "precharge" operation, or "closing" the row. A precharge may be commanded explicitly, or it may be performed automatically at the conclusion of a read or write operation. Again, there is a minimum time, the row precharge delay, tRP, which must elapse before that row is fully "closed" and so the bank is idle in order to receive another activate command on that bank.
Although refreshing a row is an automatic side effect of activating it, there is a minimum time for this to happen, which requires a minimum row access time tRAS delay between an "active" command opening a row, and the corresponding precharge command closing it. This limit is usually dwarfed by desired read and write commands to the row, so its value has little effect on typical performance.
Command interactions.
The no operation command is always permitted, while the load mode register command requires that all banks be idle, and a delay afterward for the changes to take effect. The auto refresh command also requires that all banks be idle, and takes a refresh cycle time tRFC to return the chip to the idle state. (This time is usually equal to tRCD+tRP.) The only other command that is permitted on an idle bank is the active command. This takes, as mentioned above, tRCD before the row is fully open and can accept read and write commands.
When a bank is open, there are four commands permitted: read, write, burst terminate, and precharge. Read and write commands begin bursts, which can be interrupted by following commands.
Interrupting a read burst.
A read, burst terminate, or precharge command may be issued at any time after a read command, and will interrupt the read burst after the configured CAS latency. So if a read command is issued on cycle 0, another read command is issued on cycle 2, and the CAS latency is 3, then the first read command will begin bursting data out during cycles 3 and 4, then the results from the second read command will appear beginning with cycle 5.
If the command issued on cycle 2 were burst terminate, or a precharge of the active bank, then no output would be generated during cycle 5.
Although the interrupting read may be to any active bank, a precharge command will only interrupt the read burst if it is to the same bank or all banks; a precharge command to a different bank will not interrupt a read burst.
Interrupting a read burst by a write command is possible, but more difficult. It can be done if the DQM signal is used to suppress output from the SDRAM so that the memory controller may drive data over the DQ lines to the SDRAM in time for the write operation. Because the effects of DQM on read data are delayed by two cycles, but the effects of DQM on write data are immediate, DQM must be raised (to mask the read data) beginning at least two cycles before write command but must be lowered for the cycle of the write command (assuming the write command is intended to have an effect).
Doing this in only two clock cycles requires careful coordination between the time the SDRAM takes to turn off its output on a clock edge and the time the data must be supplied as input to the SDRAM for the write on the following clock edge. If the clock frequency is too high to allow sufficient time, three cycles may be required.
If the read command includes auto-precharge, the precharge begins the same cycle as the interrupting command.
Burst ordering.
A modern microprocessor with a cache will generally access memory in units of cache lines. To transfer a 64-byte cache line requires eight consecutive accesses to a 64-bit DIMM, which can all be triggered by a single read or write command by configuring the SDRAM chips, using the mode register, to perform eight-word bursts. A cache line fetch is typically triggered by a read from a particular address, and SDRAM allows the "critical word" of the cache line to be transferred first. ("Word" here refers to the width of the SDRAM chip or DIMM, which is 64 bits for a typical DIMM.) SDRAM chips support two possible conventions for the ordering of the remaining words in the cache line.
Bursts always access an aligned block of BL consecutive words beginning on a multiple of BL. So, for example, a four-word burst access to any column address from four to seven will return words four to seven. The ordering, however, depends on the requested address, and the configured burst type option: sequential or interleaved. Typically, a memory controller will require one or the other. When the burst length is one or two, the burst type does not matter. For a burst length of one, the requested word is the only word accessed. For a burst length of two, the requested word is accessed first, and the other word in the aligned block is accessed second. This is the following word if an even address was specified, and the previous word if an odd address was specified.
For the sequential burst mode, later words are accessed in increasing address order, wrapping back to the start of the block when the end is reached. So, for example, for a burst length of four, and a requested column address of five, the words would be accessed in the order 5-6-7-4. If the burst length were eight, the access order would be 5-6-7-0-1-2-3-4. This is done by adding a counter to the column address, and ignoring carries past the burst length. The interleaved burst mode computes the address using an exclusive or operation between the counter and the address. Using the same starting address of five, a four-word burst would return words in the order 5-4-7-6. An eight-word burst would be 5-4-7-6-1-0-3-2. Although more confusing to humans, this can be easier to implement in hardware, and is preferred by Intel for its microprocessors.
If the requested column address is at the start of a block, both burst modes (sequential and interleaved) return data in the same sequential sequence 0-1-2-3-4-5-6-7. The difference only matters if fetching a cache line from memory in critical-word-first order.
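The two orderings described above can be reproduced with a few lines of code. The following Python sketch is illustrative only; it computes the sequential order by wrapping a counter within the aligned block and the interleaved order with an exclusive-or, matching the examples in this section.

def burst_order(start, burst_len, interleaved=False):
    """Column addresses accessed by a burst of length burst_len starting at `start`."""
    block = start & ~(burst_len - 1)       # base of the aligned block
    offset = start & (burst_len - 1)       # requested word's position within the block
    if interleaved:
        return [block + (offset ^ i) for i in range(burst_len)]
    return [block + ((offset + i) % burst_len) for i in range(burst_len)]

print(burst_order(5, 4))          # [5, 6, 7, 4]
print(burst_order(5, 4, True))    # [5, 4, 7, 6]
print(burst_order(5, 8))          # [5, 6, 7, 0, 1, 2, 3, 4]
print(burst_order(5, 8, True))    # [5, 4, 7, 6, 1, 0, 3, 2]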
Mode register.
Single data rate SDRAM has a single 10-bit programmable mode register. Later double-data-rate SDRAM standards add additional mode registers, addressed using the bank address pins. For SDR SDRAM, the bank address pins and address lines A10 and above are ignored, but should be zero during a mode register write.
The bits are M9 through M0, presented on address lines A9 through A0 during a load mode register cycle.
Later (double data rate) SDRAM standards use more mode register bits, and provide additional mode registers called "extended mode registers". The register number is encoded on the bank address pins during the load mode register command. For example, DDR2 SDRAM has a 13-bit mode register, a 13-bit extended mode register No. 1 (EMR1), and a 5-bit extended mode register No. 2 (EMR2).
Auto refresh.
It is possible to refresh a RAM chip by opening and closing (activating and precharging) each row in each bank. However, to simplify the memory controller, SDRAM chips support an "auto refresh" command, which performs these operations to one row in each bank simultaneously. The SDRAM also maintains an internal counter, which iterates over all possible rows. The memory controller must simply issue a sufficient number of auto refresh commands (one per row, 8192 in the example we have been using) every refresh interval (tREF = 64 ms is a common value). All banks must be idle (closed, precharged) when this command is issued.
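A memory controller typically spreads the required auto refresh commands evenly over the refresh interval. A minimal Python sketch of that calculation, using the example figures from this section:

ROWS = 8192          # rows per bank (one auto refresh command refreshes one row in every bank)
T_REF_MS = 64.0      # refresh interval tREF

interval_us = (T_REF_MS * 1000.0) / ROWS
print(round(interval_us, 2), "microseconds between auto refresh commands")   # about 7.81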
Low power modes.
As mentioned, the clock enable (CKE) input can be used to effectively stop the clock to an SDRAM. The CKE input is sampled each rising edge of the clock, and if it is low, the following rising edge of the clock is ignored for all purposes other than checking CKE. As long as CKE is low, it is permissible to change the clock rate, or even stop the clock entirely.
If CKE is lowered while the SDRAM is performing operations, it simply "freezes" in place until CKE is raised again.
If the SDRAM is idle (all banks precharged, no commands in progress) when CKE is lowered, the SDRAM automatically enters power-down mode, consuming minimal power until CKE is raised again. This must not last longer than the maximum refresh interval tREF, or memory contents may be lost. It is legal to stop the clock entirely during this time for additional power savings.
Finally, if CKE is lowered at the same time as an auto-refresh command is sent to the SDRAM, the SDRAM enters self-refresh mode. This is like power down, but the SDRAM uses an on-chip timer to generate internal refresh cycles as necessary. The clock may be stopped during this time. While self-refresh mode consumes slightly more power than power-down mode, it allows the memory controller to be disabled entirely, which commonly more than makes up the difference.
SDRAM designed for battery-powered devices offers some additional power-saving options. One is temperature-dependent refresh; an on-chip temperature sensor reduces the refresh rate at lower temperatures, rather than always running it at the worst-case rate. Another is selective refresh, which limits self-refresh to a portion of the DRAM array. The fraction which is refreshed is configured using an extended mode register. The third, implemented in Mobile DDR (LPDDR) and LPDDR2 is "deep power down" mode, which invalidates the memory and requires a full reinitialization to exit from. This is activated by sending a "burst terminate" command while lowering CKE.
DDR SDRAM prefetch architecture.
DDR SDRAM employs prefetch architecture to allow quick and easy access to multiple data words located on a common physical row in the memory.
The prefetch architecture takes advantage of the specific characteristics of memory accesses to DRAM. Typical DRAM memory operations involve three phases: bitline precharge, row access, column access. Row access is the heart of a read operation, as it involves the careful sensing of the tiny signals in DRAM memory cells; it is the slowest phase of memory operation. However, once a row is read, subsequent column accesses to that same row can be very quick, as the sense amplifiers also act as latches. For reference, a row of a 1 Gbit DDR3 device is 2,048 bits wide, so internally 2,048 bits are read into 2,048 separate sense amplifiers during the row access phase. Row accesses might take 50 ns, depending on the speed of the DRAM, whereas column accesses off an open row are less than 10 ns.
Traditional DRAM architectures have long supported fast column access to bits on an open row. For an 8-bit-wide memory chip with a 2,048 bit wide row, accesses to any of the 256 datawords (2048/8) on the row can be very quick, provided no intervening accesses to other rows occur.
The drawback of the older fast column access method was that a new column address had to be sent for each additional dataword on the row. The address bus had to operate at the same frequency as the data bus. Prefetch architecture simplifies this process by allowing a single address request to result in multiple data words.
In a prefetch buffer architecture, when a memory access occurs to a row the buffer grabs a set of adjacent data words on the row and reads them out ("bursts" them) in rapid-fire sequence on the IO pins, without the need for individual column address requests. This assumes the CPU wants adjacent datawords in memory, which in practice is very often the case. For instance, in DDR1, two adjacent data words are read from each chip in the same clock cycle and placed in the prefetch buffer. Each word is then transmitted on consecutive rising and falling edges of the clock cycle. Similarly, in DDR2, with its 4n prefetch buffer, four consecutive data words are read and placed in the buffer, while an external clock running at twice the internal clock rate of the DDR transmits each word on consecutive rising and falling edges of that faster external clock.
The prefetch buffer depth can also be thought of as the ratio between the core memory frequency and the IO frequency. In an 8n prefetch architecture (such as DDR3), the IOs will operate 8 times faster than the memory core (each memory access results in a burst of 8 datawords on the IOs). Thus, a 200 MHz memory core is combined with IOs that each operate eight times faster (1600 megabits per second). If the memory has 16 IOs, the total read bandwidth would be 200 MHz x 8 datawords/access x 16 IOs = 25.6 gigabits per second (Gbit/s) or 3.2 gigabytes per second (GB/s). Modules with multiple DRAM chips can provide correspondingly higher bandwidth.
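The bandwidth arithmetic in the preceding paragraph can be captured in a small helper. The Python sketch below is illustrative only and reproduces the 8n-prefetch example of a 200 MHz core with 16 IOs.

def peak_bandwidth_gbytes(core_mhz, prefetch, io_width_bits):
    """Peak read bandwidth in GB/s for a given core clock, prefetch depth and IO width."""
    bits_per_second = core_mhz * 1e6 * prefetch * io_width_bits
    return bits_per_second / 8 / 1e9

print(peak_bandwidth_gbytes(200, 8, 16))   # 3.2 GB/s, as in the example above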
Each generation of SDRAM has a different prefetch buffer size: DDR SDRAM uses a 2n prefetch (two data words per internal access), DDR2 uses 4n, DDR3 and DDR4 use 8n, and DDR5 uses 16n.
Generations.
SDR.
Originally simply known as "SDRAM", single data rate SDRAM can accept one command and transfer one word of data per clock cycle. Chips are made with a variety of data bus sizes (most commonly 4, 8 or 16 bits), but chips are generally assembled into 168-pin DIMMs that read or write 64 (non-ECC) or 72 (ECC) bits at a time.
Use of the data bus is intricate and thus requires a complex DRAM controller circuit. This is because data written to the DRAM must be presented in the same cycle as the write command, but reads produce output 2 or 3 cycles after the read command. The DRAM controller must ensure that the data bus is never required for a read and a write at the same time.
Typical SDR SDRAM clock rates are 66, 100, and 133 MHz (periods of 15, 10, and 7.5 ns), respectively denoted PC66, PC100, and PC133. Clock rates up to 200 MHz were available. It operates at a voltage of 3.3 V.
This type of SDRAM is slower than the DDR variants, because only one word of data is transmitted per clock cycle (single data rate). But this type is also faster than its predecessors extended data out DRAM (EDO-RAM) and fast page mode DRAM (FPM-RAM) which took typically two or three clocks to transfer one word of data.
PC66.
PC66 refers to an internal removable computer memory standard defined by the JEDEC. PC66 is Synchronous DRAM operating at a clock frequency of 66.66 MHz, on a 64-bit bus, at a voltage of 3.3 V. PC66 is available in 168-pin DIMM and 144-pin SO-DIMM form factors. The theoretical bandwidth is 533 MB/s. (1 MB/s = one million bytes per second)
This standard was used by Intel Pentium and AMD K6-based PCs. It also features in the Beige Power Mac G3, early iBooks and PowerBook G3s. It is also used in many early Intel Celeron systems with a 66 MHz FSB. It was superseded by the PC100 and PC133 standards.
PC100.
PC100 is a standard for internal removable computer random-access memory, defined by the JEDEC. PC100 refers to Synchronous DRAM operating at a clock frequency of 100 MHz, on a 64-bit-wide bus, at a voltage of 3.3 V. PC100 is available in 168-pin DIMM and 144-pin SO-DIMM form factors. PC100 is backward compatible with PC66 and was superseded by the PC133 standard.
A module built out of 100 MHz SDRAM chips is not necessarily capable of operating at 100 MHz. The PC100 standard specifies the capabilities of the memory module as a whole. PC100 is used in many older computers; PCs around the late 1990s were the most common computers with PC100 memory.
PC133.
PC133 is a computer memory standard defined by the JEDEC. PC133 refers to SDR SDRAM operating at a clock frequency of 133 MHz, on a 64-bit-wide bus, at a voltage of 3.3 V. PC133 is available in 168-pin DIMM and 144-pin SO-DIMM form factors. PC133 is the fastest and final SDR SDRAM standard ever approved by the JEDEC, and delivers a bandwidth of 1.066 GB per second ([133.33 MHz * 64/8]=1.066 GB/s). (1 GB/s = one billion bytes per second) PC133 is backward compatible with PC100 and PC66.
DDR.
While the access latency of DRAM is fundamentally limited by the DRAM array, DRAM has very high potential bandwidth because each internal read is actually a row of many thousands of bits. To make more of this bandwidth available to users, a double data rate interface was developed. This uses the same commands, accepted once per cycle, but reads or writes two words of data per clock cycle. The DDR interface accomplishes this by reading and writing data on both the rising and falling edges of the clock signal. In addition, some minor changes to the SDR interface timing were made in hindsight, and the supply voltage was reduced from 3.3 to 2.5 V. As a result, DDR SDRAM is not backwards compatible with SDR SDRAM.
DDR SDRAM (sometimes called "DDR1" for greater clarity) doubles the minimum read or write unit; every access refers to at least two consecutive words.
Typical DDR SDRAM clock rates are 133, 166 and 200 MHz (7.5, 6, and 5 ns/cycle), generally described as DDR-266, DDR-333 and DDR-400 (3.75, 3, and 2.5 ns per beat). Corresponding 184-pin DIMMs are known as PC-2100, PC-2700 and PC-3200. Performance up to DDR-550 (PC-4400) is available.
DDR2.
DDR2 SDRAM is very similar to DDR SDRAM, but doubles the minimum read or write unit again, to four consecutive words. The bus protocol was also simplified to allow higher performance operation. (In particular, the "burst terminate" command is deleted.) This allows the bus rate of the SDRAM to be doubled without increasing the clock rate of internal RAM operations; instead, internal operations are performed in units four times as wide as SDRAM. Also, an extra bank address pin (BA2) was added to allow eight banks on large RAM chips.
Typical DDR2 SDRAM clock rates are 200, 266, 333 or 400 MHz (periods of 5, 3.75, 3 and 2.5 ns), generally described as DDR2-400, DDR2-533, DDR2-667 and DDR2-800 (periods of 2.5, 1.875, 1.5 and 1.25 ns). Corresponding 240-pin DIMMs are known as PC2-3200 through PC2-6400. DDR2 SDRAM is now available at a clock rate of 533 MHz generally described as DDR2-1066 and the corresponding DIMMs are known as PC2-8500 (also named PC2-8600 depending on the manufacturer). Performance up to DDR2-1250 (PC2-10000) is available.
Note that because internal operations are at 1/2 the clock rate, DDR2-400 memory (internal clock rate 100 MHz) has somewhat higher latency than DDR-400 (internal clock rate 200 MHz).
DDR3.
DDR3 continues the trend, doubling the minimum read or write unit to eight consecutive words. This allows another doubling of bandwidth and external bus rate without having to change the clock rate of internal operations, just the width. To maintain 800–1600 M transfers/s (both edges of a 400–800 MHz clock), the internal RAM array has to perform 100–200 M fetches per second.
Again, with every doubling, the downside is the increased latency. As with all DDR SDRAM generations, commands are still restricted to one clock edge and command latencies are given in terms of clock cycles, which are half the speed of the usually quoted transfer rate (a CAS latency of 8 with DDR3-800 is 8/(400 MHz) = 20 ns, exactly the same latency as CAS2 on PC100 SDR SDRAM).
DDR3 memory chips are being made commercially, and computer systems using them were available from the second half of 2007, with significant usage from 2008 onwards. Initial clock rates were 400 and 533 MHz, which are described as DDR3-800 and DDR3-1066 (PC3-6400 and PC3-8500 modules), but 667 and 800 MHz, described as DDR3-1333 and DDR3-1600 (PC3-10600 and PC3-12800 modules) are now common. Performance up to DDR3-2800 (PC3 22400 modules) are available.
DDR4.
DDR4 SDRAM is the successor to DDR3 SDRAM. It was revealed at the Intel Developer Forum in San Francisco in 2008, and was due to be released to market during 2011. The timing varied considerably during its development - it was originally expected to be released in 2012, and later (during 2010) expected to be released in 2015, before samples were announced in early 2011 and manufacturers began to announce that commercial production and release to market was anticipated in 2012. DDR4 reached mass market adoption around 2015, which is comparable with the approximately five years taken for DDR3 to achieve mass market transition over DDR2.
The DDR4 chips run at 1.2 V or less, compared to the 1.5 V of DDR3 chips, and have in excess of 2 billion data transfers per second. They were expected to be introduced at frequency rates of 2133 MHz, estimated to rise to a potential 4266 MHz and lowered voltage of 1.05 V by 2013.
DDR4 did "not" double the internal prefetch width again, but uses the same 8"n" prefetch as DDR3. Thus, it will be necessary to interleave reads from several banks to keep the data bus busy.
In February 2009, Samsung validated 40 nm DRAM chips, considered a "significant step" towards DDR4 development since, as of 2009, current DRAM chips were only beginning to migrate to a 50 nm process. In January 2011, Samsung announced the completion and release for testing of a 30 nm 2048 MB DDR4 DRAM module. It has a maximum bandwidth of 2.13 Gbit/s at 1.2 V, uses pseudo open drain technology and draws 40% less power than an equivalent DDR3 module.
DDR5.
In March 2017, JEDEC announced a DDR5 standard is under development, but provided no details except for the goals of doubling the bandwidth of DDR4, reducing power consumption, and publishing the standard in 2018. The standard was released on 14 July 2020.
Failed successors.
In addition to DDR, there were several other proposed memory technologies to succeed SDR SDRAM.
Rambus DRAM (RDRAM).
RDRAM was a proprietary technology that competed against DDR. Its relatively high price and disappointing performance (resulting from high latencies and a narrow 16-bit data channel versus DDR's 64 bit channel) caused it to lose the race to succeed SDR SDRAM.
Synchronous-link DRAM (SLDRAM).
SLDRAM boasted higher performance and competed against RDRAM. It was developed during the late 1990s by the SLDRAM Consortium. The SLDRAM Consortium consisted of about 20 major DRAM and computer industry manufacturers. (The SLDRAM Consortium became incorporated as SLDRAM Inc. and then changed its name to Advanced Memory International, Inc.) SLDRAM was an open standard and did not require licensing fees. The specifications called for a 64-bit bus running at a 200, 300 or 400 MHz clock frequency. This is achieved by all signals being on the same line and thereby avoiding the synchronization time of multiple lines. Like DDR SDRAM, SLDRAM uses a double-pumped bus, giving it an effective speed of 400, 600, or 800 MT/s. (1 MT/s = 1000^2 transfers per second)
SLDRAM used an 11-bit command bus (10 command bits CA9:0 plus one start-of-command FLAG line) to transmit 40-bit command packets on 4 consecutive edges of a differential command clock (CCLK/CCLK#). Unlike SDRAM, there were no per-chip select signals; each chip was assigned an ID when reset, and the command contained the ID of the chip that should process it. Data was transferred in 4- or 8-word bursts across an 18-bit (per chip) data bus, using one of two differential data clocks (DCLK0/DCLK0# and DCLK1/DCLK1#). Unlike standard SDRAM, the clock was generated by the data source (the SLDRAM chip in the case of a read operation) and transmitted in the same direction as the data, greatly reducing data skew. To avoid the need for a pause when the source of the DCLK changes, each command specified which DCLK pair it would use.
The basic read/write command consisted of (beginning with CA9 of the first word):
Individual devices had 8-bit IDs. The 9th bit of the ID sent in commands was used to address multiple devices. Any aligned power-of-2 sized group could be addressed. If the transmitted msbit was set, all least-significant bits up to and including the least-significant 0 bit of the transmitted address were ignored for "is this addressed to me?" purposes. (If the ID8 bit is actually considered less significant than ID0, the unicast address matching becomes a special case of this pattern.)
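To make the masking rule concrete, here is a small Python sketch of how such a 9-bit transmitted ID could be matched against an 8-bit device ID; the function name and the example value are illustrative assumptions, not taken from the SLDRAM specification:

def sldram_id_match(sent9: int, device_id8: int) -> bool:
    """Does a 9-bit transmitted ID address this device's 8-bit ID?

    If bit 8 of the transmitted value is clear, it is a plain unicast match
    on the low 8 bits.  If bit 8 is set, every bit up to and including the
    least-significant 0 of the transmitted value is ignored, so any aligned
    power-of-two group of IDs can be addressed.
    """
    if not (sent9 & 0x100):              # msbit clear: unicast
        return device_id8 == (sent9 & 0xFF)
    ignore_bits = 0                      # find the least-significant 0 bit
    while sent9 & (1 << ignore_bits):
        ignore_bits += 1
    ignore_bits += 1                     # ignore up to AND including that bit
    mask = ~((1 << ignore_bits) - 1) & 0xFF
    return (device_id8 & mask) == (sent9 & mask)

# 0x1F3 = 1 1111 0011b: the lowest 0 is bit 2, so bits 0..2 are ignored and
# the command addresses the aligned group of eight IDs 0xF0..0xF7.
print([hex(i) for i in range(256) if sldram_id_match(0x1F3, i)])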
A read/write command had the msbit clear:
A notable omission from the specification was per-byte write enables; it was designed for systems with caches and ECC memory, which always write in multiples of a cache line.
Additional commands (with CMD5 set) opened and closed rows without a data transfer, performed refresh operations, read or wrote configuration registers, and performed other maintenance operations. Most of these commands supported an additional 4-bit sub-ID (sent as 5 bits, using the same multiple-destination encoding as the primary ID) which could be used to distinguish devices that were assigned the same primary ID because they were connected in parallel and always read/written at the same time.
There were a number of 8-bit control registers and 32-bit status registers to control various device timing parameters.
Virtual channel memory (VCM) SDRAM.
VCM was a proprietary type of SDRAM that was designed by NEC, but released as an open standard with no licensing fees. It is pin-compatible with standard SDRAM, but the commands are different. The technology was a potential competitor of RDRAM because VCM was not nearly as expensive as RDRAM was. A Virtual Channel Memory (VCM) module is mechanically and electrically compatible with standard SDRAM, so support for both depends only on the capabilities of the memory controller. In the late 1990s, a number of PC northbridge chipsets (such as the popular VIA KX133 and KT133) included VCSDRAM support.
VCM inserts an SRAM cache of 16 "channel" buffers, each 1/4 row "segment" in size, between DRAM banks' sense amplifier rows and the data I/O pins. "Prefetch" and "restore" commands, unique to VCSDRAM, copy data between the DRAM's sense amplifier row and the channel buffers, while the equivalent of SDRAM's read and write commands specify a channel number to access. Reads and writes may thus be performed independent of the currently active state of the DRAM array, with the equivalent of four full DRAM rows being "open" for access at a time. This is an improvement over the two open rows possible in a standard two-bank SDRAM. (There is actually a 17th "dummy channel" used for some operations.)
To read from VCSDRAM, after the active command, a "prefetch" command is required to copy data from the sense amplifier array to the channel buffer. This command specifies a bank, two bits of column address (to select the segment of the row), and four bits of channel number. Once this is performed, the DRAM array may be precharged while read commands to the channel buffer continue. To write, first the data is written to a channel buffer (typically previously initialized using a Prefetch command), then a restore command, with the same parameters as the prefetch command, copies a segment of data from the channel to the sense amplifier array.
Unlike a normal SDRAM write, which must be performed to an active (open) row, the VCSDRAM bank must be precharged (closed) when the restore command is issued. An active command immediately after the restore command specifies the DRAM row and completes the write to the DRAM array. There is, in addition, a 17th "dummy channel" which allows writes to the currently open row. It may not be read from, but may be prefetched to, written to, and restored to the sense amplifier array.
Although normally a segment is restored to the same memory address as it was prefetched from, the channel buffers may also be used for very efficient copying or clearing of large, aligned memory blocks. (The use of quarter-row segments is driven by the fact that DRAM cells are narrower than SRAM cells. The SRAM bits are designed to be four DRAM bits wide, and are conveniently connected to one of the four DRAM bits they straddle.) Additional commands prefetch a pair of segments to a pair of channels, and an optional command combines prefetch, read, and precharge to reduce the overhead of random reads.
The above are the JEDEC-standardized commands. Earlier chips did not support the dummy channel or pair prefetch, and use a different encoding for precharge.
A 13-bit address bus is suitable for a device up to 128 Mbit. It has two banks, each containing 8,192 rows and 8,192 columns. Thus, row addresses are 13 bits, segment addresses are two bits, and eight column address bits are required to select one byte from the 2,048 bits (256 bytes) in a segment.
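The capacity arithmetic can be spot-checked with a few lines of Python (purely illustrative; the figures simply restate the paragraph above):

# 2 banks, 13-bit row address, 2-bit segment address, 2,048 bits per segment
banks, row_bits, seg_bits = 2, 13, 2
bits_per_segment = 2048                      # 1/4 of an 8,192-column row
total_bits = banks * (2 ** row_bits) * (2 ** seg_bits) * bits_per_segment
print(total_bits, total_bits // 2**20, "Mbit")   # 134217728 -> 128 Mbit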
Synchronous Graphics RAM (SGRAM).
Synchronous graphics RAM (SGRAM) is a specialized form of SDRAM for graphics adaptors. It is designed for graphics-related tasks such as texture memory and framebuffers, found on video cards. It adds functions such as bit masking (writing to a specified bit plane without affecting the others) and block write (filling a block of memory with a single colour). Unlike VRAM and WRAM, SGRAM is single-ported. However, it can open two memory pages at once, which simulates the dual-port nature of other video RAM technologies.
The earliest known SGRAM memory chips are 8 Mbit devices dating back to 1994: the Hitachi HM5283206, introduced in November 1994, and the NEC μPD481850, introduced in December 1994. The earliest known commercial device to use SGRAM is Sony's PlayStation (PS) video game console, starting with the Japanese SCPH-5000 model released in December 1995, using the NEC μPD481850 chip.
Graphics double data rate SDRAM (GDDR SDRAM).
Graphics double data rate SDRAM (GDDR SDRAM) is a type of specialized DDR SDRAM designed to be used as the main memory of graphics processing units (GPUs). GDDR SDRAM is distinct from commodity types of DDR SDRAM such as DDR3, although they share some core technologies. Their primary characteristics are higher clock frequencies for both the DRAM core and I/O interface, which provides greater memory bandwidth for GPUs. As of 2023, there are eight successive generations of GDDR: GDDR2, GDDR3, GDDR4, GDDR5, GDDR5X, GDDR6, GDDR6X and GDDR6W.
GDDR was initially known as DDR SGRAM. It was commercially introduced as a 16Mbit memory chip by Samsung Electronics in 1998.
High Bandwidth Memory (HBM).
High Bandwidth Memory (HBM) is a high-performance RAM interface for 3D-stacked SDRAM from Samsung, AMD and SK Hynix. It is designed to be used in conjunction with high-performance graphics accelerators and network devices. The first HBM memory chip was produced by SK Hynix in 2013.
Timeline.
SDRAM.
SGRAM and HBM.
References.
| [
{
"math_id": 0,
"text": "10^{6}"
}
] | https://en.wikipedia.org/wiki?curid=93807 |
93817 | Data type | Attribute of data
In computer science and computer programming, a data type (or simply type) is a collection or grouping of data values, usually specified by a set of possible values, a set of allowed operations on these values, and/or a representation of these values as machine types. A data type specification in a program constrains the possible values that an expression, such as a variable or a function call, might take. On literal data, it tells the compiler or interpreter how the programmer intends to use the data. Most programming languages support basic data types of integer numbers (of varying sizes), floating-point numbers (which approximate real numbers), characters and Booleans.
Concept.
A data type may be specified for many reasons: similarity, convenience, or to focus the attention. It is frequently a matter of good organization
that aids the understanding of complex definitions. Almost all programming languages explicitly include the notion of data type, though the possible data types are often restricted by considerations of simplicity, computability, or regularity. An explicit data type declaration typically allows the compiler to choose an efficient machine representation, but the conceptual organization offered by data types should not be discounted.
Different languages may use different data types or similar types with different semantics. For example, in the Python programming language, codice_0 represents an arbitrary-precision integer which has the traditional numeric operations such as addition, subtraction, and multiplication. However, in the Java programming language, the type codice_0 represents the set of 32-bit integers ranging in value from −2,147,483,648 to 2,147,483,647, with arithmetic operations that wrap on overflow. In Rust this 32-bit integer type is denoted codice_2 and panics on overflow in debug mode.
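A short Python sketch can illustrate the contrast drawn above between an arbitrary-precision integer and a fixed-width 32-bit type; the wrap-around here is simulated with modular arithmetic rather than taken from any particular compiler:

def wrap_i32(x: int) -> int:
    """Reduce x to a signed 32-bit value, the way a wrapping int overflows."""
    return (x + 2**31) % 2**32 - 2**31

big = 2_147_483_647          # the 32-bit maximum mentioned above
print(big + 1)               # Python: 2147483648 (arbitrary precision, no overflow)
print(wrap_i32(big + 1))     # simulated 32-bit wrap-around: -2147483648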
Most programming languages also allow the programmer to define additional data types, usually by combining multiple elements of other types and defining the valid operations of the new data type. For example, a programmer might create a new data type named "complex number" that would include real and imaginary parts, or a color data type represented by three bytes denoting the amounts each of red, green, and blue, and a string representing the color's name.
Data types are used within type systems, which offer various ways of defining, implementing, and using them. In a type system, a data type represents a constraint placed upon the interpretation of data, describing representation, interpretation and structure of values or objects stored in computer memory. The type system uses data type information to check correctness of computer programs that access or manipulate the data. A compiler may use the static type of a value to optimize the storage it needs and the choice of algorithms for operations on the value. In many C compilers the float data type, for example, is represented in 32 bits, in accord with the IEEE specification for single-precision floating point numbers. They will thus use floating-point-specific microprocessor operations on those values (floating-point addition, multiplication, etc.).
Most data types in statistics have comparable types in computer programming, and vice versa, as shown in the following table:
Definition.
Parnas, Shore and Weiss (1976) identified five definitions of a "type" that were used—sometimes implicitly—in the literature:
The definition in terms of a representation was often done in imperative languages such as ALGOL and Pascal, while the definition in terms of a value space and behaviour was used in higher-level languages such as Simula and CLU. Types including behavior align more closely with object-oriented models, whereas a structured programming model would tend to not include code, and are called plain old data structures.
Classification.
Data types may be categorized according to several factors:
The terminology varies - in the literature, primitive, built-in, basic, atomic, and fundamental may be used interchangeably.
Examples.
Machine data types.
All data in computers based on digital electronics is represented as bits (alternatives 0 and 1) on the lowest level. The smallest addressable unit of data is usually a group of bits called a byte (usually an octet, which is 8 bits). The unit processed by machine code instructions is called a word (as of 2011, typically 32 or 64 bits).
Machine data types "expose" or make available fine-grained control over hardware, but this can also expose implementation details that make code less portable. Hence machine types are mainly used in systems programming or low-level programming languages. In higher-level languages most data types are "abstracted" in that they do not have a language-defined machine representation. The C programming language, for instance, supplies types such as Booleans, integers, floating-point numbers, etc., but the precise bit representations of these types are implementation-defined. The only C type with a precise machine representation is the codice_3 type that represents a byte.
Boolean type.
The Boolean type represents the values true and false. Although only two values are possible, they are more often represented as a word rather than as a single bit, as it requires more machine instructions to store and retrieve an individual bit. Many programming languages do not have an explicit Boolean type, instead using an integer type and interpreting (for instance) 0 as false and other values as true.
At the machine level, Boolean values are encoded as integers: 0 represents logical false, while true is represented by any non-zero value, conventionally 1.
Numeric types.
Almost all programming languages supply one or more integer data types. They may either supply a small number of predefined subtypes restricted to certain ranges (such as codice_4 and codice_5 and their corresponding codice_6 variants in C/C++); or allow users to freely define subranges such as 1..12 (e.g. Pascal/Ada). If a corresponding native type does not exist on the target platform, the compiler will break them down into code using types that do exist. For instance, if a 32-bit integer is requested on a 16 bit platform, the compiler will tacitly treat it as an array of two 16 bit integers.
Floating point data types represent certain fractional values (rational numbers, mathematically). Although they have predefined limits on both their maximum values and their precision, they are sometimes misleadingly called reals (evocative of mathematical real numbers). They are typically stored internally in the form a × 2^b (where a and b are integers), but displayed in familiar decimal form.
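As a hedged illustration of the a × 2^b internal form described above, the following Python snippet decomposes a float into a significand and a binary exponent (Python's frexp normalises the significand to [0.5, 1) rather than an integer, which is an equivalent convention):

import math

m, e = math.frexp(0.1)       # 0.1 == m * 2**e with 0.5 <= m < 1
print(m, e)                  # 0.8 -3
print(math.ldexp(m, e))      # 0.1, reconstructed exactly

n, d = (0.1).as_integer_ratio()
print(n, d)                  # integer a over a power of two (here 2**55)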
Fixed point data types are convenient for representing monetary values. They are often implemented internally as integers, leading to predefined limits.
For independence from architecture details, a Bignum or arbitrary precision codice_7 type might be supplied. This represents an integer or rational to a precision limited only by the available memory and computational resources on the system. Bignum implementations of arithmetic operations on machine-sized values are significantly slower than the corresponding machine operations.
Enumerations.
The enumerated type has distinct values, which can be compared and assigned, but which do not necessarily have any particular concrete representation in the computer's memory; compilers and interpreters can represent them arbitrarily. For example, the four suits in a deck of playing cards may be four enumerators named "CLUB", "DIAMOND", "HEART", "SPADE", belonging to an enumerated type named "suit". If a variable "V" is declared having "suit" as its data type, one can assign any of those four values to it. Some implementations allow programmers to assign integer values to the enumeration values, or even treat them as type-equivalent to integers.
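A minimal sketch of the playing-card example in Python's enum module (the names and the integer values assigned here are illustrative choices):

from enum import Enum

class Suit(Enum):
    CLUB = 1
    DIAMOND = 2
    HEART = 3
    SPADE = 4

v = Suit.HEART                 # a variable whose data type is "suit"
print(v, v.value)              # Suit.HEART 3
print(Suit.HEART == Suit(3))   # some implementations expose the underlying integer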
String and text types.
Strings are a sequence of characters used to store words or plain text, most often textual markup languages representing formatted text. Characters may be a letter of some alphabet, a digit, a blank space, a punctuation mark, etc. Characters are drawn from a character set such as ASCII. Character and string types can have different subtypes according to the character encoding. The original 7-bit wide ASCII was found to be limited, and superseded by 8, 16 and 32-bit sets, which can encode a wide variety of non-Latin alphabets (such as Hebrew and Chinese) and other symbols. Strings may be of either variable length or fixed length, and some programming languages have both types. They may also be subtyped by their maximum size.
Since most character sets include the digits, it is possible to have a numeric string, such as codice_8. These numeric strings are usually considered distinct from numeric values such as codice_9, although some languages automatically convert between them.
Union types.
A union type definition will specify which of a number of permitted subtypes may be stored in its instances, e.g. "float or long integer". In contrast with a record, which could be defined to contain a float "and" an integer, a union may only contain one subtype at a time.
A tagged union (also called a variant, variant record, discriminated union, or disjoint union) contains an additional field indicating its current type for enhanced type safety.
Algebraic data types.
An algebraic data type (ADT) is a possibly recursive sum type of product types. A value of an ADT consists of a constructor tag together with zero or more field values, with the number and type of the field values fixed by the constructor. The set of all possible values of an ADT is the set-theoretic disjoint union (sum), of the sets of all possible values of its variants (product of fields). Values of algebraic types are analyzed with pattern matching, which identifies a value's constructor and extracts the fields it contains.
If there is only one constructor, then the ADT corresponds to a product type similar to a tuple or record. A constructor with no fields corresponds to the empty product (unit type). If all constructors have no fields then the ADT corresponds to an enumerated type.
One common ADT is the option type, defined in Haskell as data Maybe a = Nothing | Just a.
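For readers more comfortable with Python than Haskell, the following sketch (Python 3.10+, names chosen here for illustration) builds the same option type as a tagged union of two constructors and analyses it with structural pattern matching:

from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")

@dataclass
class Just(Generic[T]):        # constructor with one field
    value: T

@dataclass
class Nothing:                 # constructor with no fields
    pass

Maybe = Union[Just[T], Nothing]

def describe(m: "Maybe[int]") -> str:
    match m:                   # pattern matching identifies the constructor
        case Just(value=v):
            return f"got {v}"
        case Nothing():
            return "empty"

print(describe(Just(42)), describe(Nothing()))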
Data structures.
Some types are very useful for storing and retrieving data and are called data structures. Common data structures include:
Abstract data types.
An abstract data type is a data type that does not specify the concrete representation of the data. Instead, a formal "specification" based on the data type's operations is used to describe it. Any "implementation" of a specification must fulfill the rules given. For example, a stack has push/pop operations that follow a Last-In-First-Out rule, and can be concretely implemented using either a list or an array. Abstract data types are used in formal semantics and program verification and, less strictly, in design.
Pointers and references.
The main non-composite, derived type is the pointer, a data type whose value refers directly to (or "points to") another value stored elsewhere in the computer memory using its address. It is a primitive kind of reference. (In everyday terms, a page number in a book could be considered a piece of data that refers to another one). Pointers are often stored in a format similar to an integer; however, attempting to dereference or "look up" a pointer whose value was never a valid memory address would cause a program to crash. To ameliorate this potential problem, pointers are considered a separate type to the type of data they point to, even if the underlying representation is the same.
Function types.
Functional programming languages treat functions as a distinct datatype and allow values of this type to be stored in variables and passed to functions. Some multi-paradigm languages such as JavaScript also have mechanisms for treating functions as data. Most contemporary type systems go beyond JavaScript's simple type "function object" and have a family of function types differentiated by argument and return types, such as the type codice_10 denoting functions taking an integer and returning a Boolean. In C, a function is not a first-class data type but function pointers can be manipulated by the program. Java and C++ originally did not have function values but have added them in C++11 and Java 8.
Type constructors.
A type constructor builds new types from old ones, and can be thought of as an operator taking zero or more types as arguments and producing a type. Product types, function types, power types and list types can be made into type constructors.
Quantified types.
Universally-quantified and existentially-quantified types are based on predicate logic. Universal quantification is written as formula_0 or codice_11 and is the intersection over all types codice_12 of the body codice_13, i.e. the value is of type codice_13 for every codice_12. Existential quantification written as formula_1 or codice_16 and is the union over all types codice_12 of the body codice_13, i.e. the value is of type codice_13 for some codice_12.
In Haskell, universal quantification is commonly used, but existential types must be encoded by transforming codice_21 to codice_22 or a similar type.
Refinement types.
A refinement type is a type endowed with a predicate which is assumed to hold for any element of the refined type. For instance, the type of natural numbers greater than 5 may be written as formula_2
Dependent types.
A dependent type is a type whose definition depends on a value. Two common examples of dependent types are dependent functions and dependent pairs. The return type of a dependent function may depend on the value (not just type) of one of its arguments. A dependent pair may have a second value of which the type depends on the first value.
Intersection types.
An intersection type is a type containing those values that are members of two specified types. For example, in Java the class Boolean implements both the Serializable and the Comparable interfaces. Therefore, an object of type Boolean is a member of the type Serializable & Comparable. Considering types as sets of values, the intersection type formula_3 is the set-theoretic intersection of formula_4 and formula_5. It is also possible to define a dependent intersection type, denoted formula_6, where the type formula_5 may depend on the term variable formula_7.
Meta types.
Some programming languages represent the type information as data, enabling type introspection and reflection. In contrast, higher order type systems, while allowing types to be constructed from other types and passed to functions as values, typically avoid basing computational decisions on them.
Convenience types.
For convenience, high-level languages and databases may supply ready-made "real world" data types, for instance times, dates, and monetary values (currency). These may be built-in to the language or implemented as composite types in a library.
See also.
References.
| [
{
"math_id": 0,
"text": "\\forall x.f(x)"
},
{
"math_id": 1,
"text": "\\exists x.f(x)"
},
{
"math_id": 2,
"text": "\\{n\\in \\mathbb {N} \\,|\\,n>5\\}"
},
{
"math_id": 3,
"text": "\\sigma \\cap \\tau"
},
{
"math_id": 4,
"text": "\\sigma"
},
{
"math_id": 5,
"text": "\\tau"
},
{
"math_id": 6,
"text": "(x : \\sigma) \\cap \\tau"
},
{
"math_id": 7,
"text": "x"
}
] | https://en.wikipedia.org/wiki?curid=93817 |
9383513 | Pauli equation | Quantum mechanical equation of motion of charged particles in magnetic field
In quantum mechanics, the Pauli equation or Schrödinger–Pauli equation is the formulation of the Schrödinger equation for spin-1/2 particles, which takes into account the interaction of the particle's spin with an external electromagnetic field. It is the non-relativistic limit of the Dirac equation and can be used where particles are moving at speeds much less than the speed of light, so that relativistic effects can be neglected. It was formulated by Wolfgang Pauli in 1927. In its linearized form it is known as the Lévy-Leblond equation.
Equation.
For a particle of mass formula_0 and electric charge formula_1, in an electromagnetic field described by the magnetic vector potential formula_2 and the electric scalar potential formula_3, the Pauli equation reads:
Pauli equation "(general)"
formula_4
Here formula_5 are the Pauli operators collected into a vector for convenience, and formula_6 is the momentum operator in position representation. The state of the system, formula_7 (written in Dirac notation), can be considered as a two-component spinor wavefunction, or a column vector (after choice of basis):
formula_8.
The Hamiltonian operator is a 2 × 2 matrix because of the Pauli operators.
formula_9
Substitution into the Schrödinger equation gives the Pauli equation. This Hamiltonian is similar to the classical Hamiltonian for a charged particle interacting with an electromagnetic field. See Lorentz force for details of this classical case. The kinetic energy term for a free particle in the absence of an electromagnetic field is just formula_10 where formula_11 is the "kinetic" momentum, while in the presence of an electromagnetic field it involves the minimal coupling formula_12, where now formula_13 is the kinetic momentum and formula_11 is the canonical momentum.
The Pauli operators can be removed from the kinetic energy term using the Pauli vector identity:
formula_14
Note that unlike a vector, the differential operator formula_15 has non-zero cross product with itself. This can be seen by considering the cross product applied to a scalar function formula_16:
formula_17
where formula_18 is the magnetic field.
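For ordinary commuting vectors the identity can be spot-checked numerically; the short NumPy sketch below (an illustration, not part of the derivation) verifies it for random real a and b. For the operator case formula_15 the extra non-commuting piece is exactly what produces the magnetic-field term derived above.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([sx, sy, sz])

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)

# (sigma.a)(sigma.b) versus (a.b) I + i sigma.(a x b)
lhs = np.einsum("i,ijk->jk", a, sigma) @ np.einsum("i,ijk->jk", b, sigma)
rhs = np.dot(a, b) * np.eye(2) + 1j * np.einsum("i,ijk->jk", np.cross(a, b), sigma)
print(np.allclose(lhs, rhs))   # True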
For the full Pauli equation, one then obtains
Pauli equation "(standard form)"
formula_19
for which only a few analytic results are known, e.g., in the context of Landau quantization with homogenous magnetic fields or for an idealized, Coulomb-like, inhomogeneous magnetic field.
Weak magnetic fields.
For the case where the magnetic field is constant and homogeneous, one may expand formula_20 using the symmetric gauge formula_21, where formula_22 is the position operator and A is now an operator. We obtain
formula_23
where formula_24 is the particle angular momentum operator and we neglected terms in the magnetic field squared formula_25. Therefore, we obtain
Pauli equation "(weak magnetic fields)"
formula_26
where formula_27 is the spin of the particle. The factor 2 in front of the spin is known as the Dirac "g"-factor. The term in formula_28 is of the form formula_29, which is the usual interaction between a magnetic moment formula_30 and a magnetic field, as in the Zeeman effect.
For an electron of charge formula_31 in an isotropic constant magnetic field, one can further reduce the equation using the total angular momentum formula_32 and Wigner-Eckart theorem. Thus we find
formula_33
where formula_34 is the Bohr magneton and formula_35 is the magnetic quantum number related to formula_36. The term formula_37 is known as the Landé g-factor, and is given here by
formula_38
where formula_39 is the orbital quantum number related to formula_40 and formula_41 is the total orbital quantum number related to formula_42.
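The quoted expression formula_38 is easy to tabulate; the following Python sketch (function name and the sample levels are illustrative) evaluates it exactly with rational arithmetic:

from fractions import Fraction

def lande_g(l: int, j: Fraction) -> Fraction:
    """g_J = 3/2 + (s(s+1) - l(l+1)) / (2 j (j+1)) with s = 1/2."""
    s = Fraction(1, 2)
    return Fraction(3, 2) + (s * (s + 1) - l * (l + 1)) / (2 * j * (j + 1))

print(lande_g(0, Fraction(1, 2)))   # s_1/2 level: 2
print(lande_g(1, Fraction(1, 2)))   # p_1/2 level: 2/3
print(lande_g(1, Fraction(3, 2)))   # p_3/2 level: 4/3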
From Dirac equation.
The Pauli equation can be inferred from the non-relativistic limit of the Dirac equation, which is the relativistic quantum equation of motion for spin-1/2 particles.
Derivation.
Dirac equation can be written as:
formula_43
where formula_44 and formula_45 are two-component spinor, forming a bispinor.
Using the following ansatz:
formula_46
with two new spinors formula_47, the equation becomes
formula_48
In the non-relativistic limit, formula_49 and the kinetic and electrostatic energies are small with respect to the rest energy formula_50, leading to the Lévy-Leblond equation. Thus formula_51
Inserted in the upper component of Dirac equation, we find Pauli equation (general form):
formula_52
From a Foldy–Wouthuysen transformation.
The rigorous derivation of the Pauli equation follows from Dirac equation in an external field and performing a Foldy–Wouthuysen transformation considering terms up to order formula_53. Similarly, higher order corrections to the Pauli equation can be determined giving rise to spin-orbit and Darwin interaction terms, when expanding up to order formula_54 instead.
Pauli coupling.
Pauli's equation is derived by requiring minimal coupling, which provides a "g"-factor "g"=2. Most elementary particles have anomalous "g"-factors, different from 2. In the domain of relativistic quantum field theory, one defines a non-minimal coupling, sometimes called Pauli coupling, in order to add an anomalous factor
formula_55
where formula_56 is the four-momentum operator, formula_57 is the electromagnetic four-potential, formula_58 is proportional to the anomalous magnetic dipole moment, formula_59 is the electromagnetic tensor, and formula_60 are the Lorentzian spin matrices and the commutator of the gamma matrices formula_61. In the context of non-relativistic quantum mechanics, instead of working with the Schrödinger equation, Pauli coupling is equivalent to using the Pauli equation (or postulating Zeeman energy) for an arbitrary "g"-factor.
Footnotes.
References.
| [
{
"math_id": 0,
"text": "m"
},
{
"math_id": 1,
"text": "q"
},
{
"math_id": 2,
"text": "\\mathbf{A}"
},
{
"math_id": 3,
"text": "\\phi"
},
{
"math_id": 4,
"text": "\\left[ \\frac{1}{2m}(\\boldsymbol{\\sigma}\\cdot(\\mathbf{\\hat{p}} - q \\mathbf{A}))^2 + q \\phi \\right] |\\psi\\rangle = i \\hbar \\frac{\\partial}{\\partial t} |\\psi\\rangle "
},
{
"math_id": 5,
"text": "\\boldsymbol{\\sigma} = (\\sigma_x, \\sigma_y, \\sigma_z)"
},
{
"math_id": 6,
"text": "\\mathbf{\\hat{p}} = -i\\hbar \\nabla"
},
{
"math_id": 7,
"text": "|\\psi\\rangle"
},
{
"math_id": 8,
"text": " |\\psi\\rangle = \\psi_+ |\\mathord\\uparrow\\rangle + \\psi_-|\\mathord\\downarrow\\rangle \\,\\stackrel{\\cdot}{=}\\, \\begin{bmatrix} \n\\psi_+ \\\\\n\\psi_-\n\\end{bmatrix}"
},
{
"math_id": 9,
"text": "\\hat{H} = \\frac{1}{2m} \\left[\\boldsymbol{\\sigma}\\cdot(\\mathbf{\\hat{p}} - q \\mathbf{A}) \\right]^2 + q \\phi"
},
{
"math_id": 10,
"text": "\\frac{\\mathbf{p}^2}{2m}"
},
{
"math_id": 11,
"text": "\\mathbf{p}"
},
{
"math_id": 12,
"text": "\\mathbf{\\Pi} = \\mathbf{p} - q\\mathbf{A}"
},
{
"math_id": 13,
"text": "\\mathbf{\\Pi}"
},
{
"math_id": 14,
"text": "(\\boldsymbol{\\sigma}\\cdot \\mathbf{a})(\\boldsymbol{\\sigma}\\cdot \\mathbf{b}) = \\mathbf{a}\\cdot\\mathbf{b} + i\\boldsymbol{\\sigma}\\cdot \\left(\\mathbf{a} \\times \\mathbf{b}\\right)"
},
{
"math_id": 15,
"text": "\\mathbf{\\hat{p}} - q\\mathbf{A} = -i \\hbar \\nabla - q \\mathbf{A}"
},
{
"math_id": 16,
"text": "\\psi"
},
{
"math_id": 17,
"text": "\\left[\\left(\\mathbf{\\hat{p}} - q\\mathbf{A}\\right) \\times \\left(\\mathbf{\\hat{p}} - q\\mathbf{A}\\right)\\right]\\psi = -q \\left[\\mathbf{\\hat{p}} \\times \\left(\\mathbf{A}\\psi\\right) + \\mathbf{A} \\times \\left(\\mathbf{\\hat{p}}\\psi\\right)\\right] = i q \\hbar \\left[\\nabla \\times \\left(\\mathbf{A}\\psi\\right) + \\mathbf{A} \\times \\left(\\nabla\\psi\\right)\\right] = i q \\hbar \\left[\\psi\\left(\\nabla \\times \\mathbf{A}\\right) - \\mathbf{A} \\times \\left(\\nabla\\psi\\right) + \\mathbf{A} \\times \\left(\\nabla\\psi\\right)\\right] = i q \\hbar \\mathbf{B} \\psi"
},
{
"math_id": 18,
"text": "\\mathbf{B} = \\nabla \\times \\mathbf{A}"
},
{
"math_id": 19,
"text": "\\hat{H} |\\psi\\rangle = \\left[\\frac{1}{2m}\\left[\\left(\\mathbf{\\hat{p}} - q \\mathbf{A}\\right)^2 - q \\hbar \\boldsymbol{\\sigma}\\cdot \\mathbf{B}\\right] + q \\phi\\right]|\\psi\\rangle = i \\hbar \\frac{\\partial}{\\partial t} |\\psi\\rangle"
},
{
"math_id": 20,
"text": "(\\mathbf{\\hat{p}}-q\\mathbf{A})^2"
},
{
"math_id": 21,
"text": "\\mathbf{\\hat{A}}=\\frac{1}{2}\\mathbf{B}\\times\\mathbf{\\hat{r}}"
},
{
"math_id": 22,
"text": "\\mathbf{r}"
},
{
"math_id": 23,
"text": "(\\mathbf \\hat{p}-q \\mathbf \\hat{A})^2 = |\\mathbf{\\hat{p}}|^{2} - q(\\mathbf{\\hat{r}}\\times\\mathbf \\hat{p})\\cdot \\mathbf{B} +\\frac{1}{4}q^2\\left(|\\mathbf{B}|^2|\\mathbf{\\hat{r}}|^2-|\\mathbf{B}\\cdot\\mathbf{\\hat{r}}|^2\\right) \\approx \\mathbf{\\hat{p}}^{2} - q\\mathbf \\hat{L}\\cdot\\mathbf B\\,, "
},
{
"math_id": 24,
"text": "\\mathbf{\\hat{L}}"
},
{
"math_id": 25,
"text": "B^2"
},
{
"math_id": 26,
"text": " \\left[\\frac{1}{2m}\\left[|\\mathbf{\\hat{p}}|^2 - q (\\mathbf{\\hat{L}}+2\\mathbf{\\hat{S}})\\cdot\\mathbf{B}\\right] + q \\phi\\right]|\\psi\\rangle = i \\hbar \\frac{\\partial}{\\partial t} |\\psi\\rangle"
},
{
"math_id": 27,
"text": "\\mathbf{S}=\\hbar\\boldsymbol{\\sigma}/2"
},
{
"math_id": 28,
"text": "\\mathbf{B}"
},
{
"math_id": 29,
"text": "-\\boldsymbol{\\mu}\\cdot\\mathbf{B}"
},
{
"math_id": 30,
"text": "\\boldsymbol{\\mu}"
},
{
"math_id": 31,
"text": "-e"
},
{
"math_id": 32,
"text": "\\mathbf{J}=\\mathbf{L}+\\mathbf{S}"
},
{
"math_id": 33,
"text": " \\left[\\frac{|\\mathbf{p}|^2}{2m} + \\mu_{\\rm B} g_J m_j|\\mathbf{B}| - e \\phi\\right]|\\psi\\rangle = i \\hbar \\frac{\\partial}{\\partial t} |\\psi\\rangle"
},
{
"math_id": 34,
"text": "\\mu_{\\rm B}=\\frac{e\\hbar }{2m}"
},
{
"math_id": 35,
"text": "m_j"
},
{
"math_id": 36,
"text": "\\mathbf{J}"
},
{
"math_id": 37,
"text": "g_J"
},
{
"math_id": 38,
"text": "g_J = \\frac{3}{2}+\\frac{\\frac{3}{4}-\\ell(\\ell+1)}{2j(j+1)},"
},
{
"math_id": 39,
"text": "\\ell"
},
{
"math_id": 40,
"text": "L^2"
},
{
"math_id": 41,
"text": "j"
},
{
"math_id": 42,
"text": "J^2"
},
{
"math_id": 43,
"text": "i \\hbar\\, \\partial_t \\begin{pmatrix} \\psi_1 \\\\ \\psi_2\\end{pmatrix} = c \\, \\begin{pmatrix} \\boldsymbol{ \\sigma}\\cdot \\boldsymbol \\Pi \\,\\psi_2 \\\\ \\boldsymbol{\\sigma}\\cdot \\boldsymbol \\Pi \\,\\psi_1\\end{pmatrix} + q\\, \\phi \\, \\begin{pmatrix} \\psi_1 \\\\ \\psi_2\\end{pmatrix} + mc^2\\, \\begin{pmatrix} \\psi_1 \\\\ -\\psi_2\\end{pmatrix} ,\n"
},
{
"math_id": 44,
"text": "\\partial_t=\\frac{\\partial}{\\partial t}"
},
{
"math_id": 45,
"text": "\\psi_1,\\psi_2"
},
{
"math_id": 46,
"text": "\\begin{pmatrix} \\psi_1 \\\\ \\psi_2 \\end{pmatrix} = e^{- i \\tfrac{mc^2t}{\\hbar}} \\begin{pmatrix} \\psi \\\\ \\chi \\end{pmatrix} ,"
},
{
"math_id": 47,
"text": "\\psi,\\chi"
},
{
"math_id": 48,
"text": "\n i \\hbar \\partial_t \\begin{pmatrix} \\psi \\\\ \\chi\\end{pmatrix} = c \\, \\begin{pmatrix} \\boldsymbol{ \\sigma}\\cdot \\boldsymbol \\Pi \\,\\chi\\\\ \\boldsymbol{\\sigma}\\cdot \\boldsymbol \\Pi \\,\\psi\\end{pmatrix} +q\\, \\phi \\, \\begin{pmatrix} \\psi\\\\ \\chi \\end{pmatrix} + \\begin{pmatrix} 0 \\\\ -2\\,mc^2\\, \\chi \\end{pmatrix} .\n"
},
{
"math_id": 49,
"text": "\\partial_t \\chi"
},
{
"math_id": 50,
"text": "mc^2"
},
{
"math_id": 51,
"text": "\\chi \\approx \\frac{\\boldsymbol \\sigma \\cdot \\boldsymbol{\\Pi}\\,\\psi}{2\\,mc}\\,."
},
{
"math_id": 52,
"text": "i \\hbar\\, \\partial_t \\, \\psi= \\left[\\frac{(\\boldsymbol \\sigma \\cdot \\boldsymbol \\Pi)^2}{2\\,m}\n+q\\, \\phi\\right] \\psi."
},
{
"math_id": 53,
"text": "\\mathcal{O}(1/mc)"
},
{
"math_id": 54,
"text": "\\mathcal{O}(1/(mc)^2)"
},
{
"math_id": 55,
"text": "\\gamma^{\\mu}p_\\mu\\to \\gamma^{\\mu}p_\\mu-q\\gamma^{\\mu}A_\\mu +a\\sigma_{\\mu\\nu}F^{\\mu\\nu}"
},
{
"math_id": 56,
"text": "p_\\mu"
},
{
"math_id": 57,
"text": "A_\\mu"
},
{
"math_id": 58,
"text": "a"
},
{
"math_id": 59,
"text": "F^{\\mu\\nu}=\\partial^{\\mu}A^{\\nu}-\\partial^{\\nu}A^{\\mu}"
},
{
"math_id": 60,
"text": "\\sigma_{\\mu\\nu}=\\frac{i}{2}[\\gamma_{\\mu},\\gamma_{\\nu}]"
},
{
"math_id": 61,
"text": "\\gamma^{\\mu}"
}
] | https://en.wikipedia.org/wiki?curid=9383513 |
9384714 | Basal area | Basal area is the cross-sectional area of trees at breast height (1.3m or 4.5 ft above ground). It is a common way to describe stand density. In forest management, basal area usually refers to merchantable timber and is given on a per hectare or per acre basis. If one cut down all the merchantable trees on an acre at off the ground and measured the square inches on the top of each stump (πr*r), added them all together and divided by square feet (144 sq inches per square foot), that would be the basal area on that acre. In forest ecology, basal area is used as a relatively easily-measured surrogate of total forest biomass and structural complexity, and change in basal area over time is an important indicator of forest recovery during succession
Estimation from diameter at breast height.
The basal area (BA) of a tree can be estimated from its diameter at breast height (DBH), the diameter of the trunk as measured 1.3m (4.5 ft) above the ground. DBH is converted to BA based on the formula for the area of a circle:
formula_0
If formula_1 was measured in cm, formula_2 will be in cm2. To convert to m2, divide by 10,000:
formula_3
If formula_1 is in inches, divide by 144 to convert to ft2:
formula_4
The formula for BA in ft2 may also be simplified as:
formula_5 in English system
formula_6 in Metric system
The basal area of a forest can be found by adding the basal areas (as calculated above) of all of the trees in an area and dividing by the area of land in which the trees were measured. Basal area is generally calculated for a plot and then scaled to m²/ha or ft²/acre to compare forest productivity and growth rate among multiple sites.
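A minimal Python sketch of the single-stem conversions above (the function names are illustrative):

import math

def ba_m2(dbh_cm: float) -> float:
    """Basal area in m^2 from DBH in cm (equivalent to 0.00007854 * DBH^2)."""
    return math.pi * (dbh_cm / 2) ** 2 / 10_000

def ba_ft2(dbh_in: float) -> float:
    """Basal area in ft^2 from DBH in inches (equivalent to 0.005454 * DBH^2)."""
    return math.pi * (dbh_in / 2) ** 2 / 144

print(round(ba_m2(30), 4))    # a 30 cm DBH stem: ~0.0707 m^2
print(round(ba_ft2(12), 4))   # a 12 in DBH stem: ~0.7854 ft^2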
Estimation using a wedge prism.
A wedge prism can be used to quickly estimate basal area per hectare. To find basal area using this method, simply multiply your BAF (Basal Area Factor) by the number of "in" trees in your variable radius plot. The BAF will vary based on the prism used, common BAFs include 5/8/10, and all "in" trees are those trees, when viewed through your prism from plot centre, that appear to be in-line with the standing tree on the outside of the prism.
Worked example.
Suppose you carried out a survey using a variable radius plot with angle count sampling (wedge prism) and you selected a Basal Area Factor (BAF) of 4. If your first tree had a diameter at breast height (DBH) of 14cm, then the standard way of calculating how much of 1ha was covered by tree area (scaling up from that tree to the hectare) would be:
(BAF / ((DBH + 0.5)² × π/4)) × 10,000
In this case this works out to about 242, meaning the sampled tree is taken as representative of roughly 242 similar, unmeasured trees per hectare (with each "in" tree contributing the BAF, here 4 m², of basal area per hectare).
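A small Python sketch of the same worked example, under the assumptions stated above (BAF of 4, DBH of 14 cm, and the example's +0.5 cm adjustment):

import math

def stems_per_ha(baf_m2_per_ha: float, dbh_cm: float) -> float:
    """Per-hectare expansion factor for one 'in' tree in angle-count sampling."""
    tree_ba_cm2 = (dbh_cm + 0.5) ** 2 * math.pi / 4   # stem basal area in cm^2
    return baf_m2_per_ha / tree_ba_cm2 * 10_000       # cm^2 -> m^2 conversion

print(round(stems_per_ha(4, 14), 1))   # ~242.2 trees per hectare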
Fixed area plot.
It would also be possible to survey the trees in a Fixed Area Plot (FAP), also called a Fixed Radius Plot. If, for example, the plot were 100 m², then the formula for each tree would be
(DBH + 0.5)² × π/4
References.
| [
{
"math_id": 0,
"text": "BA =\\pi \\times (DBH/2)^2"
},
{
"math_id": 1,
"text": "DBH"
},
{
"math_id": 2,
"text": "BA"
},
{
"math_id": 3,
"text": "BA(m^2) = \\frac{\\pi \\times (DBH(cm)/2)^2}{10000}"
},
{
"math_id": 4,
"text": "BA(ft^2) = \\frac{\\pi \\times (DBH(inches)/2)^2}{144}"
},
{
"math_id": 5,
"text": " BA(ft^2) = 0.005454 \\times DBH(in)^2 "
},
{
"math_id": 6,
"text": " BA(m^2) = 0.00007854 \\times DBH(cm)^2 "
}
] | https://en.wikipedia.org/wiki?curid=9384714 |
938663 | Multi-task learning | Solving multiple machine learning tasks at the same time
Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, when compared to training the models separately.
Inherently, Multi-task learning is a multi-objective optimization problem having trade-offs between different tasks.
Early versions of MTL were called "hints".
In a widely cited 1997 paper, Rich Caruana gave the following characterization:Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better.
In the classification context, MTL aims to improve the performance of multiple classification tasks by learning them jointly. One example is a spam-filter, which can be treated as distinct but related classification tasks across different users. To make this more concrete, consider that different people have different distributions of features which distinguish spam emails from legitimate ones, for example an English speaker may find that all emails in Russian are spam, not so for Russian speakers. Yet there is a definite commonality in this classification task across users, for example one common feature might be text related to money transfer. Solving each user's spam classification problem jointly via MTL can let the solutions inform each other and improve performance. Further examples of settings for MTL include multiclass classification and multi-label classification.
Multi-task learning works because regularization induced by requiring an algorithm to perform well on a related task can be superior to regularization that prevents overfitting by penalizing all complexity uniformly. One situation where MTL may be particularly helpful is if the tasks share significant commonalities and are generally slightly under sampled. However, as discussed below, MTL has also been shown to be beneficial for learning unrelated tasks.
Methods.
The key challenge in multi-task learning is how to combine learning signals from multiple tasks into a single model. This may strongly depend on how well different tasks agree with each other, or contradict each other. There are several ways to address this challenge:
Task grouping and overlap.
Within the MTL paradigm, information can be shared across some or all of the tasks. Depending on the structure of task relatedness, one may want to share information selectively across the tasks. For example, tasks may be grouped or exist in a hierarchy, or be related according to some general metric. Suppose, as developed more formally below, that the parameter vector modeling each task is a linear combination of some underlying basis. Similarity in terms of this basis can indicate the relatedness of the tasks. For example, with sparsity, overlap of nonzero coefficients across tasks indicates commonality. A task grouping then corresponds to those tasks lying in a subspace generated by some subset of basis elements, where tasks in different groups may be disjoint or overlap arbitrarily in terms of their bases. Task relatedness can be imposed a priori or learned from the data. Hierarchical task relatedness can also be exploited implicitly without assuming a priori knowledge or learning relations explicitly. For example, the explicit learning of sample relevance across tasks can be done to guarantee the effectiveness of joint learning across multiple domains.
Exploiting unrelated tasks.
One can attempt learning a group of principal tasks using a group of auxiliary tasks, unrelated to the principal ones. In many applications, joint learning of unrelated tasks which use the same input data can be beneficial. The reason is that prior knowledge about task relatedness can lead to sparser and more informative representations for each task grouping, essentially by screening out idiosyncrasies of the data distribution. Novel methods which builds on a prior multitask methodology by favoring a shared low-dimensional representation within each task grouping have been proposed. The programmer can impose a penalty on tasks from different groups which encourages the two representations to be orthogonal. Experiments on synthetic and real data have indicated that incorporating unrelated tasks can result in significant improvements over standard multi-task learning methods.
Transfer of knowledge.
Related to multi-task learning is the concept of knowledge transfer. Whereas traditional multi-task learning implies that a shared representation is developed concurrently across tasks, transfer of knowledge implies a sequentially shared representation. Large scale machine learning projects such as the deep convolutional neural network GoogLeNet, an image-based object classifier, can develop robust representations which may be useful to further algorithms learning related tasks. For example, the pre-trained model can be used as a feature extractor to perform pre-processing for another learning algorithm. Or the pre-trained model can be used to initialize a model with similar architecture which is then fine-tuned to learn a different classification task.
Multiple non-stationary tasks.
Traditionally Multi-task learning and transfer of knowledge are applied to stationary learning settings. Their extension to non-stationary environments is termed "Group online adaptive learning" (GOAL). Sharing information could be particularly useful if learners operate in continuously changing environments, because a learner could benefit from previous experience of another learner to quickly adapt to their new environment. Such group-adaptive learning has numerous applications, from predicting financial time-series, through content recommendation systems, to visual understanding for adaptive autonomous agents.
Multi-task optimization.
Multitask optimization: In some cases, the simultaneous training of seemingly related tasks may hinder performance compared to single-task models. Commonly, MTL models employ task-specific modules on top of a joint feature representation obtained using a shared module. Since this joint representation must capture useful features across all tasks, MTL may hinder individual task performance if the different tasks seek conflicting representation, i.e., the gradients of different tasks point to opposing directions or differ significantly in magnitude. This phenomenon is commonly referred to as negative transfer. To mitigate this issue, various MTL optimization methods have been proposed. Commonly, the per-task gradients are combined into a joint update direction through various aggregation algorithms or heuristics. These methods include subtracting the projection of conflicted gradients, applying techniques from game theory, and using Bayesian modeling to get a distribution over gradients.
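As an illustration of the "subtracting the projection of conflicted gradients" idea mentioned above, the following NumPy sketch handles the two-task case in the spirit of PCGrad; it is a simplified sketch, not the exact published algorithm:

import numpy as np

def pcgrad_pair(g1: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """Average of two task gradients after resolving a conflict:
    if their inner product is negative, each is projected onto the
    normal plane of the other (its opposing component is dropped)."""
    adjusted = []
    for gi, gj in ((g1, g2), (g2, g1)):
        if gi @ gj < 0:
            gi = gi - (gi @ gj) / (gj @ gj) * gj
        adjusted.append(gi)
    return sum(adjusted) / 2          # joint update direction

g1, g2 = np.array([1.0, 0.0]), np.array([-1.0, 1.0])
print(pcgrad_pair(g1, g2))            # conflict removed before averaging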
Mathematics.
Reproducing Hilbert space of vector valued functions (RKHSvv).
The MTL problem can be cast within the context of RKHSvv (a complete inner product space of vector-valued functions equipped with a reproducing kernel). In particular, recent focus has been on cases where task structure can be identified via a separable kernel, described below. The presentation here derives from Ciliberto et al., 2015.
RKHSvv concepts.
Suppose the training data set is formula_0, with formula_1, formula_2, where t indexes the task, and formula_3. Let formula_4. In this setting there is a consistent input and output space and the same loss function formula_5 for each task. This results in the regularized machine learning problem:
where formula_6 is a vector valued reproducing kernel Hilbert space with functions formula_7 having components formula_8.
The reproducing kernel for the space formula_6 of functions formula_9 is a symmetric matrix-valued function formula_10 , such that formula_11 and the following reproducing property holds:
The reproducing kernel gives rise to a representer theorem showing that any solution to equation 1 has the form:
Separable kernels.
The form of the kernel Γ induces both the representation of the feature space and structures the output across tasks. A natural simplification is to choose a "separable kernel," which factors into separate kernels on the input space X and on the tasks formula_12. In this case the kernel relating scalar components formula_13 and formula_14 is given by formula_15. For vector valued functions formula_16 we can write formula_17, where k is a scalar reproducing kernel, and A is a symmetric positive semi-definite formula_18 matrix. Henceforth denote formula_19 .
This factorization property, separability, implies the input feature space representation does not vary by task. That is, there is no interaction between the input kernel and the task kernel. The structure on tasks is represented solely by A. Methods for non-separable kernels Γ are a current field of research.
For the separable case, the representer theorem reduces to formula_20. The model output on the training data is then KCA, where K is the formula_21 empirical kernel matrix with entries formula_22, and C is the formula_23 matrix of rows formula_24.
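The agreement between the pointwise predictor and the matrix product KCA can be checked numerically; the NumPy sketch below uses an arbitrary Gaussian scalar kernel and a small symmetric positive semi-definite A purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
n, T = 5, 3                       # n training points, T tasks
X = rng.normal(size=(n, 2))
C = rng.normal(size=(n, T))       # coefficient matrix, rows c_i
A = np.eye(T) + 0.3               # symmetric PSD task-coupling matrix (illustrative)

def k(x, y):                      # scalar Gaussian kernel, an arbitrary choice
    return np.exp(-np.sum((x - y) ** 2))

K = np.array([[k(xi, xj) for xj in X] for xi in X])

def f(x):                         # f(x) = sum_i k(x, x_i) A c_i, a T-vector
    return sum(k(x, xi) * (A @ C[i]) for i, xi in enumerate(X))

print(np.allclose(np.array([f(xi) for xi in X]), K @ C @ A))   # True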
With the separable kernel, equation 1 can be rewritten as
where V is a (weighted) average of L applied entry-wise to Y and KCA. (The weight is zero if formula_25 is a missing observation).
Note the second term in P can be derived as follows:
formula_26
Known task structure.
Task structure representations.
There are three largely equivalent ways to represent task structure: through a regularizer; through an output metric, and through an output mapping.
Task structure examples.
Via the regularizer formulation, one can represent a variety of task structures easily.
Learning tasks together with their structure.
Learning problem P can be generalized to admit learning task matrix A as follows:
Choice of formula_45 must be designed to learn matrices "A" of a given type. See "Special cases" below.
Optimization of Q.
Restricting to the case of convex losses and coercive penalties, Ciliberto "et al." have shown that although Q is not jointly convex in "C" and "A," a related problem is jointly convex.
Specifically on the convex set formula_46, the equivalent problem
is convex with the same minimum value. And if formula_47 is a minimizer for R then formula_48 is a minimizer for Q.
R may be solved by a barrier method on a closed set by introducing the following perturbation:
The perturbation via the barrier formula_49 forces the objective functions to be equal to formula_50 on the boundary of formula_51 .
S can be solved with a block coordinate descent method, alternating in "C" and "A." This results in a sequence of minimizers formula_52 in S that converges to the solution in R as formula_53, and hence gives the solution to Q.
Special cases.
Spectral penalties - Dinnuzo "et al" suggested setting "F" as the Frobenius norm formula_54. They optimized Q directly using block coordinate descent, not accounting for difficulties at the boundary of formula_55.
Clustered tasks learning - Jacob "et al" suggested to learn "A" in the setting where "T" tasks are organized in "R" disjoint clusters. In this case let formula_56 be the matrix with formula_57. Setting formula_58, and formula_59, the task matrix formula_60 can be parameterized as a function of formula_61: formula_62 , with terms that penalize the average, between clusters variance and within clusters variance respectively of the task predictions. M is not convex, but there is a convex relaxation formula_63. In this formulation, formula_64.
Generalizations.
Non-convex penalties - Penalties can be constructed such that A is constrained to be a graph Laplacian, or that A has low rank factorization. However these penalties are not convex, and the analysis of the barrier method proposed by Ciliberto et al. does not go through in these cases.
Non-separable kernels - Separable kernels are limited, in particular they do not account for structures in the interaction space between the input and output domains jointly. Future work is needed to develop models for these kernels.
Software package.
A Matlab package called Multi-Task Learning via StructurAl Regularization (MALSAR) implements the following multi-task learning algorithms: Mean-Regularized Multi-Task Learning, Multi-Task Learning with Joint Feature Selection, Robust Multi-Task Feature Learning, Trace-Norm Regularized Multi-Task Learning, Alternating Structural Optimization, Incoherent Low-Rank and Sparse Learning, Robust Low-Rank Multi-Task Learning, Clustered Multi-Task Learning, Multi-Task Learning with Graph Structures.
See also.
References.
| [
{
"math_id": 0,
"text": "\\mathcal{S}_t =\\{(x_i^t,y_i^t)\\}_{i=1}^{n_t}"
},
{
"math_id": 1,
"text": "x_i^t\\in\\mathcal{X}"
},
{
"math_id": 2,
"text": "y_i^t\\in\\mathcal{Y}"
},
{
"math_id": 3,
"text": "t \\in 1,...,T"
},
{
"math_id": 4,
"text": "n=\\sum_{t=1}^Tn_t "
},
{
"math_id": 5,
"text": " \\mathcal{L}:\\mathbb{R}\\times\\mathbb{R}\\rightarrow \\mathbb{R}_+ "
},
{
"math_id": 6,
"text": " \\mathcal{H} "
},
{
"math_id": 7,
"text": " f:\\mathcal X \\rightarrow \\mathcal{Y}^T "
},
{
"math_id": 8,
"text": " f_t:\\mathcal{X}\\rightarrow \\mathcal {Y} "
},
{
"math_id": 9,
"text": " f:\\mathcal X \\rightarrow \\mathbb{R}^T "
},
{
"math_id": 10,
"text": " \\Gamma :\\mathcal X\\times \\mathcal X \\rightarrow \\mathbb{R}^{T \\times T} "
},
{
"math_id": 11,
"text": " \\Gamma (\\cdot ,x)c\\in \\mathcal{H} "
},
{
"math_id": 12,
"text": " \\{1,...,T\\} "
},
{
"math_id": 13,
"text": " f_t "
},
{
"math_id": 14,
"text": " f_s "
},
{
"math_id": 15,
"text": " \\gamma((x_i,t),(x_j,s )) = k(x_i,x_j)k_T(s,t)=k(x_i,x_j)A_{s,t} "
},
{
"math_id": 16,
"text": " f\\in \\mathcal H "
},
{
"math_id": 17,
"text": "\\Gamma(x_i,x_j)=k(x_i,x_j)A"
},
{
"math_id": 18,
"text": "T\\times T"
},
{
"math_id": 19,
"text": " S_+^T=\\{\\text{PSD matrices} \\} \\subset \\mathbb R^{T \\times T} "
},
{
"math_id": 20,
"text": "f(x)=\\sum _{i=1} ^N k(x,x_i)Ac_i"
},
{
"math_id": 21,
"text": "n \\times n"
},
{
"math_id": 22,
"text": "K_{i,j}=k(x_i,x_j)"
},
{
"math_id": 23,
"text": "n \\times T"
},
{
"math_id": 24,
"text": "c_i"
},
{
"math_id": 25,
"text": " Y_i^t "
},
{
"math_id": 26,
"text": "\\begin{align}\n\\|f\\|^2_\\mathcal{H} &= \\left\\langle \\sum _{i=1} ^n k(\\cdot,x_i)Ac_i, \\sum _{j=1} ^n k(\\cdot ,x_j)Ac_j \\right\\rangle_{\\mathcal H }\n\\\\\n&= \\sum _{i,j=1} ^n \\langle k(\\cdot,x_i)A c_i, k(\\cdot ,x_j)Ac_j\\rangle_{\\mathcal H } & \\text{(bilinearity)}\n\\\\\n&= \\sum _{i,j=1} ^n \\langle k(x_i,x_j)A c_i, c_j\\rangle_{\\mathbb R^T } & \\text{(reproducing property)}\n\\\\\n&= \\sum _{i,j=1} ^n k(x_i,x_j) c_i^\\top A c_j=tr(KCAC^\\top ) \n\\end{align}"
},
{
"math_id": 27,
"text": "A^\\dagger = \\gamma I_T + ( \\gamma - \\lambda)\\frac {1} T \\mathbf{1}\\mathbf{1}^\\top "
},
{
"math_id": 28,
"text": "I_T "
},
{
"math_id": 29,
"text": "\\mathbf{1}\\mathbf{1}^\\top "
},
{
"math_id": 30,
"text": "\\sum_t || f_t - \\bar f|| _{\\mathcal H_k} "
},
{
"math_id": 31,
"text": "\\frac 1 T \\sum_t f_t "
},
{
"math_id": 32,
"text": "n_t"
},
{
"math_id": 33,
"text": " A^\\dagger = \\alpha I_T +(\\alpha - \\lambda )M "
},
{
"math_id": 34,
"text": " M_{t,s} = \\frac 1 {|G_r|} \\mathbb I(t,s\\in G_r) "
},
{
"math_id": 35,
"text": " \\alpha "
},
{
"math_id": 36,
"text": " \\sum _{r} \\sum _{t \\in G_r } ||f_t - \\frac 1 {|G_r|} \\sum _{s\\in G_r)} f_s|| "
},
{
"math_id": 37,
"text": " |G_r| "
},
{
"math_id": 38,
"text": " \\mathbb I "
},
{
"math_id": 39,
"text": " A^\\dagger = \\delta I_T + (\\delta -\\lambda)L "
},
{
"math_id": 40,
"text": " L=D-M"
},
{
"math_id": 41,
"text": " M_{t,s} "
},
{
"math_id": 42,
"text": "\\delta "
},
{
"math_id": 43,
"text": " \\sum _{t,s}||f_t - f_s ||_{\\mathcal H _k }^2 M_{t,s} "
},
{
"math_id": 44,
"text": "\\lambda \\sum_t ||f|| _{\\mathcal H_k} ^2 "
},
{
"math_id": 45,
"text": "F:S_+^T\\rightarrow \\mathbb R_+"
},
{
"math_id": 46,
"text": " \\mathcal C=\\{(C,A)\\in \\mathbb R^{n \\times T}\\times S_+^T | Range(C^\\top KC)\\subseteq Range(A)\\}"
},
{
"math_id": 47,
"text": " (C_R, A_R)"
},
{
"math_id": 48,
"text": " (C_R A^\\dagger _R, A_R)"
},
{
"math_id": 49,
"text": "\\delta ^2 tr(A^\\dagger)"
},
{
"math_id": 50,
"text": "+\\infty"
},
{
"math_id": 51,
"text": " R^{n \\times T}\\times S_+^T"
},
{
"math_id": 52,
"text": " (C_m,A_m)"
},
{
"math_id": 53,
"text": " \\delta_m \\rightarrow 0"
},
{
"math_id": 54,
"text": " \\sqrt{tr(A^\\top A)}"
},
{
"math_id": 55,
"text": "\\mathbb R^{n\\times T} \\times S_+^T"
},
{
"math_id": 56,
"text": " E\\in \\{0,1\\}^{T\\times R}"
},
{
"math_id": 57,
"text": " E_{t,r}=\\mathbb I (\\text{task }t\\in \\text{group }r)"
},
{
"math_id": 58,
"text": " M = I - E^\\dagger E^T"
},
{
"math_id": 59,
"text": " U = \\frac 1 T \\mathbf{11}^\\top "
},
{
"math_id": 60,
"text": " A^\\dagger "
},
{
"math_id": 61,
"text": " M "
},
{
"math_id": 62,
"text": " A^\\dagger(M) = \\epsilon _M U+\\epsilon_B (M-U)+\\epsilon (I-M) "
},
{
"math_id": 63,
"text": " \\mathcal S_c = \\{M\\in S_+^T:I-M\\in S_+^T \\land tr(M) = r \\} "
},
{
"math_id": 64,
"text": " F(A)=\\mathbb I(A(M)\\in \\{A:M\\in \\mathcal S_C\\}) "
}
] | https://en.wikipedia.org/wiki?curid=938663 |
9386948 | Directed infinity | A directed infinity is a type of infinity in the complex plane that has a defined complex argument "θ" but an infinite absolute value "r". For example, the limit of 1/"x" where "x" is a positive real number approaching zero is a directed infinity with argument 0; however, 1/0 is not a directed infinity, but a complex infinity. Some rules for manipulation of directed infinities (with all variables finite) are:
Here, sgn("z") = "z"/|"z"| is the complex signum function.
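The rules above can be illustrated with a short sketch that models a directed infinity purely by its direction sgn("z"); the class and method names below are illustrative inventions, not a standard library API, and division of two directed infinities is deliberately left out because it is undefined.

```python
class DirectedInfinity:
    """A sketch: represent z*inf purely by its direction sgn(z) = z/|z|."""

    def __init__(self, z):
        if z == 0:
            raise ValueError("0*inf is undefined (and 1/0 is a complex infinity)")
        self.direction = z / abs(z)          # sgn(z)

    def scale(self, a):
        """a*(z*inf) for a finite real a != 0: the direction flips when a < 0."""
        if a == 0:
            raise ValueError("0*inf is undefined")
        return DirectedInfinity(a * self.direction)

    def __mul__(self, other):
        """(w*inf)*(z*inf) = sgn(w*z)*inf."""
        return DirectedInfinity(self.direction * other.direction)

    def __repr__(self):
        return "DirectedInfinity(direction={})".format(self.direction)

print(DirectedInfinity(3 + 4j))                      # direction 0.6 + 0.8j
print(DirectedInfinity(1j) * DirectedInfinity(1j))   # direction -1, i.e. sgn(i*i)*inf
```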
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "z\\infty = \\sgn(z)\\infty \\text{ if } z\\ne 0"
},
{
"math_id": 1,
"text": "0\\infty\\text{ is undefined, as is }\\frac{z\\infty}{w\\infty}"
},
{
"math_id": 2,
"text": "a z\\infty = \\begin{cases} \\sgn(z)\\infty & \\text{if }a > 0, \\\\ -\\sgn(z)\\infty & \\text{if }a < 0. \\end{cases} "
},
{
"math_id": 3,
"text": "w\\infty z\\infty = \\sgn(w z)\\infty"
}
] | https://en.wikipedia.org/wiki?curid=9386948 |
9387775 | Kármán–Howarth equation | Mathematical equation
In isotropic turbulence the Kármán–Howarth equation (after Theodore von Kármán and Leslie Howarth 1938), which is derived from the Navier–Stokes equations, is used to describe the evolution of the non-dimensional longitudinal autocorrelation.
Mathematical description.
Consider a two-point velocity correlation tensor for homogeneous turbulence
formula_0
For isotropic turbulence, this correlation tensor can be expressed in terms of two scalar functions, using the invariant theory of full rotation group, first derived by Howard P. Robertson in 1940,
formula_1
where formula_2 is the root mean square turbulent velocity and formula_3 are the turbulent velocity components in the three coordinate directions. Here, formula_4 is the longitudinal correlation and formula_5 is the lateral correlation of velocity at two different points. From the continuity equation, we have
formula_6
Thus formula_7 uniquely determines the two-point correlation function. Theodore von Kármán and Leslie Howarth derived the evolution equation for formula_7 from Navier–Stokes equation as
formula_8
where formula_9 uniquely determines the triple correlation tensor
formula_10
Loitsianskii's invariant.
L.G. Loitsianskii derived an integral invariant for the decay of the turbulence by taking the fourth moment of the Kármán–Howarth equation in 1939, i.e.,
formula_11
If formula_4 decays faster than formula_12 as formula_13, and if in this limit formula_14 is also assumed to vanish, we have the quantity,
formula_15
which is invariant. Lev Landau and Evgeny Lifshitz showed that this invariant is equivalent to conservation of angular momentum. However, Ian Proudman and W.H. Reid showed that this invariant does not always hold, since formula_16 is not in general zero, at least in the initial period of the decay. In 1967, Philip Saffman showed that this integral depends on the initial conditions and can diverge under certain conditions.
Decay of turbulence.
For the viscosity dominated flows, during the decay of turbulence, the Kármán–Howarth equation reduces to a heat equation once the triple correlation tensor is neglected, i.e.,
formula_17
With suitable boundary conditions, the solution to above equation is given by
formula_18
so that,
formula_19
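As a quick sanity check, the self-similar solution above can be substituted back into the viscosity-dominated equation symbolically. The following sketch assumes SymPy is available and absorbs the arbitrary constant into a single symbol "A"; it is an illustration, not part of the original derivation.

```python
import sympy as sp

r, t, nu, A = sp.symbols('r t nu A', positive=True)

u2 = A * (nu * t) ** sp.Rational(-5, 2)   # u'^2, up to the arbitrary constant A
f = sp.exp(-r**2 / (8 * nu * t))          # proposed longitudinal correlation f(r, t)

lhs = sp.diff(u2 * f, t)                                          # d/dt (u'^2 f)
rhs = 2 * nu * u2 / r**4 * sp.diff(r**4 * sp.diff(f, r), r)       # viscous term

print(sp.simplify(lhs - rhs))   # prints 0, so the pair solves the reduced equation
```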
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R_{ij}(\\mathbf{r},t) = \\overline{u_i(\\mathbf{x},t) u_j(\\mathbf{x} + \\mathbf{r},t)}."
},
{
"math_id": 1,
"text": "R_{ij}(\\mathbf{r},t) = u'^2 \\left\\{ [f(r,t)-g(r,t)]\\frac{r_ir_j}{r^2} + g(r,t) \\delta_{ij}\\right\\}, \\quad f(r,t) = \\frac{R_{11}}{u'^2}, \\quad g(r,t) = \\frac{R_{22}}{u'^2}"
},
{
"math_id": 2,
"text": "u'"
},
{
"math_id": 3,
"text": "u_1,\\ u_2, \\ u_3"
},
{
"math_id": 4,
"text": "f(r)"
},
{
"math_id": 5,
"text": "g(r)"
},
{
"math_id": 6,
"text": "\\frac{\\partial R_{ij}}{\\partial r_j}=0 \\quad \\Rightarrow \\quad g(r,t) = f(r,t) + \\frac{r}{2} \\frac{\\partial}{\\partial r}f(r,t)"
},
{
"math_id": 7,
"text": "f(r,t)"
},
{
"math_id": 8,
"text": "\\frac \\partial {\\partial t} (u'^2 f) - \\frac{u'^3}{r^4} \\frac \\partial {\\partial r} (r^4 h) = \\frac{2\\nu u'^2}{r^4} \\frac\\partial {\\partial r} \\left(r^4 \\frac{\\partial f}{\\partial r}\\right)"
},
{
"math_id": 9,
"text": "h(r,t)"
},
{
"math_id": 10,
"text": "S_{ij} = {} \\frac{\\partial }{\\partial r_k} \\left( \\overline{u_i(\\mathbf{x},t) u_k(\\mathbf{x},t)u_j(\\mathbf{x}+\\mathbf{r},t)}-\\overline{u_i(\\mathbf{x},t) u_k(\\mathbf{x}+\\mathbf{r},t)u_j(\\mathbf{x}+\\mathbf{r},t)}\\right)."
},
{
"math_id": 11,
"text": "\\frac \\partial {\\partial t} \\left(u'^2 \\int_0^\\infty r^4 f\\ dr\\right) = \\left[2\\nu u'^2 r^4 \\frac{\\partial f}{\\partial r} + u'^3 r^4 h\\right]_0^\\infty."
},
{
"math_id": 12,
"text": "r^{-3}"
},
{
"math_id": 13,
"text": "r\\rightarrow\\infty"
},
{
"math_id": 14,
"text": "r^4 h"
},
{
"math_id": 15,
"text": "\\Lambda = u'^2 \\int_0^\\infty r^4 f\\ dr = \\mathrm{constant}"
},
{
"math_id": 16,
"text": "\\lim_{r\\rightarrow\\infty} (r^4 h)"
},
{
"math_id": 17,
"text": "\\frac \\partial {\\partial t} (u'^2 f) = \\frac{2\\nu u'^2}{r^4} \\frac\\partial {\\partial r} \\left(r^4 \\frac{\\partial f}{\\partial r}\\right)."
},
{
"math_id": 18,
"text": "f(r,t) = e^{-r^2/8\\nu t}, \\quad u'^2 = \\mathrm{const.}\\times (\\nu t)^{-5/2}"
},
{
"math_id": 19,
"text": "R_{ij}(r,t) \\sim (\\nu t)^{-5/2} e^{-r^2/8\\nu t}."
}
] | https://en.wikipedia.org/wiki?curid=9387775 |
9390507 | Windkessel effect | Mechanism that maintains blood pressure between heart beats
Windkessel effect (German: Windkesseleffekt) is a term used in medicine to account for the shape of the arterial blood pressure waveform in terms of the interaction between the stroke volume and the compliance of the aorta and large elastic arteries (Windkessel vessels) and the resistance of the smaller arteries and arterioles. Windkessel when loosely translated from German to English means 'air chamber', but is generally taken to imply an "elastic reservoir". The walls of large elastic arteries (e.g. aorta, common carotid, subclavian, and pulmonary arteries and their larger branches) contain elastic fibers, formed of elastin. These arteries distend when the blood pressure rises during systole and recoil when the blood pressure falls during diastole. Since the rate of blood entering these elastic arteries exceeds that leaving them via the peripheral resistance, there is a net storage of blood in the aorta and large arteries during systole, which discharges during diastole. The compliance (or distensibility) of the aorta and large elastic arteries is therefore analogous to a capacitor (employing the hydraulic analogy); to put it another way, these arteries collectively act as a hydraulic accumulator.
The Windkessel effect helps in damping the fluctuation in blood pressure (pulse pressure) over the cardiac cycle and assists in the maintenance of organ perfusion during diastole when cardiac ejection ceases. The idea of the Windkessel was alluded to by Giovanni Borelli, although Stephen Hales articulated the concept more clearly and drew the analogy with an air chamber used in fire engines in the 18th century. Otto Frank, an influential German physiologist, developed the concept and provided a firm mathematical foundation. Frank's model is sometimes called a two-element Windkessel to distinguish it from more recent and more elaborate Windkessel models (e.g. three- or four-element and non-linear Windkessel models).
Model types.
Modeling of a Windkessel.
Windkessel physiology remains a relevant, if dated, description of the arterial circulation that is of clinical interest. The classical mathematical treatment of systole and diastole is presented below in models of increasing complexity, from two elements up to four.
Two-element.
It is assumed that the ratio of pressure to volume is constant and that outflow from the Windkessel is proportional to the fluid pressure. Volumetric inflow must equal the sum of the volume stored in the capacitive element and volumetric outflow through the resistive element. This relationship is described by a differential equation:
formula_0
"I(t)" is volumetric inflow due to the pump (heart) and is measured in volume per unit time, while "P(t)" is the pressure with respect to time measured in force per unit area, "C" is the ratio of volume to pressure for the Windkessel, and "R" is the resistance relating outflow to fluid pressure. This model is identical to the relationship between current, "I(t)", and electrical potential, "P(t)", in an electrical circuit equivalent of the two-element Windkessel model.
In the blood circulation, the passive elements in the circuit are assumed to represent elements in the cardiovascular system. The resistor, "R", represents the total peripheral resistance and the capacitor, "C", represents total arterial compliance.
During diastole there is no blood inflow since the aortic (or pulmonary valve) is closed, so the Windkessel can be solved for "P(t)" since "I(t) = 0:"
formula_1
where "td" is the time of the start of diastole and "P(td)" is the blood pressure at the start of diastole. This model is only a rough approximation of the arterial circulation; more realistic models incorporate more elements, provide more realistic estimates of the blood pressure waveform and are discussed below.
Three-element.
The three-element Windkessel improves on the two-element model by incorporating another resistive element to simulate resistance to blood flow due to the characteristic resistance of the aorta (or pulmonary artery). The differential equation for the 3-element model is:
formula_2
where "R1" is the characteristic resistance (this is assumed to be equivalent to the characteristic impedance), while "R2" represents the peripheral resistance. This model is widely used as an acceptable model of the circulation. For example it has been employed to evaluate blood pressure and flow in the aorta of a chick embryo and the pulmonary artery in a pig as well as providing the basis for construction of physical models of the circulation providing realistic loads for experimental studies of isolated hearts.
Four-element.
The three-element model overestimates the compliance and underestimates the characteristic impedance of the circulation. The four-element model adds an inductor, "L", which has units of mass per length to the fourth power (formula_3), to the proximal component of the circuit to account for the inertia of blood flow, which is neglected in the two- and three-element models. The relevant equation is:
formula_4
Applications.
These models relate blood flow to blood pressure through parameters of "R, C ("and, in the case of the four-element model, "L)". These equations can be easily solved (e.g. by employing MATLAB and its supplement SIMULINK) to either find the values of pressure given flow and "R, C, L" parameters, or find values of "R, C, L" given flow and pressure. An example for the two-element model is shown below, where "I(t)" is depicted as an input signal during systole and diastole. Systole is represented by the "sin" function, while flow during diastole is zero. "s" represents the duration of the cardiac cycle, while "Ts" represents the duration of systole, and "Td" represents the duration of diastole (e.g. in seconds).
formula_5
formula_6
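As an illustration of such a numerical solution, the following sketch integrates the two-element Windkessel equation with SciPy's solve_ivp rather than MATLAB/SIMULINK; the parameter values ("R", "C", the peak inflow and the timing constants) are made up for demonstration and are not taken from the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

R, C = 1.0, 1.1                  # peripheral resistance and compliance (illustrative units)
I0, Ts, Tc = 425.0, 0.3, 0.8     # peak inflow, systole duration, cardiac cycle length (s)

def inflow(t):
    """Half-sine inflow during systole, zero during diastole."""
    tau = t % Tc
    return I0 * np.sin(np.pi * tau / Ts) if tau < Ts else 0.0

def dPdt(t, P):
    # Two-element Windkessel: C dP/dt = I(t) - P/R
    return [(inflow(t) - P[0] / R) / C]

sol = solve_ivp(dPdt, (0.0, 10 * Tc), y0=[80.0], max_step=1e-3)
print(sol.y[0][-5:])             # pressure settles into a periodic waveform
```

After a few cycles the pressure trace shows the expected systolic rise followed by an exponential diastolic decay with time constant "RC", matching the analytical diastolic solution given earlier.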
In physiology and disease.
The 'Windkessel effect' becomes diminished with age as the elastic arteries become less compliant, termed "hardening of the arteries" or arteriosclerosis, probably secondary to fragmentation and loss of elastin. The reduction in the Windkessel effect results in increased pulse pressure for a given stroke volume. The increased pulse pressure results in elevated systolic pressure (hypertension) which increases the risk of myocardial infarction, stroke, heart failure and a variety of other cardiovascular diseases.
Limitations.
Although the Windkessel is a simple and convenient concept, it has been largely superseded by more modern approaches that interpret arterial pressure and flow waveforms in terms of wave propagation and reflection. Recent attempts to integrate wave propagation and Windkessel approaches through a reservoir concept, have been criticized and a recent consensus document highlighted the wave-like nature of the reservoir.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I(t)={P(t)\\over R}+C{dP(t)\\over dt}"
},
{
"math_id": 1,
"text": "P(t)=P(t_d)e^{-(t-t_d)\\over (RC)}"
},
{
"math_id": 2,
"text": "(1+{R_1\\over R_2})I(t)+CR_1{dI(t)\\over dt}= {P(t)\\over R_2}+C{dP(t)\\over dt}"
},
{
"math_id": 3,
"text": "{M\\over l^4}"
},
{
"math_id": 4,
"text": "(1+{R_1\\over R_2})I(t)+(R_1C+{L\\over R_2}){dI(t)\\over dt}+LC{d^2I(t)\\over dt^2}={P(t)\\over R_2}+C{dP(t)\\over dt}"
},
{
"math_id": 5,
"text": "I(t)=I_o\\sin[{(\\pi*{t\\over s})\\over Ts}] \\text{ for } {t\\over s}\\leq Ts\n"
},
{
"math_id": 6,
"text": "I(t)=0\\text{ for } Ts< (Td+Ts)"
}
] | https://en.wikipedia.org/wiki?curid=9390507 |
9391495 | Multivariate interpolation | Interpolation on functions of more than one variable
In numerical analysis, multivariate interpolation is interpolation on functions of more than one variable ("multivariate functions"); when the variates are spatial coordinates, it is also known as spatial interpolation.
The function to be interpolated is known at given points formula_0 and the interpolation problem consists of yielding values at arbitrary points formula_1.
Multivariate interpolation is particularly important in geostatistics, where it is used to create a digital elevation model from a set of points on the Earth's surface (for example, spot heights in a topographic survey or depths in a hydrographic survey).
Regular grid.
For function values known on a regular grid (having predetermined, not necessarily uniform, spacing), the following methods are available.
2 dimensions.
Bitmap resampling is the application of 2D multivariate interpolation in image processing.
Three of the methods applied on the same dataset, from 25 values located at the black dots. The colours represent the interpolated values.
See also Padua points, for polynomial interpolation in two variables.
3 dimensions.
See also bitmap resampling.
Tensor product splines for "N" dimensions.
Catmull-Rom splines can be easily generalized to any number of dimensions.
Recall from the cubic Hermite spline article that formula_2 for some 4-vector formula_3, which is a function of "x" alone, where formula_4 is the value at formula_5 of the function to be interpolated.
Rewrite this approximation as
formula_6
This formula can be directly generalized to N dimensions:
formula_7
Note that similar generalizations can be made for other types of spline interpolations, including Hermite splines.
With regard to efficiency, the general formula can in fact be computed as a composition of successive formula_8-type operations for any type of tensor product splines, as explained in the tricubic interpolation article.
However, the fact remains that if there are formula_9 terms in the 1-dimensional formula_10-like summation, then there will be formula_11 terms in the formula_12-dimensional summation.
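A minimal sketch of the tensor-product construction in two dimensions is given below: the same one-dimensional Catmull-Rom interpolation is applied along each row of a 4×4 block of samples and then once more along the resulting column. The function names and the sample data are illustrative only.

```python
def cr_1d(p, x):
    """1-D Catmull-Rom through p = [f(-1), f(0), f(1), f(2)], evaluated at 0 <= x <= 1."""
    pm1, p0, p1, p2 = p
    return 0.5 * (2 * p0
                  + (-pm1 + p1) * x
                  + (2 * pm1 - 5 * p0 + 4 * p1 - p2) * x ** 2
                  + (-pm1 + 3 * p0 - 3 * p1 + p2) * x ** 3)

def cr_2d(f, x, y):
    """f[i][j] is the sample at (x = j - 1, y = i - 1); interpolate along x, then along y."""
    rows = [cr_1d(row, x) for row in f]   # one 1-D interpolation per row (fixed y)
    return cr_1d(rows, y)                 # then one more along the y direction

# Samples of x^2 + y^2 on the integer offsets -1..2 in each direction:
f = [[(j - 1) ** 2 + (i - 1) ** 2 for j in range(4)] for i in range(4)]
print(cr_2d(f, 0.5, 0.5))                 # 0.5, matching x^2 + y^2 at (0.5, 0.5)
```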
Irregular grid (scattered data).
Schemes defined for scattered data on an irregular grid are more general.
They should all work on a regular grid, typically reducing to another known method.
"Gridding" is the process of converting irregularly spaced data to a regular grid (gridded data). | [
{
"math_id": 0,
"text": "(x_i, y_i, z_i, \\dots)"
},
{
"math_id": 1,
"text": "(x,y,z,\\dots)"
},
{
"math_id": 2,
"text": "\\mathrm{CINT}_x(f_{-1}, f_0, f_1, f_2) = \\mathbf{b}(x) \\cdot \\left( f_{-1} f_0 f_1 f_2 \\right)"
},
{
"math_id": 3,
"text": "\\mathbf{b}(x)"
},
{
"math_id": 4,
"text": "f_j"
},
{
"math_id": 5,
"text": "j"
},
{
"math_id": 6,
"text": "\n\\mathrm{CR}(x) = \\sum_{i=-1}^2 f_i b_i(x)\n"
},
{
"math_id": 7,
"text": "\n\\mathrm{CR}(x_1,\\dots,x_N) = \\sum_{i_1,\\dots,i_N=-1}^2 f_{i_1\\dots i_N} \\prod_{j=1}^N b_{i_j}(x_j)\n"
},
{
"math_id": 8,
"text": "\\mathrm{CINT}"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "\\mathrm{CR}"
},
{
"math_id": 11,
"text": "n^N"
},
{
"math_id": 12,
"text": "N"
}
] | https://en.wikipedia.org/wiki?curid=9391495 |
939333 | Universal generalization | In predicate logic, generalization (also universal generalization, universal introduction, GEN, UG) is a valid inference rule. It states that if formula_0 has been derived, then formula_1 can be derived.
Generalization with hypotheses.
The full generalization rule allows for hypotheses to the left of the turnstile, but with restrictions. Assume formula_2 is a set of formulas, formula_3 a formula, and formula_4 has been derived. The generalization rule states that formula_5 can be derived if formula_6 is not mentioned in formula_2 and formula_7 does not occur in formula_3.
These restrictions are necessary for soundness. Without the first restriction, one could conclude formula_8 from the hypothesis formula_9. Without the second restriction, one could make the following deduction:
formula_10 (Hypothesis)
formula_11 (Existential instantiation)
formula_12 (Existential instantiation)
formula_13 (Faulty universal generalization)
This purports to show that formula_14 which is an unsound deduction. Note that formula_15 is permissible if formula_6 is not mentioned in formula_2 (the second restriction need not apply, as the semantic structure of formula_16 is not being changed by the substitution of any variables).
Example of a proof.
Prove: formula_17 is derivable from formula_18 and formula_19.
Proof:
In this proof, universal generalization was used in step 8. The deduction theorem was applicable in steps 10 and 11 because the formulas being moved have no free variables.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\vdash \\!P(x)"
},
{
"math_id": 1,
"text": "\\vdash \\!\\forall x \\, P(x)"
},
{
"math_id": 2,
"text": "\\Gamma"
},
{
"math_id": 3,
"text": "\\varphi"
},
{
"math_id": 4,
"text": "\\Gamma \\vdash \\varphi(y)"
},
{
"math_id": 5,
"text": "\\Gamma \\vdash \\forall x \\, \\varphi(x)"
},
{
"math_id": 6,
"text": "y"
},
{
"math_id": 7,
"text": "x"
},
{
"math_id": 8,
"text": "\\forall x P(x)"
},
{
"math_id": 9,
"text": "P(y)"
},
{
"math_id": 10,
"text": "\\exists z \\, \\exists w \\, ( z \\not = w) "
},
{
"math_id": 11,
"text": "\\exists w \\, (y \\not = w) "
},
{
"math_id": 12,
"text": "y \\not = x"
},
{
"math_id": 13,
"text": "\\forall x \\, (x \\not = x)"
},
{
"math_id": 14,
"text": "\\exists z \\, \\exists w \\, ( z \\not = w) \\vdash \\forall x \\, (x \\not = x),"
},
{
"math_id": 15,
"text": "\\Gamma \\vdash \\forall y \\, \\varphi(y)"
},
{
"math_id": 16,
"text": "\\varphi(y)"
},
{
"math_id": 17,
"text": " \\forall x \\, (P(x) \\rightarrow Q(x)) \\rightarrow (\\forall x \\, P(x) \\rightarrow \\forall x \\, Q(x)) "
},
{
"math_id": 18,
"text": " \\forall x \\, (P(x) \\rightarrow Q(x)) "
},
{
"math_id": 19,
"text": " \\forall x \\, P(x) "
}
] | https://en.wikipedia.org/wiki?curid=939333 |
939425 | 999 (number) | 999 (nine hundred [and] ninety-nine or nine-nine-nine) is a natural number following 998 and preceding 1000.Natural number
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "999=3^{3}\\times 37"
},
{
"math_id": 1,
"text": " \\frac{999}{(9+9+9)} = 37"
}
] | https://en.wikipedia.org/wiki?curid=939425 |
9394324 | Potential game | Game class in game theory
In game theory, a game is said to be a potential game if the incentive of all players to change their strategy can be expressed using a single global function called the potential function. The concept originated in a 1996 paper by Dov Monderer and Lloyd Shapley.
The properties of several types of potential games have since been studied. Games can be either "ordinal" or "cardinal" potential games. In cardinal games, the difference in individual payoffs for each player from individually changing one's strategy, other things equal, has to have the same value as the difference in values for the potential function. In ordinal games, only the signs of the differences have to be the same.
The potential function is a useful tool to analyze equilibrium properties of games, since the incentives of all players are mapped into one function, and the set of pure Nash equilibria can be found by locating the local optima of the potential function. Convergence and finite-time convergence of an iterated game towards a Nash equilibrium can also be understood by studying the potential function.
Potential games can be studied as repeated games with state so that every round played has a direct consequence on game's state in the next round. This approach has applications in distributed control such as distributed resource allocation, where players without a central correlation mechanism can cooperate to achieve a globally optimal resource distribution.
Definition.
Let formula_0 be the number of players, formula_1 the set of action profiles over the action sets formula_2 of each player and formula_3 be the payoff function for player formula_4.
Given a game formula_5, we say that formula_6 is a potential game with an exact (weighted, ordinal, generalized ordinal, best response) potential function if formula_7 is an exact (weighted, ordinal, generalized ordinal, best response, respectively) potential function for formula_6. Here, formula_8 is called:
an exact potential function if for all formula_9,
formula_10
That is: when player formula_11 switches from action formula_12 to action formula_13, the change in the potential formula_8 equals the change in the utility of that player.
a weighted potential function if there is a vector formula_14 such that for all formula_15,
formula_16 That is: when a player switches action, the change in formula_8 equals the change in the player's utility, times a positive player-specific weight. Every exact PF is a weighted PF with "wi"=1 for all "i".
an ordinal potential function if for all formula_18,
formula_17 That is: when a player switches action, the "sign" of the change in formula_8 equals the "sign" of the change in the player's utility, whereas the magnitude of change may differ. Every weighted PF is an ordinal PF.
a generalized ordinal potential function if for all formula_18,
formula_19 That is: when a player switches action, if the player's utility increases, then the potential increases (but the opposite is not necessarily true). Every ordinal PF is a generalized-ordinal PF.
a best-response potential function if for all formula_20,
formula_21 where formula_22 is the best action for player formula_11 given formula_23.
Note that while there are formula_0 utility functions, one for each player, there is only one potential function. Thus, through the lens of potential functions, the players become interchangeable (in the sense of one of the definitions above). Because of this "symmetry" of the game, decentralized algorithms based on the shared potential function often lead to convergence (in some sense) to a Nash equilibrium.
A simple example.
In "a" 2-player, 2-action game with externalities, individual players' payoffs are given by the function "u""i"("a""i", "a""j")
"b""i" "a""i" + "w" "a""i" "a""j", where "a""i" is players i's action, "a""j" is the opponent's action, and "w" is "a" positive externality from choosing the same action. The action choices are +1 and −1, as seen in the payoff matrix in Figure 1.
This game has "a" potential function P("a"1, "a"2)
"b"1 "a"1 + "b"2 "a"2 + "w" "a"1 "a"2.
If player 1 moves from −1 to +1, the payoff difference is Δ"u"1 = "u"1(+1, "a"2) – "u"1(–1, "a"2)
2 "b"1 + 2 "w" "a"2.
The change in potential is ΔP = P(+1, "a"2) – P(–1, "a"2)
("b"1 + "b"2 "a"2 + "w" "a"2) – (–"b"1 + "b"2 "a"2 – "w" "a"2)
2 "b"1 + 2 "w" "a"2 = Δ"u"1.
The solution for player 2 is equivalent. Using numerical values "b"1 = 2, "b"2 = −1, "w" = 3, this example transforms into "a" simple battle of the sexes, as shown in Figure 2. The game has two pure Nash equilibria, (+1, +1) and (−1, −1). These are also the local maxima of the potential function (Figure 3). The only stochastically stable equilibrium is (+1, +1), the global maximum of the potential function.
A 2-player, 2-action game cannot be "a" potential game unless
formula_24
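The identity Δ"u"1 = ΔP can also be checked mechanically. The following sketch (not part of the original example) verifies the exact-potential condition for every unilateral deviation using the numerical values above, and then recovers the global maximizer of P.

```python
from itertools import product

b = (2.0, -1.0)       # b1, b2 from the example
w = 3.0
actions = (-1, 1)

def u(i, a):          # payoff of player i at action profile a = (a1, a2)
    return b[i] * a[i] + w * a[0] * a[1]

def P(a):             # candidate exact potential
    return b[0] * a[0] + b[1] * a[1] + w * a[0] * a[1]

# Exact-potential condition: every unilateral deviation changes u_i and P equally.
for i in (0, 1):
    for a in product(actions, repeat=2):
        for dev in actions:
            a2 = list(a); a2[i] = dev
            assert abs((u(i, tuple(a2)) - u(i, a)) - (P(tuple(a2)) - P(a))) < 1e-12

# The global maximum of P is the stochastically stable equilibrium (+1, +1):
print(max(product(actions, repeat=2), key=P))
```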
Potential games and congestion games.
Exact potential games are equivalent to congestion games: Rosenthal proved that every congestion game has an exact potential; Monderer and Shapley proved the opposite direction: every game with an exact potential function is a congestion game.
Potential games and improvement paths.
An improvement path (also called Nash dynamics) is a sequence of strategy-vectors, in which each vector is attained from the previous vector by a single player switching his strategy to a strategy that strictly increases his utility. If a game has a generalized-ordinal-potential function formula_8, then formula_8 is strictly increasing in every improvement path, so every improvement path is acyclic. If, in addition, the game has finitely many strategies, then every improvement path must be finite. This property is called the finite improvement property (FIP). We have just proved that every finite generalized-ordinal-potential game has the FIP. The opposite is also true: every finite game that has the FIP has a generalized-ordinal-potential function. The terminal state in every finite improvement path is a Nash equilibrium, so FIP implies the existence of a pure-strategy Nash equilibrium. Moreover, it implies that a Nash equilibrium can be computed by a distributed process, in which each agent only has to improve his own strategy.
A best-response path is a special case of an improvement path, in which each vector is attained from the previous vector by a single player switching his strategy to a best-response strategy. The property that every best-response path is finite is called the finite best-response property (FBRP). FBRP is weaker than FIP, and it still implies the existence of a pure-strategy Nash equilibrium. It also implies that a Nash equilibrium can be computed by a distributed process, but the computational burden on the agents is higher than with FIP, since they have to compute a best response.
An even weaker property is weak-acyclicity (WA). It means that, for any initial strategy-vector, "there exists" a finite best-response path starting at that vector. Weak-acyclicity is not sufficient for existence of a potential function (since some improvement-paths may be cyclic), but it is sufficient for the existence of a pure-strategy Nash equilibrium. It implies that a Nash equilibrium can be computed almost-surely by a stochastic distributed process, in which at each point, a player is chosen at random, and this player chooses a best response at random.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "A_{i}"
},
{
"math_id": 3,
"text": "u_i:A \\to \\mathbb{R}"
},
{
"math_id": 4,
"text": "1\\le i\\le N"
},
{
"math_id": 5,
"text": "G=(N,A=A_{1}\\times\\ldots\\times A_{N}, u: A \\rightarrow \\reals^N) "
},
{
"math_id": 6,
"text": "G"
},
{
"math_id": 7,
"text": "\\Phi: A \\rightarrow \\reals"
},
{
"math_id": 8,
"text": "\\Phi"
},
{
"math_id": 9,
"text": " \\forall i, \\forall {a_{-i}\\in A_{-i}},\\ \\forall {a'_{i},\\ a''_{i}\\in A_{i}}"
},
{
"math_id": 10,
"text": " \\Phi(a'_{i},a_{-i})-\\Phi(a''_{i},a_{-i}) = u_{i}(a'_{i},a_{-i})-u_{i}(a''_{i},a_{-i})"
},
{
"math_id": 11,
"text": "i"
},
{
"math_id": 12,
"text": "a'"
},
{
"math_id": 13,
"text": "a''"
},
{
"math_id": 14,
"text": "w \\in \\reals_{++}^N"
},
{
"math_id": 15,
"text": " \\forall i,\\forall {a_{-i}\\in A_{-i}},\\ \\forall {a'_{i},\\ a''_{i}\\in A_{i}}"
},
{
"math_id": 16,
"text": " \\Phi(a'_{i},a_{-i})-\\Phi(a''_{i},a_{-i}) = w_{i}(u_{i}(a'_{i},a_{-i})-u_{i}(a''_{i},a_{-i}))"
},
{
"math_id": 17,
"text": " u_{i}(a'_{i},a_{-i})-u_{i}(a''_{i},a_{-i})>0 \\Leftrightarrow\n \\Phi(a'_{i},a_{-i})-\\Phi(a''_{i},a_{-i})>0"
},
{
"math_id": 18,
"text": "\\forall i, \\forall {a_{-i}\\in A_{-i}},\\ \\forall {a'_{i},\\ a''_{i}\\in A_{i}}"
},
{
"math_id": 19,
"text": " u_{i}(a'_{i},a_{-i})-u_{i}(a''_{i},a_{-i})>0 \\Rightarrow\n \\Phi(a'_{i},a_{-i})-\\Phi(a''_{i},a_{-i}) >0 "
},
{
"math_id": 20,
"text": "\\forall i\\in N,\\ \\forall {a_{-i}\\in A_{-i}}"
},
{
"math_id": 21,
"text": "b_i(a_{-i})=\\arg\\max_{a_i\\in A_i} \\Phi(a_i,a_{-i})"
},
{
"math_id": 22,
"text": "b_i(a_{-i})"
},
{
"math_id": 23,
"text": "a_{-i}"
},
{
"math_id": 24,
"text": "\n[u_{1}(+1,-1)+u_1(-1,+1)]-[u_1(+1,+1)+u_1(-1,-1)] =\n[u_{2}(+1,-1)+u_2(-1,+1)]-[u_2(+1,+1)+u_2(-1,-1)] \n"
}
] | https://en.wikipedia.org/wiki?curid=9394324 |
9394749 | Temperate deciduous forest | Deciduous forest in the temperate regions
Temperate deciduous or temperate broad-leaf forests are a variety of temperate forest 'dominated' by deciduous trees that lose their leaves each winter. They represent one of Earth's major biomes, making up 9.69% of global land area. These forests are found in areas with distinct seasonal variation that cycle through warm, moist summers, cold winters, and moderate fall and spring seasons. They are most commonly found in the Northern Hemisphere, with particularly large regions in eastern North America, East Asia, and a large portion of Europe, though smaller regions of temperate deciduous forests are also located in South America. Examples of trees typically growing in the Northern Hemisphere's deciduous forests include oak, maple, basswood, beech and elm, while in the Southern Hemisphere, trees of the genus "Nothofagus" dominate this type of forest. Temperate deciduous forests provide several unique ecosystem services, including habitats for diverse wildlife, and they face a set of natural and human-induced disturbances that regularly alter their structure.
Geography.
Located below the northern boreal forests, temperate deciduous forests make up a significant portion of the land between the Tropic of Cancer (23formula_0°N) and latitudes of 50° North, in addition to areas south of the Tropic of Capricorn (23formula_0°S). Canada, the United States, China, and several European countries have the largest land area covered by temperate deciduous forests, with smaller portions present throughout South America, specifically Chile and Argentina.
Climate.
Temperate conditions refer to the cycle through four distinct seasons that occurs in areas between the polar regions and tropics. In these regions where temperate deciduous forest are found, warm and cold air circulation accounts for the biome's characteristic seasonal variation.
Temperature.
The average annual temperature tends to be around 10 °Celsius, though this is dependent on the region. Due to shading from the canopy, the microclimate of temperate deciduous forests tends to be about 2.1 °Celsius cooler than the surroundings, whereas winter temperatures are from 0.4 to 0.9 °Celsius warmer within forests as a result of insulation from vegetation strata.
Precipitation.
Annually, temperate deciduous forests experience approximately 750 to 1,500 millimeters of precipitation. As there is no distinct rainy season, precipitation is spread relatively evenly throughout the year. Snow makes up a portion of the precipitation present in temperate deciduous forests in the winter. Tree branches can intercept up to 80% of snowfall, affecting the amount of snow that ultimately reaches and melts on the forest floor.
Seasonal variation.
A defining factor of temperate deciduous forests is their leaf loss during the transition from fall to winter, an adaptation that arose as a solution for the low sunlight conditions and bitter cold temperatures. In these forests, winter is a time of dormancy for plants, when broadleaf deciduous trees conserve energy and prevent water loss, and many animal species hibernate or migrate. Preceding winter is fruit-bearing autumn, a time when leaves change color to various shades of red, yellow, and orange as chlorophyll breakdown gives rise to anthocyanin, carotene, and xanthophyl pigments.
Besides the characteristic colorful autumns and leafless winters, temperate deciduous forests have a lengthy growing season during the spring and summer months that tends to last anywhere from 120 to 250 days. Spring in temperate deciduous forests is a period of ground vegetation and seasonal herb growth, a process that starts early in the season before trees have regrown their leaves and when ample sunlight is available. Once a suitable temperature is reached in mid- to late spring, budding and flowering of tall deciduous trees also begins. In the summer, when fully-developed leaves occupy all trees, a moderately-dense canopy creates shade, increasing the humidity of forested areas.
Characteristics.
Soil.
Though there is latitudinal variation in soil quality of temperate deciduous forests, with those at central latitudes having a higher soil productivity than those more north or south, soil in this biome is overall highly fertile. The fallen leaves from deciduous trees introduce detritus to the forest floor, increasing levels of nutrients and organic matter in the soil. The high soil productivity of temperate deciduous forests puts them at a high risk of conversion to agricultural land for human use.
Flora.
Temperate deciduous forests are characterized by a variety of temperate deciduous tree species that vary based on region. Most tree species present in temperate deciduous forests are broadleaf trees that lose their leaves in the fall, though some coniferous trees such as pines ("Pinus") are present in northern temperate deciduous forests. Europe's temperate deciduous forests are rich with oaks of the genus "Quercus", European beech trees ("Fagus sylvatica"), and hornbeams "(Fagus grandifolia"), while those in Asia tend to have maples of the genus "Acer", a variety of ash trees ("Fraxinus"), and basswoods ("Tilia"). Similarly to Asia, North American forests have maples, especially "Acer saccharum," and basswoods, in addition to hickories "(Carya") and American chestnuts ("Castanea dentata"). Southern beech "(Nothofagus)" trees are prevalent in the temperate deciduous forests of South America. Elm trees ("Ulmus") and willows ("Salix") can also be found dispersed throughout the temperate deciduous forests of the world. While a wide variety of tree species can be found throughout the temperate deciduous forest biome, tree species richness is typically moderate in each individual ecosystem, with only 3 to 4 tree species per square kilometer.
Besides the old-growth trees that, with their domed tree crowns, form a canopy that lets little light filter through, a sub-canopy of shrubs such as mountain laurel and azaleas is present. These other plant species found in the canopy layers below the 35- to 40-meter mature trees are either adapted to low-light conditions or follow a seasonal schedule of growth that allows them to thrive before the formation of the canopy from mid-spring through mid-fall. Mosses and lichens make up significant ground cover, though they are also found growing on trees.
Fauna.
In addition to characteristic flora, temperate deciduous forests are home to several animal species that rely on the trees and other plant life for shelter and resources, such as squirrels, rabbits, skunks, birds, mountain lions, bobcats, timber wolves, foxes, and black bears. Deer are also present in large populations, though they are clearing rather than true forest animals. Large deer populations have deleterious effects on tree regeneration overall, and grazing also has significant negative effects on the number and kind of herbaceous flowering plants. The continuous increase of deer populations and killing of top carnivores suggests that overgrazing by deer will continue.
Ecosystem services.
Temperate deciduous forests provide several provisioning, regulating, supporting, and cultural ecosystem services. With a higher biodiversity than boreal forests, temperate deciduous forests maintain their genetic diversity by providing the supporting service of habitat availability for a variety of plants and animal species dependent on shade. These forests play a role in the regulation of air and soil quality by preventing soil erosion and flooding, while also storing carbon in their soil. Provisioning services provided by temperate deciduous forests include access to sources of drinking water, oxygen, food, timber, and biomass. Humans depend on temperate deciduous forests for cultural services, using them as spaces for recreation and spiritual practices.
Disturbances.
Natural disturbances cause regular renewal of temperate deciduous forests and create a healthy, heterogeneous environment with constantly changing structures and populations. Weather events like snow, storms, and wind can cause varying degrees of change to the structure of forest canopies, creating log habitats for small animals and spaces for less shade-tolerant species to grow where fallen trees once stood. Other abiotic sources of disturbances to temperate deciduous forests include droughts, waterlogging, and fires. Natural surface fire patterns are especially important in pine reproduction. Biotic factors affecting forests take the form of fungal outbreaks in addition to mountain pine beetle and bark beetle infestations. These beetles are particularly prevalent in North America and kill trees by clogging their vascular tissue. Temperate deciduous forests tend to be resilient after minor weather-related disturbances, though major insect infestations, widespread anthropogenic disturbances, and catastrophic weather events can cause century-long succession or even the permanent conversion of the forest into a grassland.
Climate change.
Rising temperatures and increased dryness in temperate deciduous forests have been noted in recent years as the climate changes. As a result, temperate deciduous forests have been experiencing an earlier onset to spring, as well as a global increase in the frequency and intensity of disturbances. They have been experiencing lower ecological resilience in the face of increasing mega-fires, longer droughts, and severe storms. Damaged wood from increased storm disturbance events provides nesting habitats for beetles, concurrently increasing bark beetle damage. Forest cover decreases with continuous severe disturbances, causing habitat loss and lower biodiversity.
Human use and impact.
Humans rely on wood from temperate deciduous forests for use in the timber industry as well as paper and charcoal production. Logging practices emit high levels of carbon while also causing erosion because fewer tree roots are present to provide soil support. During the European colonization of North America, potash made from tree ashes was exported back to Europe as fertilizer. At this time in history, clearcutting of the original temperate deciduous forests was also performed to make space for agricultural land use, so many forests now present are second-growth. Over 50% of temperate deciduous forests are affected by fragmentation, resulting in small fragments dissected by fields and roads; these islands of green often differ substantially from the original forests and cause challenges for species migration. Seminatural temperate deciduous forests with developed trail systems serve as sites for tourism and recreational activities, such as hiking and hunting. In addition to fragmentation, human use of land adjacent to temperate deciduous forests is associated with pollution that can stunt the growth rate of trees. Invasive species that outcompete native species and alter forest nutrient cycles, such as common buckthorn "(Rhamnus cathartica)", are also introduced by humans. The introduction of exotic diseases, especially, continues to be a threat to forest trees and, hence, the forest.
Conservation.
A method for preserving temperate deciduous forests that has been used in the past is fire suppression. The process of preventing fires is associated with the build-up of biomass that, ultimately, increases the intensity of incidental fires. As an alternative, prescribed burning has been put into practice, in which regular, managed fires are administered to forest ecosystems to imitate the natural disturbances that play a significant role in preserving biodiversity. To combat the effects of deforestation, reforestation has been employed.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\tfrac{1}{2}"
}
] | https://en.wikipedia.org/wiki?curid=9394749 |
9394772 | Barnes–Hut simulation | Approximation algorithm for the n-body problem
The Barnes–Hut simulation (named after Josh Barnes and Piet Hut) is an approximation algorithm for performing an "n"-body simulation. It is notable for having order O("n" log "n") compared to a direct-sum algorithm which would be O("n"2).
The simulation volume is usually divided up into cubic cells via an octree (in a three-dimensional space), so that only particles from nearby cells need to be treated individually, and particles in distant cells can be treated as a single large particle centered at the cell's center of mass (or as a low-order multipole expansion). This can dramatically reduce the number of particle pair interactions that must be computed.
Some of the most demanding high-performance computing projects perform computational astrophysics using the Barnes–Hut treecode algorithm,
such as DEGIMA.
Algorithm.
The Barnes–Hut tree.
In a three-dimensional "n"-body simulation, the Barnes–Hut algorithm recursively divides the "n" bodies into groups by storing them in an octree (or a quad-tree in a 2D simulation). Each node in this tree represents a region of the three-dimensional space.
The topmost node represents the whole space, and its eight children represent the eight octants of the space. The space is recursively subdivided into octants until each subdivision contains 0 or 1 bodies (some regions do not have bodies in all of their octants).
There are two types of nodes in the octree: internal and external nodes. An external node has no children and is either empty or represents a single body. Each internal node represents the group of bodies beneath it, and stores the center of mass and the total mass of all its children bodies.
Calculating the force acting on a body.
To calculate the net force on a particular body, the nodes of the tree are traversed, starting from the root. If the center of mass of an internal node is sufficiently far from the body, the bodies contained in that part of the tree are treated as a single particle whose position and mass is respectively the center of mass and total mass of the internal node. If the internal node is sufficiently close to the body, the process is repeated for each of its children.
Whether a node is sufficiently far away from a body depends on the quotient formula_0, where "s" is the width of the region represented by the internal node, and "d" is the distance between the body and the node's center of mass. The node is sufficiently far away when this ratio is smaller than a threshold value "θ". The parameter "θ" determines the accuracy of the simulation; larger values of "θ" increase the speed of the simulation but decrease its accuracy. If "θ" = 0, no internal node is treated as a single body and the algorithm degenerates to a direct-sum algorithm.
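A compact two-dimensional sketch of the algorithm (a quadtree instead of an octree, with "G" = 1) is given below. It assumes all bodies lie inside the root cell at distinct positions, and the class and function names are illustrative rather than taken from any particular implementation.

```python
import math

class Cell:
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half   # square region: centre and half-width
        self.mass, self.com_x, self.com_y = 0.0, 0.0, 0.0
        self.body = None        # (x, y, m) for an external node holding a single body
        self.children = None    # four sub-cells for an internal node

    def insert(self, x, y, m):
        if self.children is None and self.body is None:
            self.body = (x, y, m)                    # empty external node: store the body
        else:
            if self.children is None:                # occupied external node: subdivide
                h = self.half / 2.0
                self.children = [Cell(self.cx + sx * h, self.cy + sy * h, h)
                                 for sx in (-1, 1) for sy in (-1, 1)]
                old, self.body = self.body, None
                self._child_for(old[0], old[1]).insert(*old)
            self._child_for(x, y).insert(x, y, m)
        total = self.mass + m                        # update total mass and centre of mass
        self.com_x = (self.com_x * self.mass + x * m) / total
        self.com_y = (self.com_y * self.mass + y * m) / total
        self.mass = total

    def _child_for(self, x, y):
        for c in self.children:
            if abs(x - c.cx) <= c.half and abs(y - c.cy) <= c.half:
                return c

def force(cell, x, y, m, theta=0.5, eps=1e-9):
    """Approximate net force on the body at (x, y); far cells act as single particles."""
    if cell.mass == 0.0 or (cell.body is not None and cell.body[:2] == (x, y)):
        return 0.0, 0.0
    dx, dy = cell.com_x - x, cell.com_y - y
    d = math.hypot(dx, dy) + eps
    if cell.children is None or (2.0 * cell.half) / d < theta:   # opening criterion s/d < theta
        f = m * cell.mass / d ** 2
        return f * dx / d, f * dy / d
    fx = fy = 0.0
    for c in cell.children:
        cfx, cfy = force(c, x, y, m, theta, eps)
        fx, fy = fx + cfx, fy + cfy
    return fx, fy

root = Cell(0.0, 0.0, 1.0)      # square domain [-1, 1] x [-1, 1]
bodies = [(0.3, 0.4, 1.0), (-0.5, 0.1, 2.0), (0.7, -0.6, 0.5), (-0.2, -0.8, 1.5)]
for bdy in bodies:
    root.insert(*bdy)
print(force(root, *bodies[0]))
```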
References and sources.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "s/d"
}
] | https://en.wikipedia.org/wiki?curid=9394772 |
9395279 | Grothendieck inequality | In mathematics, the Grothendieck inequality states that there is a universal constant formula_0 with the following property. If "M""ij" is an "n" × "n" (real or complex) matrix with
formula_1
for all (real or complex) numbers "s""i", "t""j" of absolute value at most 1, then
formula_2
for all vectors "S""i", "T""j" in the unit ball "B"("H") of a (real or complex) Hilbert space "H", the constant formula_0 being independent of "n". For a fixed Hilbert space of dimension "d", the smallest constant that satisfies this property for all "n" × "n" matrices is called a Grothendieck constant and denoted formula_3. In fact, there are two Grothendieck constants, formula_4 and formula_5, depending on whether one works with real or complex numbers, respectively.
The Grothendieck inequality and Grothendieck constants are named after Alexander Grothendieck, who proved the existence of the constants in a paper published in 1953.
Motivation and the operator formulation.
Let formula_6 be an formula_7 matrix. Then formula_8 defines a linear operator between the normed spaces formula_9 and formula_10 for formula_11. The formula_12-norm of formula_8 is the quantity
formula_13
If formula_14, we denote the norm by formula_15.
One can consider the following question: For what values of formula_16 and formula_17 is formula_18 maximized? Since formula_8 is linear, it suffices to consider formula_16 such that formula_19 contains as many points as possible, and also formula_17 such that formula_20 is as large as possible. By comparing formula_21 for formula_22, one sees that formula_23 for all formula_11.
One way to compute formula_24 is by solving the following quadratic integer program:
formula_25
To see this, note that formula_26, and taking the maximum over formula_27 gives formula_28. Then taking the maximum over formula_29 gives formula_24 by the convexity of formula_30 and by the triangle inequality. This quadratic integer program can be relaxed to the following semidefinite program:
formula_31
It is known that exactly computing formula_18 for formula_32 is NP-hard, while exactly computing formula_15 is NP-hard for formula_33.
One can then ask the following natural question: How well does an optimal solution to the semidefinite program approximate formula_24? The Grothendieck inequality provides an answer to this question: There exists a fixed constant formula_34 such that, for any formula_35, for any formula_7 matrix formula_8, and for any Hilbert space formula_36,
formula_37
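The gap between the two quantities can be observed numerically. The sketch below computes formula_24 by brute force over sign vectors and the semidefinite relaxation with CVXPY (assuming that package and its bundled SDP solver are available); the matrix is random and purely illustrative.

```python
import itertools
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n = 3, 3
A = rng.standard_normal((m, n))

# Exact ||A||_{inf -> 1}: maximise x^T A y over sign vectors (exponential time).
norm_inf_1 = max(np.array(x) @ A @ np.array(y)
                 for x in itertools.product([-1, 1], repeat=m)
                 for y in itertools.product([-1, 1], repeat=n))

# SDP relaxation: Z is the Gram matrix of the unit vectors x_1..x_m, y_1..y_n.
Z = cp.Variable((m + n, m + n), PSD=True)
prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(A, Z[:m, m:]))),
                  [cp.diag(Z) == 1])
prob.solve()

print(norm_inf_1, prob.value)   # SDP(A) is at most K_G times ||A||_{inf -> 1}
```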
Bounds on the constants.
The sequences formula_4 and formula_5 are easily seen to be increasing, and Grothendieck's result states that they are bounded, so they have limits.
Grothendieck proved that formula_38 where formula_39 is defined to be formula_40.
Krivine (1979) improved the result by proving that formula_41, conjecturing that the upper bound is tight. However, this conjecture was disproved by Braverman, Makarychev, Makarychev and Naor (2011).
Grothendieck constant of order "d".
Boris Tsirelson showed that the Grothendieck constants formula_4 play an essential role in the problem of quantum nonlocality: the Tsirelson bound of any full correlation bipartite Bell inequality for a quantum system of dimension "d" is upperbounded by formula_42.
Lower bounds.
Some historical data on best known lower bounds of formula_4 is summarized in the following table.
Upper bounds.
Some historical data on best known upper bounds of formula_4:
Applications.
Cut norm estimation.
Given an formula_7 real matrix formula_6, the cut norm of formula_8 is defined by
formula_43
The notion of cut norm is essential in designing efficient approximation algorithms for dense graphs and matrices. More generally, the definition of cut norm can be generalized for symmetric measurable functions formula_44 so that the cut norm of formula_45 is defined by
formula_46
This generalized definition of cut norm is crucial in the study of the space of graphons, and the two definitions of cut norm can be linked via the adjacency matrix of a graph.
An application of the Grothendieck inequality is to give an efficient algorithm for approximating the cut norm of a given real matrix formula_8; specifically, given an formula_7 real matrix, one can find a number formula_47 such that
formula_48
where formula_49 is an absolute constant. This approximation algorithm uses semidefinite programming.
We give a sketch of this approximation algorithm. Let formula_50 be formula_51 matrix defined by
formula_52
One can verify that formula_53 by observing that, if formula_54 form a maximizer for the cut norm of formula_55, then
formula_56
form a maximizer for the cut norm of formula_8. Next, one can verify that formula_57, where
formula_58
Although not important in this proof, formula_59 can be interpreted to be the norm of formula_55 when viewed as a linear operator from formula_60 to formula_61.
Now it suffices to design an efficient algorithm for approximating formula_24. We consider the following semidefinite program:
formula_62
Then formula_63. The Grothedieck inequality implies that formula_64. Many algorithms (such as interior-point methods, first-order methods, the bundle method, the augmented Lagrangian method) are known to output the value of a semidefinite program up to an additive error formula_65 in time that is polynomial in the program description size and formula_66. Therefore, one can output formula_67 which satisfies
formula_68
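For small matrices the identities used in this argument can be checked directly by brute force (exponential time, so only for illustration). The sketch below builds the augmented matrix "B" from a random "A" as defined above and confirms numerically that the cut norm of "A" equals the cut norm of "B" and equals one quarter of the (∞ → 1)-norm of "B".

```python
import itertools
import numpy as np

def cut_norm(M):
    p, q = M.shape
    return max(abs(np.array(s) @ M @ np.array(t))
               for s in itertools.product([0, 1], repeat=p)
               for t in itertools.product([0, 1], repeat=q))

def norm_inf_to_1(M):
    p, q = M.shape
    return max(np.array(x) @ M @ np.array(y)
               for x in itertools.product([-1, 1], repeat=p)
               for y in itertools.product([-1, 1], repeat=q))

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = np.block([[A, -A.sum(axis=1, keepdims=True)],
              [-A.sum(axis=0, keepdims=True), np.array([[A.sum()]])]])

print(cut_norm(A), cut_norm(B), norm_inf_to_1(B) / 4)   # all three coincide (up to rounding)
```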
Szemerédi's regularity lemma.
Szemerédi's regularity lemma is a useful tool in graph theory, asserting (informally) that any graph can be partitioned into a controlled number of pieces that interact with each other in a pseudorandom way. Another application of the Grothendieck inequality is to produce a partition of the vertex set that satisfies the conclusion of Szemerédi's regularity lemma, via the cut norm estimation algorithm, in time that is polynomial in the upper bound of Szemerédi's regular partition size but independent of the number of vertices in the graph.
It turns out that the main "bottleneck" of constructing a Szemeredi's regular partition in polynomial time is to determine in polynomial time whether or not a given pair formula_69 is close to being formula_65-regular, meaning that for all formula_70 with formula_71, we have
formula_72
where formula_73 for all formula_74 and formula_75 are the vertex and edge sets of the graph, respectively. To that end, we construct an formula_76 matrix formula_77, where formula_78, defined by
formula_79
Then for all formula_70,
formula_80
Hence, if formula_69 is not formula_65-regular, then formula_81. It follows that using the cut norm approximation algorithm together with the rounding technique, one can find in polynomial time formula_70 such that
formula_82
Then the algorithm for producing a Szemerédi's regular partition follows from the constructive argument of Alon et al.
Variants of the Grothendieck inequality.
Grothendieck inequality of a graph.
The Grothendieck inequality of a graph states that for each formula_83 and for each graph formula_84 without self loops, there exists a universal constant formula_85 such that every formula_76 matrix formula_6 satisfies that
formula_86
The Grothendieck constant of a graph formula_87, denoted formula_88, is defined to be the smallest constant formula_89 that satisfies the above property.
The Grothendieck inequality of a graph is an extension of the Grothendieck inequality because the former inequality is the special case of the latter inequality when formula_87 is a bipartite graph with two copies of formula_90 as its bipartition classes. Thus,
formula_91
For formula_92, the formula_93-vertex complete graph, the Grothendieck inequality of formula_87 becomes
formula_94
It turns out that formula_95. On one hand, we have formula_96. Indeed, the following inequality is true for any formula_76 matrix formula_6, which implies that formula_96 by the Cauchy-Schwarz inequality:
formula_97
On the other hand, the matching lower bound formula_98 is due to Alon, Makarychev, Makarychev and Naor in 2006.
The Grothendieck inequality formula_88 of a graph formula_87 depends upon the structure of formula_87. It is known that
formula_99
and
formula_100
where formula_101 is the clique number of formula_87, i.e., the largest formula_102 such that there exists formula_103 with formula_104 such that formula_105 for all distinct formula_106, and
formula_107
The parameter formula_108 is known as the Lovász theta function of the complement of formula_87.
L^p Grothendieck inequality.
In the application of the Grothendieck inequality for approximating the cut norm, we have seen that the Grothendieck inequality answers the following question: How well does an optimal solution to the semidefinite program formula_109 approximate formula_24, which can be viewed as an optimization problem over the unit cube? More generally, we can ask similar questions over convex bodies other than the unit cube.
For instance, the following inequality is due to Naor and Schechtman and independently due to Guruswami et al: For every formula_76 matrix formula_6 and every formula_110,
formula_111
where
formula_112
The constant formula_113 is sharp in the inequality. Stirling's formula implies that formula_114 as formula_115.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K_G"
},
{
"math_id": 1,
"text": "\\Big| \\sum_{i,j} M_{ij} s_i t_j \\Big| \\le 1"
},
{
"math_id": 2,
"text": "\\Big| \\sum_{i,j} M_{ij} \\langle S_i, T_j \\rangle \\Big| \\le K_G"
},
{
"math_id": 3,
"text": "K_G(d)"
},
{
"math_id": 4,
"text": "K_G^{\\mathbb R}(d)"
},
{
"math_id": 5,
"text": "K_G^{\\mathbb C}(d)"
},
{
"math_id": 6,
"text": "A = (a_{ij})"
},
{
"math_id": 7,
"text": "m \\times n"
},
{
"math_id": 8,
"text": "A"
},
{
"math_id": 9,
"text": "(\\mathbb R^m, \\| \\cdot \\|_p)"
},
{
"math_id": 10,
"text": "(\\mathbb R^n, \\| \\cdot \\|_q)"
},
{
"math_id": 11,
"text": "1 \\leq p, q \\leq \\infty"
},
{
"math_id": 12,
"text": "(p \\to q)"
},
{
"math_id": 13,
"text": "\\| A \\|_{p \\to q} = \\max_{x \\in \\mathbb R^n : \\| x \\|_p = 1} \\| Ax \\|_q."
},
{
"math_id": 14,
"text": "p = q"
},
{
"math_id": 15,
"text": "\\| A \\|_p"
},
{
"math_id": 16,
"text": "p"
},
{
"math_id": 17,
"text": "q"
},
{
"math_id": 18,
"text": "\\| A \\|_{p \\to q}"
},
{
"math_id": 19,
"text": "\\{ x \\in \\mathbb R^n : \\| x \\|_p \\leq 1 \\}"
},
{
"math_id": 20,
"text": "\\| Ax \\|_q"
},
{
"math_id": 21,
"text": "\\| x \\|_p"
},
{
"math_id": 22,
"text": "p = 1, 2, \\ldots, \\infty"
},
{
"math_id": 23,
"text": "\\| A \\|_{\\infty \\to 1} \\geq \\| A \\|_{p \\to q}"
},
{
"math_id": 24,
"text": "\\| A \\|_{\\infty \\to 1}"
},
{
"math_id": 25,
"text": "\\begin{align} \\max & \\qquad \\sum_{i, j} A_{ij} x_i y_j \\\\ \\text{s.t.} & \\qquad (x, y) \\in \\{ -1, 1 \\}^{m + n} \\end{align}"
},
{
"math_id": 26,
"text": "\\sum_{i, j} A_{ij} x_i y_j = \\sum_i (Ay)_i x_i"
},
{
"math_id": 27,
"text": "x \\in \\{ -1, 1 \\}^m"
},
{
"math_id": 28,
"text": "\\| Ay \\|_1"
},
{
"math_id": 29,
"text": "y \\in \\{ -1, 1 \\}^n"
},
{
"math_id": 30,
"text": "\\{ x \\in \\mathbb R^m : \\| x \\|_\\infty = 1 \\}"
},
{
"math_id": 31,
"text": "\\begin{align} \\max & \\qquad \\sum_{i, j} A_{ij} \\langle x^{(i)}, y^{(j)} \\rangle \\\\ \\text{s.t.} & \\qquad x^{(1)}, \\ldots, x^{(m)}, y^{(1)}, \\ldots, y^{(n)} \\text{ are unit vectors in } (\\mathbb R^d, \\| \\cdot \\|_2) \\end{align}"
},
{
"math_id": 32,
"text": "1 \\leq q < p \\leq \\infty"
},
{
"math_id": 33,
"text": "p \\not \\in \\{ 1, 2, \\infty \\}"
},
{
"math_id": 34,
"text": "C > 0"
},
{
"math_id": 35,
"text": "m, n \\geq 1"
},
{
"math_id": 36,
"text": "H"
},
{
"math_id": 37,
"text": "\\max_{x^{(i)}, y^{(i)} \\in H \\text{ unit vectors}} \\sum_{i, j} A_{ij} \\left\\langle x^{(i)}, y^{(j)} \\right\\rangle_H \\leq C \\| A \\|_{\\infty \\to 1}."
},
{
"math_id": 38,
"text": "1.57 \\approx \\frac{\\pi}{2} \\leq K_G^{\\mathbb R} \\leq \\operatorname{sinh}\\frac{\\pi}{2} \\approx 2.3,"
},
{
"math_id": 39,
"text": "K_G^{\\mathbb R}"
},
{
"math_id": 40,
"text": "\\sup_d K_G^{\\mathbb R}(d)"
},
{
"math_id": 41,
"text": "K_G^{\\mathbb R} \\le \\frac{\\pi}{2 \\ln(1 + \\sqrt{2})} \\approx 1.7822"
},
{
"math_id": 42,
"text": "K_G^{\\mathbb R}(2d^2)"
},
{
"math_id": 43,
"text": "\\| A \\|_\\square = \\max_{S \\subset [m], T \\subset [n]} \\left| \\sum_{i \\in S, j \\in T} a_{ij} \\right|."
},
{
"math_id": 44,
"text": "W : [0, 1]^2 \\to \\mathbb R "
},
{
"math_id": 45,
"text": "W "
},
{
"math_id": 46,
"text": "\\| W \\|_\\square = \\sup_{S, T \\subset [0, 1]} \\left| \\int_{S \\times T} W \\right|. "
},
{
"math_id": 47,
"text": "\\alpha"
},
{
"math_id": 48,
"text": "\\| A \\|_\\square \\leq \\alpha \\leq C \\| A \\|_\\square,"
},
{
"math_id": 49,
"text": "C"
},
{
"math_id": 50,
"text": "B = (b_{ij})"
},
{
"math_id": 51,
"text": "(m + 1) \\times (n + 1)"
},
{
"math_id": 52,
"text": "\\begin{pmatrix} a_{11} & a_{12} & \\ldots & a_{1n} & -\\sum_{k = 1}^n a_{1k} \\\\ a_{21} & a_{22} & \\ldots & a_{2n} & -\\sum_{k = 1}^n a_{2k} \\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ a_{m1} & a_{m2} & \\ldots & a_{mn} & -\\sum_{k = 1}^n a_{mk} \\\\ -\\sum_{\\ell = 1}^m a_{\\ell 1} & -\\sum_{\\ell = 1}^m a_{\\ell 2} & \\ldots & -\\sum_{\\ell = 1}^m a_{\\ell n} & \\sum_{k = 1}^n \\sum_{\\ell = 1}^m a_{\\ell k} \\end{pmatrix}."
},
{
"math_id": 53,
"text": "\\| A \\|_\\square = \\| B \\|_\\square"
},
{
"math_id": 54,
"text": "S \\in [m + 1], T \\in [n + 1]"
},
{
"math_id": 55,
"text": "B"
},
{
"math_id": 56,
"text": "S^* = \\begin{cases} S, & \\text{if } m + 1 \\not \\in S, \\\\ {[m]} \\setminus S, & \\text{otherwise}, \\end{cases} \\qquad\nT^* = \\begin{cases} T, & \\text{if } n + 1 \\not \\in T, \\\\ {[n]} \\setminus S, & \\text{otherwise}, \\end{cases} \\qquad"
},
{
"math_id": 57,
"text": "\\| B \\|_\\square = \\| B \\|_{\\infty \\to 1}/4"
},
{
"math_id": 58,
"text": "\\| B \\|_{\\infty \\to 1} = \\max \\left\\{ \\sum_{i = 1}^{m + 1} \\sum_{j = 1}^{n + 1} b_{ij} \\varepsilon_i \\delta_j : \\varepsilon_1, \\ldots, \\varepsilon_{m + 1} \\in \\{ -1, 1 \\}, \\delta_1, \\ldots, \\delta_{n + 1} \\in \\{ -1, 1 \\} \\right\\}."
},
{
"math_id": 59,
"text": "\\| B \\|_{\\infty \\to 1}"
},
{
"math_id": 60,
"text": "\\ell_\\infty^m"
},
{
"math_id": 61,
"text": "\\ell_1^m"
},
{
"math_id": 62,
"text": "\\text{SDP}(A) = \\max \\left\\{ \\sum_{i = 1}^m \\sum_{j = 1}^n a_{ij} \\left\\langle x_i, y_j \\right\\rangle : x_1, \\ldots, x_m, y_1, \\ldots, y_n \\in S^{n + m - 1} \\right\\}."
},
{
"math_id": 63,
"text": "\\text{SDP}(A) \\geq \\| A \\|_{\\infty \\to 1}"
},
{
"math_id": 64,
"text": "\\text{SDP}(A) \\leq K_G^{\\mathbb R} \\| A \\|_{\\infty \\to 1}"
},
{
"math_id": 65,
"text": "\\varepsilon"
},
{
"math_id": 66,
"text": "\\log (1/\\varepsilon)"
},
{
"math_id": 67,
"text": "\\alpha = \\text{SDP}(B)"
},
{
"math_id": 68,
"text": "\\| A \\|_\\square \\leq \\alpha \\leq C \\| A \\|_\\square \\qquad \\text{with} \\qquad C = K_G^{\\mathbb R}. "
},
{
"math_id": 69,
"text": "(X, Y)"
},
{
"math_id": 70,
"text": "S \\subset X, T \\subset Y"
},
{
"math_id": 71,
"text": "|S| \\geq \\varepsilon |X|, |T| \\geq \\varepsilon |Y|"
},
{
"math_id": 72,
"text": "\\left| \\frac{e(S, T)}{|S||T|} - \\frac{e(X, Y)}{|X||Y|} \\right| \\leq \\varepsilon,"
},
{
"math_id": 73,
"text": "e(X', Y') = |\\{ (u, v) \\in X' \\times Y' : uv \\in E \\}|"
},
{
"math_id": 74,
"text": "X', Y' \\subset V"
},
{
"math_id": 75,
"text": "V, E"
},
{
"math_id": 76,
"text": "n \\times n"
},
{
"math_id": 77,
"text": "A = (a_{xy})_{(x, y) \\in X \\times Y}"
},
{
"math_id": 78,
"text": "n = |V|"
},
{
"math_id": 79,
"text": "a_{xy} = \\begin{cases} 1 - \\frac{e(X, Y)}{|X||Y|}, & \\text{if } xy \\in E, \\\\ -\\frac{e(X, Y)}{|X||Y|}, & \\text{otherwise}. \\end{cases}"
},
{
"math_id": 80,
"text": "\\left| \\sum_{x \\in S, y \\in T} a_{xy} \\right| = |S||T| \\left| \\frac{e(S, T)}{|S||T|} - \\frac{e(X, Y)}{|X||Y|} \\right|."
},
{
"math_id": 81,
"text": "\\| A \\|_\\square \\geq \\varepsilon^3 n^2"
},
{
"math_id": 82,
"text": "\\min\\left\\{ n|S|, n|T|, n^2 \\left| \\frac{e(S, T)}{|S||T|} - \\frac{e(X, Y)}{|X||Y|} \\right| \\right\\} \\geq \\left|\\sum_{x \\in S, y \\in T} a_{xy}\\right| \\geq \\frac{1}{K_G^{\\mathbb R}} \\varepsilon^3 n^2 \\geq \\frac{1}{2} \\varepsilon^3 n^2."
},
{
"math_id": 83,
"text": "n \\in \\mathbb N"
},
{
"math_id": 84,
"text": "G = (\\{ 1, \\ldots, n \\}, E)"
},
{
"math_id": 85,
"text": "K > 0"
},
{
"math_id": 86,
"text": "\\max_{x_1, \\ldots, x_n \\in S^{n - 1}} \\sum_{ij \\in E} a_{ij} \\left\\langle x_i, x_j \\right\\rangle \\leq K \\max_{\\varepsilon_1, \\ldots, \\varepsilon_n \\in \\{ -1, 1 \\}} \\sum_{ij \\in E} a_{ij} \\varepsilon_1 \\varepsilon_n."
},
{
"math_id": 87,
"text": "G"
},
{
"math_id": 88,
"text": "K(G)"
},
{
"math_id": 89,
"text": "K"
},
{
"math_id": 90,
"text": "\\{ 1, \\ldots, n \\}"
},
{
"math_id": 91,
"text": "K_G = \\sup_{n \\in \\mathbb N} \\{ K(G) : G \\text{ is an } n \\text{-vertex bipartite graph} \\}."
},
{
"math_id": 92,
"text": "G = K_n"
},
{
"math_id": 93,
"text": "n"
},
{
"math_id": 94,
"text": "\\max_{x_1, \\ldots, x_n \\in S^{n - 1}} \\sum_{i, j \\in \\{ 1, \\ldots, n \\}, i \\neq j} a_{ij} \\left\\langle x_i, x_j \\right\\rangle \\leq K(K_n) \\max_{\\varepsilon_1, \\ldots, \\varepsilon_n \\in \\{ -1, 1 \\}} \\sum_{i, j \\in \\{ 1, \\ldots, n \\}, i \\neq j} a_{ij} \\varepsilon_i \\varepsilon_j."
},
{
"math_id": 95,
"text": "K(K_n) \\asymp \\log n"
},
{
"math_id": 96,
"text": "K(K_n) \\lesssim \\log n"
},
{
"math_id": 97,
"text": "\\max_{x_1, \\ldots, x_n \\in S^{n - 1}} \\sum_{i, j \\in \\{ 1, \\ldots, n \\}, i \\neq j} a_{ij} \\left\\langle x_i, x_j \\right\\rangle \\leq \\log\\left(\\frac{\\sum_{i \\in \\{ 1, \\ldots, n \\}} \\sum_{j \\in \\{ 1, \\ldots, n \\} \\setminus \\{ i \\}} |a_{ij}|}{\\sqrt{\\sum_{i \\in \\{ 1, \\ldots, n \\}} \\sum_{j \\in \\{ 1, \\ldots, n \\} \\setminus \\{ i \\}} a_{ij}^2}}\\right) \\max_{\\varepsilon_1, \\ldots, \\varepsilon_n \\in \\{ -1, 1 \\}} \\sum_{i, j \\in \\{ 1, \\ldots, n \\}, i \\neq j} a_{ij} \\varepsilon_1 \\varepsilon_n."
},
{
"math_id": 98,
"text": "K(K_n) \\gtrsim \\log n"
},
{
"math_id": 99,
"text": "\\log \\omega \\lesssim K(G) \\lesssim \\log \\vartheta,"
},
{
"math_id": 100,
"text": "K(G) \\leq \\frac{\\pi}{2\\log\\left(\\frac{1 + \\sqrt{(\\vartheta - 1)^2 + 1}}{\\vartheta - 1}\\right)},"
},
{
"math_id": 101,
"text": "\\omega"
},
{
"math_id": 102,
"text": "k \\in \\{ 2, \\ldots, n \\}"
},
{
"math_id": 103,
"text": "S \\subset \\{ 1, \\ldots, n \\}"
},
{
"math_id": 104,
"text": "|S| = k"
},
{
"math_id": 105,
"text": "ij \\in E"
},
{
"math_id": 106,
"text": "i, j \\in S"
},
{
"math_id": 107,
"text": "\\vartheta = \\min \\left\\{ \\max_{i \\in \\{ 1, \\ldots, n \\}} \\frac{1}{\\langle x_i, y \\rangle} : x_1, \\ldots, x_n, y \\in S^n, \\left\\langle x_i, x_j \\right\\rangle = 0 \\;\\forall ij \\in E \\right\\}."
},
{
"math_id": 108,
"text": "\\vartheta"
},
{
"math_id": 109,
"text": "\\text{SDP}(A)"
},
{
"math_id": 110,
"text": "p \\geq 2"
},
{
"math_id": 111,
"text": "\\max_{x_1, \\ldots, x_n \\in \\mathbb R^n, \\sum_{k = 1}^n \\| x_k \\|_2^p \\leq 1} \\sum_{i = 1}^n \\sum_{j = 1}^n a_{ij} \\left\\langle x_i, x_j \\right\\rangle \\leq \\gamma_p^2 \\max_{t_1, \\ldots, t_n \\in \\mathbb R, \\sum_{k = 1}^n | t_k |^p \\leq 1} \\sum_{i = 1}^n \\sum_{j = 1}^n a_{ij} t_i t_j,"
},
{
"math_id": 112,
"text": "\\gamma_p = \\sqrt{2} \\left(\\frac{\\Gamma((p + 1)/2)}{\\sqrt{\\pi}}\\right)^{1/p}."
},
{
"math_id": 113,
"text": "\\gamma_p^2"
},
{
"math_id": 114,
"text": "\\gamma_p^2 = p/e + O(1)"
},
{
"math_id": 115,
"text": "p \\to \\infty"
}
] | https://en.wikipedia.org/wiki?curid=9395279 |
9397319 | Gauss–Lucas theorem | Geometric relation between the roots of a polynomial and those of its derivative
In complex analysis, a branch of mathematics, the Gauss–Lucas theorem gives a geometric relation between the roots of a polynomial P and the roots of its derivative P'. The set of roots of a real or complex polynomial is a set of points in the complex plane. The theorem states that the roots of P' all lie within the convex hull of the roots of P, that is the smallest convex polygon containing the roots of P. When P has a single root then this convex hull is a single point and when the roots lie on a line then the convex hull is a segment of this line. The Gauss–Lucas theorem, named after Carl Friedrich Gauss and Félix Lucas, is similar in spirit to Rolle's theorem.
Formal statement.
If P is a (nonconstant) polynomial with complex coefficients, all zeros of P' belong to the convex hull of the set of zeros of P.
Special cases.
It is easy to see that if formula_0 is a second degree polynomial, the zero of formula_1 is the average of the roots of P. In that case, the convex hull is the line segment with the two roots as endpoints and it is clear that the average of the roots is the middle point of the segment.
For a third degree complex polynomial P (cubic function) with three distinct zeros, Marden's theorem states that the zeros of P' are the foci of the Steiner inellipse which is the unique ellipse tangent to the midpoints of the triangle formed by the zeros of P.
For a fourth degree complex polynomial P (quartic function) with four distinct zeros forming a concave quadrilateral, one of the zeros of P lies within the convex hull of the other three; all three zeros of P' lie in two of the three triangles formed by the interior zero of P and two other zeros of P.
In addition, if a polynomial of degree n with real coefficients has n distinct real zeros formula_2 we see, using Rolle's theorem, that the zeros of the derivative polynomial are in the interval formula_3 which is the convex hull of the set of roots.
The convex hull of the roots of the polynomial
formula_4
particularly includes the point
formula_5
Proof.
<templatestyles src="Math_proof/styles.css" />Proof
By the fundamental theorem of algebra, formula_6 is a product of linear factors as
formula_7
where the complex numbers formula_8 are the – not necessarily distinct – zeros of the polynomial P, the complex number α is the leading coefficient of P and n is the degree of P.
For any root formula_9 of formula_10, if it is also a root of formula_6, then the theorem is trivially true. Otherwise, we have for the logarithmic derivative
formula_11
Hence
formula_12.
Taking their conjugates, and dividing, we obtain formula_9 as a convex sum of the roots of formula_6:
formula_13
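The statement is easy to test numerically; the following short script is an illustrative sketch added here (the chosen polynomial is arbitrary), not part of the article. For P(z) = z^3 - 1 the zeros are the cube roots of unity and the only zero of P' is 0, their centroid, which lies inside their convex hull:
import numpy as np

P = np.array([1, 0, 0, -1])          # coefficients of P(z) = z^3 - 1
roots_P = np.roots(P)                # the three cube roots of unity
roots_dP = np.roots(np.polyder(P))   # zeros of P'(z) = 3 z^2, i.e. 0 twice

print(np.round(roots_P, 6))
print(np.round(roots_dP, 6))
# The zeros of P form an equilateral triangle centred at 0, and both
# zeros of P' equal 0, so they lie in the convex hull of the zeros of P.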
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P(x) = ax^2+bx+c"
},
{
"math_id": 1,
"text": "P'(x) = 2ax+b"
},
{
"math_id": 2,
"text": "x_1<x_2<\\cdots <x_n,"
},
{
"math_id": 3,
"text": "[x_1,x_n]"
},
{
"math_id": 4,
"text": " p_n x^n+p_{n-1}x^{n-1}+\\cdots +p_0 "
},
{
"math_id": 5,
"text": "-\\frac{p_{n-1}}{n\\cdot p_n}."
},
{
"math_id": 6,
"text": "P"
},
{
"math_id": 7,
"text": " P(z)= \\alpha \\prod_{i=1}^n (z-a_i) "
},
{
"math_id": 8,
"text": "a_1, a_2, \\ldots, a_n"
},
{
"math_id": 9,
"text": "z"
},
{
"math_id": 10,
"text": "P'"
},
{
"math_id": 11,
"text": "0 = \\frac{P^\\prime(z)}{P(z)} = \\sum_{i=1}^n \\frac{1}{z-a_i} = \\sum_{i=1}^n \\frac{\\overline{z}-\\overline{a_i}} {|z-a_i|^2}. "
},
{
"math_id": 12,
"text": "\n\\sum_{i=1}^n \\frac{\\overline{z} } {|z-a_i|^2} = \\sum_{i=1}^n \\frac{\\overline{a_i} } {|z-a_i|^2} "
},
{
"math_id": 13,
"text": "z = \\sum_{i=1}^n \\frac{|z-a_i|^{-2}}{ \\sum_{j=1}^n |z-a_j|^{-2}} a_i"
}
] | https://en.wikipedia.org/wiki?curid=9397319 |
939742 | Round-robin tournament | Type of sports tournament
A round-robin tournament or all-play-all tournament is a competition format in which each contestant meets every other participant, usually in turn. A round-robin contrasts with an elimination tournament, wherein participants are eliminated after a certain number of wins or losses.
Terminology.
The term "round-robin" is derived from the French term ('ribbon'). Over time, the term became idiomized to "robin".
In a "single round-robin" schedule, each participant plays every other participant once. If each participant plays all others twice, this is frequently called a "double round-robin". The term is rarely used when all participants play one another more than twice, and is never used when one participant plays others an unequal number of times, as is the case in almost all of the major North American professional sports leagues.
In the United Kingdom, a round-robin tournament has been called an American tournament in sports such as tennis or billiards which usually have single-elimination (or "knockout") tournaments, although this is now rarely done.
A round-robin tournament with four players is sometimes called "quad" or "foursome".
Applications.
In sports with a large number of competitive matches per season, double round-robins are common. Most association football leagues in the world are organized on a double round-robin basis, in which every team plays all others in its league once at home and once away. This system is also used in qualification for major tournaments such as the FIFA World Cup and the continental tournaments (e.g. UEFA European Championship, CONCACAF Gold Cup, AFC Asian Cup, CONMEBOL Copa América and CAF Cup of Nations). There are also round-robin bridge, chess, draughts, go, ice hockey, curling, and Scrabble tournaments. The World Chess Championship decided in 2005 and in 2007 on an eight-player double round-robin tournament where each player faces every other player once as white and once as black.
In a more extreme example, the KBO League in baseball plays a 16-fold round robin, with each of the 10 teams playing each other 16 times for a total of 144 games per team.
LIDOM (Baseball Winter League in the Dominican Republic) plays an 18-fold round robin as a semi final tournament between four classified teams.
Group tournament rankings usually go by the number of matches won and drawn, with any of a variety of tiebreaker criteria.
Frequently, pool stages within a wider tournament are conducted on a round-robin basis. Examples with single round-robin scheduling include the FIFA World Cup, UEFA European Football Championship, and UEFA Cup (2004–2009) in football, Super Rugby (rugby union) in the Southern Hemisphere during its past iterations as Super 12 and Super 14 (but "not" in its later 15- and 18-team formats), the Cricket World Cup along with Indian Premier League, major Twenty-20 Cricket tournament, and many American football college conferences, such as the Conference USA (which currently has 9 members). The group phases of the UEFA club competitions and Copa Libertadores are contested as a double round-robin, as are most basketball leagues outside the United States, including the regular season of the EuroLeague (as well as its former Top 16 phase); the United Football League has used a double round-robin for both its 2009 and 2010 seasons.
Season-ending tennis tournaments also use a round robin format prior to the semifinal stages.
Evaluation.
Advantages.
The champion in a round-robin tournament is the contestant that wins the most games, except when draws are possible.
In theory, a round-robin tournament is the fairest way to determine the champion from among a known and fixed number of contestants. Each contestant, whether player or team, has equal chances against all other opponents because there is no prior seeding of contestants that will preclude a match between any given pair. The element of luck is seen to be reduced as compared to a knockout system since one or two bad performances need not ruin a competitor's chance of ultimate victory. Final records of participants are more accurate, in the sense that they represent the results over a longer period against the same opposition.
The system is also better for ranking all participants, not just determining the winner. This is helpful to determine the final rank of all competitors, from strongest to weakest, for purposes of qualification for another stage or competition as well as for prize money.
In team sports, the round-robin major league champions are generally regarded as the "best" team in the land, rather than the cup winners, whose tournaments usually follow a single-elimination format.
Moreover, in tournaments such as the FIFA or ICC World Cups, a first round stage consisting of a number of mini round robins between groups of 4 teams guards against the possibility of a team travelling possibly thousands of miles only to be eliminated after just one poor performance in a straight knockout system. The top one, two, or occasionally three teams in these groups then proceed to a straight knockout stage for the remainder of the tournament.
In the circle of death it is possible that no champion emerges from a round-robin tournament, even if there is no draw, but most sports have tie-breaker systems which resolve this.
Disadvantages.
Round-robins can suffer from being too long compared to other tournament types, and with later scheduled games potentially not having any substantial meaning. They may also require tie-breaking procedures.
Swiss system tournaments attempt to combine elements of the round-robin and elimination formats, to provide a worthy champion using fewer rounds than a round-robin, while allowing draws and losses.
Tournament length.
The main disadvantage of a round robin tournament is the time needed to complete it. Unlike a knockout tournament where half of the participants are eliminated after each round, a round robin requires one round less than the number of participants. For instance, a tournament of 16 teams can be completed in just 4 rounds (i.e. 15 matches) in a knockout format; a double elimination tournament format requires 30 (or 31) matches, but a round-robin would require 15 rounds (i.e. 120 matches) to finish if each competitor faces each other once.
Other issues stem from the difference between the theoretical fairness of the round robin format and practice in a real event. Since the victor is gradually arrived at through multiple rounds of play, teams who perform poorly, who might have been quickly eliminated from title contention, are forced to play out their remaining games. Thus games are played late in the competition between competitors with no remaining chance of success. Moreover, some later matches will pair one competitor who has something left to play for against another who does not. It may also be possible for a competitor to play the strongest opponents in a round robin in quick succession while others play them intermittently with weaker opposition. This asymmetry means that playing the same opponents is not necessarily completely equitable.
There is also no scheduled showcase final match unless (by coincidence) two competitors meet in the last match of the tournament, with the result of that match determining the championship. A notable instance of such an event was the 1950 FIFA World Cup match between Uruguay and Brazil.
Qualified teams.
Further issues arise where a round-robin is used as a qualifying round within a larger tournament. A competitor already qualified for the next stage before its last game may either not try hard (in order to conserve resources for the next phase) or even deliberately lose (if the scheduled next-phase opponent for a lower-placed qualifier is perceived to be easier than for a higher-placed one).
Four pairs in the 2012 Olympics Women's doubles badminton, having qualified for the next round, were ejected from the competition for attempting to lose in the round robin stage to avoid compatriots and better ranked opponents. The round robin stage at the Olympics was a new introduction, and these potential problems were readily known prior to the tournament; changes were made prior to the next Olympics to prevent a repeat of these events.
Circle of death.
Another disadvantage, especially in smaller round-robins, is the "circle of death", where teams cannot be separated on a head-to-head record. In a three-team round-robin, where A defeats B, B defeats C, and C defeats A, all three competitors will have a record of one win and one loss, and a tiebreaker will need to be used to separate the teams. This famously happened during the 1994 FIFA World Cup Group E, where all four teams finished with a record of one win, one draw, and one loss. This phenomenon is analogous to the Condorcet paradox in voting theory.
Scheduling algorithm.
If formula_0 is the number of competitors, a pure round robin tournament requires formula_1 games. If formula_0 is even, then in each of formula_2 rounds, formula_3 games can be run concurrently, provided there exist sufficient resources (e.g. courts for a tennis tournament). If formula_0 is odd, there will be formula_0 rounds, each with formula_4 games, and one competitor having no game in that round.
Circle method.
The circle method is a simple algorithm to create a schedule for a round-robin tournament. All competitors are assigned to numbers, and then paired in the first round:
Next, one of the competitors in the first or last column of the table is fixed (number one in this example) and the others rotated clockwise one position:
This is repeated until the next iteration would lead back to the initial pairings:
With an even number formula_0 of competitors this algorithm realizes every possible combination of them (equivalently, that all pairs realized are pairwise different).
First, the algorithm obviously realizes every pair of competitors if one of them equals formula_5 (the non-moving competitor).
Next, for pairs of non-formula_5 competitors, let their distance be the number formula_6 of times the rotation has to be carried out in order that one competitor arrives at the position the other had.
In the example given (formula_7), formula_8 has distance formula_5 to formula_9 and to formula_10 and it has distance formula_11 to formula_12 and to formula_13.
In a round, a non-leftmost position (not including formula_5) can only be taken by competitors of a fixed distance. In round formula_5 of the example, in the second position competitor formula_8 plays against formula_14, their distance is formula_8. In round formula_8, this position is held by competitors formula_10 and formula_15, also having distance formula_8, etc. Similarly, the next position (formula_9 against formula_15 in round formula_5, formula_8 against formula_16 in round formula_8, etc.) can only hold distance-formula_17 competitors.
For every formula_6, there are exactly formula_18 pairs of distance formula_19. There are formula_18 rounds and they all realize one distance-formula_19 pair at the same position. Clearly, these pairs are pairwise different. The conclusion is that every distance-formula_19 pair is realized.
This holds for every formula_19, hence, every pair is realized.
If there are an odd number of competitors, a dummy competitor can be added, whose scheduled opponent in a given round does not play and has a bye. The schedule can therefore be computed as though the dummy were an ordinary player, either fixed or rotating.
Instead of rotating one position, any number relatively prime to formula_20 will generate a complete schedule. The upper and lower rows can indicate home/away in sports, white/black in chess, etc.; to ensure fairness, this must alternate between rounds since competitor 1 is always on the first row. If, say, competitors 3 and 8 were unable to fulfil their fixture in the third round, it would need to be rescheduled outside the other rounds, since both competitors would already be facing other opponents in those rounds. More complex scheduling constraints may require more complex algorithms. This schedule is applied in chess and draughts tournaments of rapid games, where players physically move round a table. In France this is called the Carousel-Berger system (Système Rutch-Berger).
The schedule can also be used for "asynchronous" round-robin tournaments where all games take place at different times (for example, because there is only one venue). The games are played from left to right in each round, and from the first round to the last. When the number of competitors is even, this schedule performs well with respect to quality and fairness measures such as the amount of rest between games. On the other hand, when the number of competitors is odd, it does not perform so well and a different schedule is superior with respect to these measures.
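The rotation described above is straightforward to implement. The following Python sketch is added for illustration (the function name and output format are arbitrary choices, not taken from the text); it fixes the first competitor, rotates the rest, and adds a dummy competitor for a bye when the number of competitors is odd:
def circle_schedule(n):
    players = list(range(1, n + 1))
    if n % 2 == 1:
        players.append(None)             # dummy competitor: pairing with None is a bye
    rounds = []
    for _ in range(len(players) - 1):
        half = len(players) // 2
        # pair the first half against the reversed second half
        rounds.append(list(zip(players[:half], reversed(players[half:]))))
        # the first competitor stays fixed; the others rotate one position
        players = [players[0]] + [players[-1]] + players[1:-1]
    return rounds

for i, rnd in enumerate(circle_schedule(6), start=1):
    print(f"Round {i}:", rnd)
Each pair of competitors appears in exactly one round, as in the argument above.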
Berger tables.
Alternatively Berger tables, named after the Austrian chess master Johann Berger, are widely used in the planning of tournaments. Berger published the pairing tables in his two "Schach-Jahrbücher" (Chess Annals), with due reference to its inventor Richard Schurig.
This constitutes a schedule where player 14 has a fixed position, and all other players are rotated counterclockwise formula_21 positions. This schedule is easily generated manually. To construct the next round, the last player, number 8 in the first round, moves to the head of the table, followed by player 9 against player 7, player 10 against 6, until player 1 against player 2. Arithmetically, this equates to adding formula_21 to the previous row, with the exception of player formula_0. When the result of the addition is greater than formula_20, then subtract formula_20 from the sum.
This schedule can also be represented as an (n-1, n-1) table, expressing the round in which players meet each other. For example, player 7 plays against player 11 in round 4. If a player meets itself, then this shows a bye or a game against player n. All games in a round constitute a diagonal in the table.
The above schedule can also be represented by a graph, as shown below:
Both the graph and the schedule were reported by Édouard Lucas as a recreational mathematics puzzle. Lucas, who describes the method as "simple and ingenious", attributes the solution to Felix Walecki, a teacher at Lycée Condorcet. Lucas also included an alternative solution by means of a sliding puzzle.
Mnemonic.
To easily remember this method, the following mnemonic can be used. Starting from the first round,
the next round is constructed:
and then,
If the number of players is odd, the player in the first venue gets a bye. If the number is even, an added player (ω) becomes the opponent.
Original construction of pairing tables by Richard Schurig (1886).
For an even number formula_0 or an odd number formula_22 of competitors, Schurig builds a table with formula_23 vertical rows and formula_18 horizontal rows. Then he populates it starting from the top left corner by repeating the sequence of numbers from 1 up to formula_18. Here is an example table for 7 or 8 competitors:
Then to get the opponents a second table is constructed. Every horizontal row formula_24 is populated with the same numbers as row formula_25 in the previous table (the last row is populated with numbers from the first row in the original table), but in the reverse order (from right to left).
By merging above tables:
Then the first column is updated: if the number of competitors is even, player number formula_0 is alternatingly substituted for the first and second positions, whereas if the number of competitors is odd a bye is used instead.
The pairing tables were published as an annex concerning the arrangements for the holding of master tournaments. Schurig did not provide a proof nor a motivation for his algorithm.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\begin{matrix} \\frac{n}{2} \\end{matrix}(n - 1)"
},
{
"math_id": 2,
"text": "(n - 1)"
},
{
"math_id": 3,
"text": "\\begin{matrix} \\frac{n}{2} \\end{matrix}"
},
{
"math_id": 4,
"text": "\\begin{matrix} \\frac{n - 1}{2} \\end{matrix}"
},
{
"math_id": 5,
"text": "1"
},
{
"math_id": 6,
"text": "k<\\frac{n}{2}"
},
{
"math_id": 7,
"text": "n=14"
},
{
"math_id": 8,
"text": "2"
},
{
"math_id": 9,
"text": "3"
},
{
"math_id": 10,
"text": "14"
},
{
"math_id": 11,
"text": "6"
},
{
"math_id": 12,
"text": "8"
},
{
"math_id": 13,
"text": "9"
},
{
"math_id": 14,
"text": "13"
},
{
"math_id": 15,
"text": "12"
},
{
"math_id": 16,
"text": "11"
},
{
"math_id": 17,
"text": "4"
},
{
"math_id": 18,
"text": "n-1"
},
{
"math_id": 19,
"text": "k"
},
{
"math_id": 20,
"text": "(n-1)"
},
{
"math_id": 21,
"text": " \\frac{n}{2}"
},
{
"math_id": 22,
"text": "n - 1"
},
{
"math_id": 23,
"text": "n/2"
},
{
"math_id": 24,
"text": "x"
},
{
"math_id": 25,
"text": "x + 1"
}
] | https://en.wikipedia.org/wiki?curid=939742 |
9399072 | Vector measure | In mathematics, a vector measure is a function defined on a family of sets and taking vector values satisfying certain properties. It is a generalization of the concept of finite measure, which takes nonnegative real values only.
Definitions and first consequences.
Given a field of sets formula_0 and a Banach space formula_1 a finitely additive vector measure (or measure, for short) is a function formula_2 such that for any two disjoint sets formula_3 and formula_4 in formula_5 one has
formula_6
A vector measure formula_7 is called countably additive if for any sequence formula_8 of disjoint sets in formula_9 such that their union is in formula_9 it holds that
formula_10
with the series on the right-hand side convergent in the norm of the Banach space formula_11
It can be proved that an additive vector measure formula_7 is countably additive if and only if for any sequence formula_8 as above one has
\lim_{n\to\infty} \left\| \mu{\left(\bigcup_{i=n}^\infty A_i\right)} \right\| = 0, \qquad (*)
where formula_12 is the norm on formula_11
Countably additive vector measures defined on sigma-algebras are more general than finite measures, finite signed measures, and complex measures, which are countably additive functions taking values respectively on the real interval formula_13 the set of real numbers, and the set of complex numbers.
Examples.
Consider the field of sets made up of the interval formula_14 together with the family formula_9 of all Lebesgue measurable sets contained in this interval. For any such set formula_15 define
formula_16
where formula_17 is the indicator function of formula_18 Depending on where formula_7 is declared to take values, two different outcomes are observed:
formula_19 viewed as a function from formula_9 to the formula_20-space formula_21 is a vector measure which is not countably additive;
formula_19 viewed as a function from formula_9 to the formula_20-space formula_22 is a countably additive vector measure.
Both of these statements follow quite easily from the criterion (*) stated above.
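For instance, if the sets formula_8 are pairwise disjoint with positive Lebesgue measure λ and union in formula_9, then (a short verification sketch, not spelled out in the article)
\left\| \mu{\left(\bigcup_{i=n}^\infty A_i\right)} \right\|_{\infty} = 1 \text{ for every } n, \qquad \left\| \mu{\left(\bigcup_{i=n}^\infty A_i\right)} \right\|_{1} = \lambda{\left(\bigcup_{i=n}^\infty A_i\right)} = \sum_{i=n}^\infty \lambda(A_i) \xrightarrow{n\to\infty} 0,
so the criterion (*) fails in the first case and holds in the second.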
The variation of a vector measure.
Given a vector measure formula_23 the variation formula_24 of formula_7 is defined as
formula_25
where the supremum is taken over all the partitions
formula_26
of formula_3 into a finite number of disjoint sets, for all formula_3 in formula_27 Here, formula_12 is the norm on formula_11
The variation of formula_7 is a finitely additive function taking values in formula_28 It holds that
formula_29
for any formula_3 in formula_27 If formula_30 is finite, the measure formula_7 is said to be of bounded variation. One can prove that if formula_7 is a vector measure of bounded variation, then formula_7 is countably additive if and only if formula_24 is countably additive.
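As an illustration (a sketch added here, with λ denoting Lebesgue measure), take the measure formula_7 from the example above. With values in formula_22 its variation is
|\mu|(A) = \sup \sum_{i=1}^n \left\|\chi_{A_i}\right\|_{1} = \sup \sum_{i=1}^n \lambda(A_i) = \lambda(A),
so formula_7 is of bounded variation, whereas with values in formula_21 any set of positive measure can be partitioned into arbitrarily many pieces of positive measure, each of norm one, so the variation is infinite.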
Lyapunov's theorem.
In the theory of vector measures, "Lyapunov's theorem" states that the range of a (non-atomic) finite-dimensional vector measure is closed and convex. In fact, the range of a non-atomic vector measure is a "zonoid" (the closed and convex set that is the limit of a convergent sequence of zonotopes). It is used in economics, in ("bang–bang") control theory, and in statistical theory.
Lyapunov's theorem has been proved by using the Shapley–Folkman lemma, which has been viewed as a discrete analogue of Lyapunov's theorem.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(\\Omega, \\mathcal F)"
},
{
"math_id": 1,
"text": "X,"
},
{
"math_id": 2,
"text": "\\mu:\\mathcal {F} \\to X"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "B"
},
{
"math_id": 5,
"text": "\\mathcal{F}"
},
{
"math_id": 6,
"text": "\\mu(A\\cup B) =\\mu(A) + \\mu (B)."
},
{
"math_id": 7,
"text": "\\mu"
},
{
"math_id": 8,
"text": "(A_i)_{i=1}^{\\infty}"
},
{
"math_id": 9,
"text": "\\mathcal F"
},
{
"math_id": 10,
"text": "\\mu{\\left(\\bigcup_{i=1}^\\infty A_i\\right)} = \\sum_{i=1}^{\\infty}\\mu(A_i)"
},
{
"math_id": 11,
"text": "X."
},
{
"math_id": 12,
"text": "\\|\\cdot\\|"
},
{
"math_id": 13,
"text": "[0, \\infty),"
},
{
"math_id": 14,
"text": "[0, 1]"
},
{
"math_id": 15,
"text": "A,"
},
{
"math_id": 16,
"text": "\\mu(A) = \\chi_A"
},
{
"math_id": 17,
"text": "\\chi"
},
{
"math_id": 18,
"text": "A."
},
{
"math_id": 19,
"text": "\\mu,"
},
{
"math_id": 20,
"text": "L^p"
},
{
"math_id": 21,
"text": "L^\\infty([0, 1]),"
},
{
"math_id": 22,
"text": "L^1([0, 1]),"
},
{
"math_id": 23,
"text": "\\mu : \\mathcal{F} \\to X,"
},
{
"math_id": 24,
"text": "|\\mu|"
},
{
"math_id": 25,
"text": "|\\mu|(A)=\\sup \\sum_{i=1}^n \\|\\mu(A_i)\\|"
},
{
"math_id": 26,
"text": "A = \\bigcup_{i=1}^n A_i"
},
{
"math_id": 27,
"text": "\\mathcal{F}."
},
{
"math_id": 28,
"text": "[0, \\infty]."
},
{
"math_id": 29,
"text": "\\|\\mu(A)\\| \\leq |\\mu|(A)"
},
{
"math_id": 30,
"text": "|\\mu|(\\Omega)"
}
] | https://en.wikipedia.org/wiki?curid=9399072 |
939985 | XOR cipher | Encryption algorithm
In cryptography, the simple XOR cipher is a type of "additive cipher", an encryption algorithm that operates according to the principles:
A formula_0 0 = A,
A formula_0 A = 0,
A formula_0 B = B formula_0 A,
(A formula_0 B) formula_0 C = A formula_0 (B formula_0 C),
(B formula_0 A) formula_0 A = B formula_0 0 = B,
where formula_0 denotes the exclusive disjunction (XOR) operation. This operation is sometimes called modulus 2 addition (or subtraction, which is identical). With this logic, a string of text can be encrypted by applying the bitwise XOR operator to every character using a given key. To decrypt the output, merely reapplying the XOR function with the key will remove the cipher.
Example.
The string "Wiki" (01010111 01101001 01101011 01101001 in 8-bit ASCII) can be encrypted with the repeating key 11110011 as follows:
And conversely, for decryption:
Use and security.
The XOR operator is extremely common as a component in more complex ciphers. By itself, using a constant repeating key, a simple XOR cipher can trivially be broken using frequency analysis. If the content of any message can be guessed or otherwise known then the key can be revealed. Its primary merit is that it is simple to implement, and that the XOR operation is computationally inexpensive. A simple repeating XOR (i.e. using the same key for xor operation on the whole data) cipher is therefore sometimes used for hiding information in cases where no particular security is required. The XOR cipher is often used in computer malware to make reverse engineering more difficult.
If the key is random and is at least as long as the message, the XOR cipher is much more secure than when there is key repetition within a message. When the keystream is generated by a pseudo-random number generator, the result is a stream cipher. With a key that is truly random, the result is a one-time pad, which is unbreakable in theory.
The XOR operator in any of these ciphers is vulnerable to a known-plaintext attack, since "plaintext" formula_0 "ciphertext" = "key".
It is also trivial to flip arbitrary bits in the decrypted plaintext by manipulating the ciphertext.
This is called malleability.
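A two-line computation illustrates this (an added example; the key and the flipped bit are arbitrary choices): decrypting a tampered ciphertext yields the plaintext with exactly the same bits flipped.
key = 0x5A
p = ord('A')            # plaintext byte 0b01000001
c = p ^ key             # encrypt
c ^= 0b00000010         # attacker flips one ciphertext bit
print(chr(c ^ key))     # prints 'C': the corresponding plaintext bit is flipped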
Usefulness in cryptography.
The primary reason XOR is so useful in cryptography is because it is "perfectly balanced"; for a given plaintext input 0 or 1, the ciphertext result is equally likely to be either 0 or 1 for a truly random key bit.
The table below shows all four possible pairs of plaintext and key bits. It is clear that if nothing is known about the key or plaintext, nothing can be determined from the ciphertext alone.
Plaintext Key Ciphertext
0 0 0
0 1 1
1 0 1
1 1 0
Other logical operations such as AND or OR do not have such a mapping (for example, AND would produce three 0's and one 1, so knowing that a given ciphertext bit is a 0 implies that there is a 2/3 chance that the original plaintext bit was a 0, as opposed to the ideal 1/2 chance in the case of XOR).
Example implementation.
Example using the Python programming language.
from os import urandom

def genkey(length: int) -> bytes:
    """Generate key."""
    return urandom(length)

def xor_strings(s, t) -> bytes:
    """XOR two strings together."""
    if isinstance(s, str):
        # Text strings contain single characters
        return "".join(chr(ord(a) ^ b) for a, b in zip(s, t)).encode('utf8')
    else:
        # Bytes objects contain integer values in the range 0-255
        return bytes([a ^ b for a, b in zip(s, t)])

message = 'This is a secret message'
print('Message:', message)

key = genkey(len(message))
print('Key:', key)

cipherText = xor_strings(message.encode('utf8'), key)
print('cipherText:', cipherText)
print('decrypted:', xor_strings(cipherText, key).decode('utf8'))

if xor_strings(cipherText, key).decode('utf8') == message:
    print('Unit test passed')
else:
    print('Unit test failed')
A shorter example using the R programming language, based on a puzzle posted on Instagram by GCHQ.
secret_key <- c(0xc6, 0xb5, 0xca, 0x01) |> as.raw()
secret_message <- "I <3 Wikipedia" |>
charToRaw() |>
xor(secret_key) |>
base64enc::base64encode()
secret_message_bytes <- secret_message |>
base64enc::base64decode()
xor(secret_message_bytes, secret_key) |> rawToChar()
References.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\oplus"
}
] | https://en.wikipedia.org/wiki?curid=939985 |
9400139 | Tesseractic honeycomb | In four-dimensional euclidean geometry, the tesseractic honeycomb is one of the three regular space-filling tessellations (or honeycombs), represented by Schläfli symbol {4,3,3,4}, and consisting of a packing of tesseracts (4-hypercubes).
Its vertex figure is a 16-cell. Two tesseracts meet at each cubic cell, four meet at each square face, eight meet on each edge, and sixteen meet at each vertex.
It is an analog of the square tiling, {4,4}, of the plane and the cubic honeycomb, {4,3,4}, of 3-space. These are all part of the hypercubic honeycomb family of tessellations of the form {4,3...,3,4}. Tessellations in this family are self-dual.
Coordinates.
Vertices of this honeycomb can be positioned in 4-space in all integer coordinates (i,j,k,l).
Sphere packing.
Like all regular hypercubic honeycombs, the tesseractic honeycomb corresponds to a sphere packing of edge-length-diameter spheres centered on each vertex, or (dually) inscribed in each cell instead. In the hypercubic honeycomb of 4 dimensions, vertex-centered 3-spheres and cell-inscribed 3-spheres will both fit at once, forming the unique regular body-centered cubic lattice of equal-sized spheres (in any number of dimensions). Since the tesseract is radially equilateral, there is exactly enough space in the hole between the 16 vertex-centered 3-spheres for another edge-length-diameter 3-sphere. (This 4-dimensional body centered cubic lattice is actually the union of two tesseractic honeycombs, in dual positions.)
This is the same densest known regular 3-sphere packing, with kissing number 24, that is also seen in the other two regular tessellations of 4-space, the 16-cell honeycomb and the 24-cell-honeycomb. Each tesseract-inscribed 3-sphere kisses a surrounding shell of 24 3-spheres, 16 at the vertices of the tesseract and 8 inscribed in the adjacent tesseracts. These 24 kissing points are the vertices of a 24-cell of radius (and edge length) 1/2.
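The claim that the tesseract is radially equilateral can be checked directly (an illustrative computation, not from the article): a tesseract with unit edge length centered at the origin has vertices (±1/2, ±1/2, ±1/2, ±1/2), so the distance from its center to any vertex is
\sqrt{\tfrac{1}{4}+\tfrac{1}{4}+\tfrac{1}{4}+\tfrac{1}{4}} = 1,
equal to the edge length; a 3-sphere of radius 1/2 placed at the center therefore exactly touches each of the 16 vertex-centered 3-spheres.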
Constructions.
There are many different Wythoff constructions of this honeycomb. The most symmetric form is regular, with Schläfli symbol {4,3,3,4}. Another form has two alternating tesseract facets (like a checkerboard) with Schläfli symbol {4,3,31,1}. The lowest symmetry Wythoff construction has 16 types of facets around each vertex and a prismatic product Schläfli symbol {∞}4. One can be made by stericating another.
Related polytopes and tessellations.
The [4,3,31,1], , Coxeter group generates 31 permutations of uniform tessellations, 23 with distinct symmetry and 4 with distinct geometry. There are two alternated forms: the alternations (19) and (24) have the same geometry as the 16-cell honeycomb and snub 24-cell honeycomb respectively.
The 24-cell honeycomb is similar, but in addition to the vertices at integers (i,j,k,l), it has vertices at half integers (i+1/2,j+1/2,k+1/2,l+1/2) of odd integers only. It is a half-filled body centered cubic (a checkerboard in which the red 4-cubes have a central vertex but the black 4-cubes do not).
The tesseract can make a regular tessellation of the 4-sphere, with three tesseracts per face, with Schläfli symbol {4,3,3,3}, called an "order-3 tesseractic honeycomb". It is topologically equivalent to the regular polytope penteract in 5-space.
The tesseract can make a regular tessellation of 4-dimensional hyperbolic space, with 5 tesseracts around each face, with Schläfli symbol {4,3,3,5}, called an order-5 tesseractic honeycomb.
The Ammann–Beenker tiling is an aperiodic tiling in 2 dimensions obtained by cut-and-project on the tesseractic honeycomb along an eightfold rotational axis of symmetry.
Birectified tesseractic honeycomb.
A birectified tesseractic honeycomb, , contains all rectified 16-cell (24-cell) facets and is the Voronoi tessellation of the D4* lattice. Facets can be identically colored from a doubled formula_0×2, 4,3,3,4 symmetry, alternately colored from formula_0, [4,3,3,4] symmetry, three colors from formula_1, [4,3,31,1] symmetry, and 4 colors from formula_2, [31,1,1,1] symmetry.
See also.
Regular and uniform honeycombs in 4-space:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{\\tilde{C}}_4"
},
{
"math_id": 1,
"text": "{\\tilde{B}}_4"
},
{
"math_id": 2,
"text": "{\\tilde{D}}_4"
}
] | https://en.wikipedia.org/wiki?curid=9400139 |
940033 | Evolution strategy | In computer science, an evolution strategy (ES) is an optimization technique based on ideas of evolution. It belongs to the general class of evolutionary computation or artificial evolution methodologies.
History.
The 'evolution strategy' optimization technique was created in the early 1960s and developed further in the 1970s and later by Ingo Rechenberg, Hans-Paul Schwefel and their co-workers.
Methods.
Evolution strategies use natural problem-dependent representations, so problem space and search space are identical. In common with evolutionary algorithms, the operators are applied in a loop. An iteration of the loop is called a generation. The sequence of generations is continued until a termination criterion is met.
The special feature of the ES is the self-adaptation of mutation step sizes and the coevolution associated with it. The ES is briefly presented using the standard form, pointing out that there are many variants. The real-valued chromosome contains, in addition to the formula_0 decision variables, formula_1 mutation step sizes formula_2, where: formula_3. Often one mutation step size is used for all decision variables or each has its own step size. Mate selection to produce formula_4 offspring is random, i.e. independent of fitness. First, new mutation step sizes are generated per mating by intermediate recombination of the parental formula_5 with subsequent mutation as follows:
formula_6
where formula_7 is a normally distributed random variable with mean formula_8 and standard deviation formula_9. formula_7 applies to all formula_10, while formula_11 is newly determined for each formula_10. Next, discrete recombination of the decision variables is followed by a mutation using the new mutation step sizes as standard deviations of the normal distribution. The new decision variables formula_12 are calculated as follows:
formula_13
This results in an evolutionary search on two levels: first at the level of the problem itself, and second at the level of the mutation step sizes. In this way, it can be ensured that the ES searches for its target in ever finer steps. However, there is also the danger that the ES may be able to skip larger invalid areas of the search space only with difficulty.
The ES has two variants of selection for generating the next parent population: in the formula_14-ES, only the formula_15 best offspring are used, whereas in the elitist formula_16-ES, the formula_15 best are selected from parents and offspring together.
Bäck and Schwefel recommend that the value of formula_4 should be seven times the population size formula_15, whereby formula_15 must not be chosen too small because of the strong selection pressure. Suitable values for formula_15 are application-dependent and must be determined experimentally.
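The mutation and selection steps above can be sketched in a few lines of code. The following simplified implementation is illustrative only: it omits the recombination step, uses the common τ-scaled variant of the log-normal step-size rule rather than the exact form given above, and the sphere objective together with all constants are arbitrary choices, not taken from the text.
import numpy as np

def comma_es(f, n=10, mu=5, lam=35, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    tau, tau_i = 1 / np.sqrt(2 * n), 1 / np.sqrt(2 * np.sqrt(n))
    x = rng.normal(size=(mu, n))              # parent decision variables
    s = np.full((mu, n), 0.3)                 # one mutation step size per variable
    for _ in range(generations):
        idx = rng.integers(mu, size=lam)      # parent choice independent of fitness
        # mutate the step sizes first, then the decision variables with the new sigmas
        s_new = s[idx] * np.exp(tau * rng.normal(size=(lam, 1))
                                + tau_i * rng.normal(size=(lam, n)))
        x_new = x[idx] + s_new * rng.normal(size=(lam, n))
        best = np.argsort([f(v) for v in x_new])[:mu]   # comma selection of the mu best offspring
        x, s = x_new[best], s_new[best]
    return min(x, key=f)

sphere = lambda v: float(np.sum(v ** 2))
print(sphere(comma_es(sphere)))               # should be close to 0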
Individual step sizes for each coordinate, or correlations between coordinates, which are essentially defined by an underlying covariance matrix, are controlled in practice either by self-adaptation or by covariance matrix adaptation (CMA-ES). When the mutation step is drawn from a multivariate normal distribution using an evolving covariance matrix, it has been hypothesized that this adapted matrix approximates the inverse Hessian of the search landscape. This hypothesis has been proven for a static model relying on a quadratic approximation.
The selection of the next generation in evolution strategies is deterministic and only based on the fitness rankings, not on the actual fitness values. The resulting algorithm is therefore invariant with respect to monotonic transformations of the objective function. The simplest evolution strategy operates on a population of size two: the current point (parent) and the result of its mutation. Only if the mutant's fitness is at least as good as the parent one, it becomes the parent of the next generation. Otherwise the mutant is disregarded. This is a formula_17"-ES". More generally, formula_4 mutants can be generated and compete with the parent, called "formula_18-ES". In formula_19-ES the best mutant becomes the parent of the next generation while the current parent is always disregarded. For some of these variants, proofs of linear convergence (in a stochastic sense) have been derived on unimodal objective functions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n'"
},
{
"math_id": 2,
"text": "{\\sigma}_j"
},
{
"math_id": 3,
"text": "1\\leq j\\leq n'\\leq n"
},
{
"math_id": 4,
"text": "\\lambda"
},
{
"math_id": 5,
"text": "{\\sigma }_{j}"
},
{
"math_id": 6,
"text": " {\\sigma}'_j = \\sigma_j \\cdot e^{(\\mathcal{N}(0,1)-\\mathcal{N}_j(0,1))} "
},
{
"math_id": 7,
"text": "\\mathcal{N}(0,1)"
},
{
"math_id": 8,
"text": "0"
},
{
"math_id": 9,
"text": "1"
},
{
"math_id": 10,
"text": "{\\sigma}'_j"
},
{
"math_id": 11,
"text": "\\mathcal{N}_j(0,1)"
},
{
"math_id": 12,
"text": " x_j' "
},
{
"math_id": 13,
"text": "x_j'=x_j+\\mathcal{N}_j(0,{\\sigma}_j')"
},
{
"math_id": 14,
"text": "(\\mu ,\\lambda )"
},
{
"math_id": 15,
"text": "\\mu"
},
{
"math_id": 16,
"text": "(\\mu +\\lambda )"
},
{
"math_id": 17,
"text": "\\mathit{(1+1)}"
},
{
"math_id": 18,
"text": "\\mathit{(1+\\lambda )}"
},
{
"math_id": 19,
"text": "(1,\\lambda )"
}
] | https://en.wikipedia.org/wiki?curid=940033 |
9401560 | Weighted matroid | Objective function for greedy algorithms
In combinatorics, a branch of mathematics, a weighted matroid is a matroid endowed with a function that assigns a weight to each element. Formally, let formula_0 be a matroid, where "E" is the set of elements and "I" is the family of independent sets. A weighted matroid has a "weight function" formula_1 which assigns a strictly positive weight to each element of formula_2. We extend the function to subsets of formula_2 by summation; formula_3 is the sum of formula_4 over formula_5 in formula_6.
Finding a maximum-weight independent set.
A basic problem regarding weighted matroids is to find an independent set with a maximum total weight. This problem can be solved using the following simple greedy algorithm: start with the empty set and consider the elements one by one in order of decreasing weight, adding each element to the current set whenever the result is still independent.
This algorithm does not need to know anything about the matroid structure; it just needs an independence oracle for the matroid - a subroutine for testing whether a set is independent.
Jack Edmonds proved that this simple algorithm indeed finds an independent set with maximum weight. Denote the set found by the algorithm by e1, ..., ek. By the matroid properties, it is clear that k=rank(M), otherwise the set could be extended. Assume by contradiction that there is another set with a higher weight. Without loss of generality, it is possible to assume that this set has rank(M) elements too; denote it by f1, ..., fk. Order these items such that w(f1) ≥ ... ≥ w(fk). Let "j" be the first index for which w(fj) > w(ej). Apply the augmentation property to the sets {f1, ..., fj} and {e1, ..., ej-1}; we conclude that there must be some "i" ≤ j such that fi could be added to {e1, ..., ej-1} while keeping it independent. But w(fi) ≥ w(fj) > w(ej), so fi should have been chosen in step j instead of ej - a contradiction.
Example: spanning forest algorithms.
As a simple example, say we wish to find the maximum spanning forest of a graph. That is, given a graph and a weight for each edge, find a forest containing every vertex and maximizing the total weight of the edges in the tree. This problem arises in some clustering applications. It can be solved by Kruskal's algorithm, which can be seen as the special case of the above greedy algorithm to a graphical matroid.
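A compact sketch of this special case and of the general greedy algorithm it instantiates is given below (the function names, the union-find acyclicity oracle and the sample weights are illustrative choices, not taken from the article):
def max_weight_independent_set(elements, weight, is_independent):
    """Greedy: scan elements by decreasing weight, keep each one that
    preserves independence according to the oracle."""
    chosen = []
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(chosen + [e]):
            chosen.append(e)
    return chosen

def acyclic(edges):
    # Independence oracle of the graphic matroid: an edge set is
    # independent exactly when it contains no cycle.
    parent = {}
    def find(v):
        while parent.setdefault(v, v) != v:
            v = parent[v]
        return v
    for u, v, _ in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

edges = [("a", "b", 4), ("b", "c", 2), ("a", "c", 5), ("c", "d", 1)]
forest = max_weight_independent_set(edges, weight=lambda e: e[2], is_independent=acyclic)
print(forest)   # [('a', 'c', 5), ('a', 'b', 4), ('c', 'd', 1)]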
If we look at the definition of the forest matroid, we see that the maximum spanning forest is simply the independent set with largest total weight — such a set must span the graph, for otherwise we can add edges without creating cycles. But how do we find it?
Finding a basis.
There is a simple algorithm for finding a basis: start with formula_6 equal to the empty set, and for each element formula_5 of formula_7 (in any order), add formula_5 to formula_6 whenever formula_11 is still independent.
The result is clearly an independent set. It is a maximal independent set because if formula_9 is not independent for some subset formula_10 of formula_6, then formula_8 is not independent either (the contrapositive follows from the hereditary property). Thus if we pass up an element, we'll never have an opportunity to use it later. We will generalize this algorithm to solve a harder problem.
Extension to optimal.
An independent set of largest total weight is called an "optimal" set. Optimal sets are always bases, because if an element can be added, it should be; this only increases the total weight. As it turns out, there is a trivial greedy algorithm for computing an optimal set of a weighted matroid. It works as follows: proceed as in the basis algorithm above, but consider the elements of formula_7 in order of decreasing weight, adding each element formula_5 to formula_6 whenever formula_11 remains independent.
This algorithm finds a basis, since it is a special case of the above algorithm. It always chooses the element of largest weight that it can while preserving independence (thus the term "greedy"). This always produces an optimal set: suppose that it produces formula_12 and that formula_13. Now for any formula_14 with formula_15, consider the sets formula_16 and formula_17. Since formula_18 is smaller than formula_19, there is some element of formula_19 which can be put into formula_18 with the result still being independent. However formula_20 is an element of maximal weight that can be added to formula_18 to maintain independence. Thus formula_20 is of no smaller weight than some element of formula_19, and hence formula_20 is of at least as large a weight as formula_21. As this is true for all formula_14, formula_6 is at least as weighty as formula_22.
Complexity analysis.
The easiest way to traverse the members of formula_7 in the desired order is to sort them. This requires formula_23 time using a comparison sorting algorithm. We also need to test for each formula_5 whether formula_11 is independent; assuming independence tests require formula_24 time, the total time for the algorithm is formula_25.
If we want to find a minimum spanning tree instead, we simply "invert" the weight function by subtracting it from a large constant. More specifically, let formula_26, where formula_27 exceeds the total weight over all graph edges. Many more optimization problems about all sorts of matroids and weight functions can be solved in this trivial way, although in many cases more efficient algorithms can be found that exploit more specialized properties.
Matroid requirement.
Note also that if we take a set formula_28 of "independent" sets which is a down-set but not a matroid, then the greedy algorithm will not always work. For then there are independent sets formula_29 and formula_30 with formula_31, but such that for no formula_32 is formula_33 independent.
Pick an formula_34 and formula_35 such that formula_36. Weight the elements of formula_37 in the range formula_38 to formula_39, the elements of formula_40 in the range formula_41 to formula_42, the elements of formula_43 in the range formula_44 to formula_41, and the rest in the range formula_45 to formula_46. The greedy algorithm will select the elements of formula_29, and then cannot pick any elements of formula_43. Therefore, the independent set it constructs will be of weight at most formula_47, which is smaller than the weight of formula_30.
Characterization.
This optimization algorithm may be used to characterize matroids: if a family "F" of sets, closed under taking subsets, has the property that, no matter how the sets are weighted, the greedy algorithm finds a maximum-weight set in the family, then "F" must be the family of independent sets of a matroid.
Generalizations.
The notion of matroid has been generalized to allow for other types of sets on which a greedy algorithm gives optimal solutions; see greedoid and matroid embedding for more information. Korte and Lovász would generalize these ideas to objects called "greedoids", which allow even larger classes of problems to be solved by greedy algorithms.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " M = (E, I) "
},
{
"math_id": 1,
"text": " w : E \\rightarrow \\mathbb{R}^+ "
},
{
"math_id": 2,
"text": " E "
},
{
"math_id": 3,
"text": " w(A) "
},
{
"math_id": 4,
"text": " w(x) "
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "E"
},
{
"math_id": 8,
"text": "A \\cup \\{ x\\}"
},
{
"math_id": 9,
"text": "B \\cup \\{ x\\}"
},
{
"math_id": 10,
"text": " B "
},
{
"math_id": 11,
"text": " A \\cup \\{ x \\} "
},
{
"math_id": 12,
"text": "A=\\{e_1,e_2,\\ldots,e_r\\}"
},
{
"math_id": 13,
"text": "B=\\{f_1,f_2,\\ldots,f_r\\}"
},
{
"math_id": 14,
"text": "k"
},
{
"math_id": 15,
"text": "1\\le k\\le r"
},
{
"math_id": 16,
"text": "O_1=\\{e_1,\\ldots,e_{k-1}\\}"
},
{
"math_id": 17,
"text": "O_2=\\{f_1,\\ldots,f_k\\}"
},
{
"math_id": 18,
"text": "O_1"
},
{
"math_id": 19,
"text": "O_2"
},
{
"math_id": 20,
"text": "e_k"
},
{
"math_id": 21,
"text": "f_k"
},
{
"math_id": 22,
"text": "B"
},
{
"math_id": 23,
"text": "O(|E|\\log|E|)"
},
{
"math_id": 24,
"text": " O(f(|E|)) "
},
{
"math_id": 25,
"text": " O(|E|\\log|E| + |E|f(|E|)) "
},
{
"math_id": 26,
"text": " w_{\\text{min}}(x) = W - w(x) "
},
{
"math_id": 27,
"text": "W"
},
{
"math_id": 28,
"text": "I"
},
{
"math_id": 29,
"text": "I_1"
},
{
"math_id": 30,
"text": "I_2"
},
{
"math_id": 31,
"text": "|I_1|<|I_2|"
},
{
"math_id": 32,
"text": "e\\in I_2\\setminus I_1"
},
{
"math_id": 33,
"text": "I_1\\cup e"
},
{
"math_id": 34,
"text": "\\epsilon>0"
},
{
"math_id": 35,
"text": "\\tau>0"
},
{
"math_id": 36,
"text": "(1+2\\epsilon)|I_1|+\\tau|E|<|I_2|"
},
{
"math_id": 37,
"text": "I_1\\cup I_2"
},
{
"math_id": 38,
"text": "2"
},
{
"math_id": 39,
"text": "2+2\\epsilon"
},
{
"math_id": 40,
"text": "I_1\\setminus I_2"
},
{
"math_id": 41,
"text": "1+\\epsilon"
},
{
"math_id": 42,
"text": "1+2\\epsilon"
},
{
"math_id": 43,
"text": "I_2\\setminus I_1"
},
{
"math_id": 44,
"text": "1"
},
{
"math_id": 45,
"text": "0"
},
{
"math_id": 46,
"text": "\\tau"
},
{
"math_id": 47,
"text": "(1+2\\epsilon)|I_1|+\\tau|E|+|I_1\\cup I_2|"
}
] | https://en.wikipedia.org/wiki?curid=9401560 |
9402045 | Pinwheel tiling | Non-periodic tiling in geometry
In geometry, pinwheel tilings are non-periodic tilings defined by Charles Radin and based on a construction due to John Conway.
They are the first known non-periodic tilings to each have the property that their tiles appear in infinitely many orientations.
Definition.
Let formula_0 be the right triangle with side lengths formula_1, formula_2 and formula_3.
Conway noticed that formula_0 can be divided into five isometric copies of its image by the dilation of factor formula_4.
The pinwheel tiling is obtained by repeatedly inflating formula_0 by a factor of formula_3 and then subdividing each tile in this manner. Conversely, the tiles of the pinwheel tiling can be grouped into groups of five that form a larger pinwheel tiling. In this tiling, isometric copies of formula_0 appear in infinitely many orientations because the small angle of formula_0, formula_5, is not a rational multiple of formula_6. Radin found a collection of five prototiles, each of which is a marking of formula_0, so that the matching rules on these tiles and their reflections enforce the pinwheel tiling. All of the vertices have rational coordinates, and tile orientations are uniformly distributed around the circle.
Generalizations.
Radin and Conway proposed a three-dimensional analogue which was dubbed the quaquaversal tiling. There are other variants and generalizations of the original idea.
One gets a fractal by iteratively dividing formula_0 into five isometric copies, following the Conway construction, and discarding the middle triangle ("ad infinitum"). This "pinwheel fractal" has Hausdorff dimension formula_7.
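The dimension is easy to check numerically; the following Python lines are only an illustrative sketch of the arithmetic, not part of the construction itself.
import math

# Each subdivision step keeps 4 of the 5 sub-triangles, each smaller by a factor of sqrt(5),
# so the Hausdorff dimension is ln 4 / ln sqrt(5), i.e. log base 5 of 16.
d = math.log(4) / math.log(math.sqrt(5))
print(d)                 # ~1.7227
print(math.log(16, 5))   # the same value written as log_5(16)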
Use in architecture.
Federation Square, a building complex in Melbourne, Australia, features the pinwheel tiling. In the project, the tiling pattern is used to create the structural sub-framing for the facades, allowing for the facades to be fabricated off-site, in a factory and later erected to form the facades. The pinwheel tiling system was based on the single triangular element, composed of zinc, perforated zinc, sandstone or glass (known as a tile), which was joined to 4 other similar tiles on an aluminum frame, to form a "panel". Five panels were affixed to a galvanized steel frame, forming a "mega-panel", which were then hoisted onto support frames for the facade. The rotational positioning of the tiles gives the facades a more random, uncertain compositional quality, even though the process of its construction is based on pre-fabrication and repetition. The same pinwheel tiling system is used in the development of the structural frame and glazing for the "Atrium" at Federation Square, although in this instance, the pin-wheel grid has been made "3-dimensional" to form a portal frame structure. | [
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": "1"
},
{
"math_id": 2,
"text": "2"
},
{
"math_id": 3,
"text": "\\sqrt{5}"
},
{
"math_id": 4,
"text": "1/\\sqrt{5}"
},
{
"math_id": 5,
"text": "\\arctan\\frac{1}{2}"
},
{
"math_id": 6,
"text": "\\pi"
},
{
"math_id": 7,
"text": " d = \\frac{\\ln 4}{\\ln \\sqrt 5} = \\log_5(16) \\approx 1.7227"
}
] | https://en.wikipedia.org/wiki?curid=9402045 |
940296 | Ground loop (electricity) | Electrical configuration allowing electricity to cross between grounded devices
In an electrical system, a ground loop or earth loop occurs when two points of a circuit are intended to have the same ground reference potential but instead have a different potential between them. This is typically caused when enough current is flowing in the connection between the two ground points to produce a voltage drop and cause two points to be at different potentials. Current may be produced in a circular ground connection (ground loop) by electromagnetic induction.
Ground loops are a major cause of noise, hum, and interference in audio, video, and computer systems. Wiring practices that protect against ground loops include ensuring that all vulnerable signal circuits are referenced to one point as ground. The use of differential signaling can provide rejection of ground-induced interference. Removal of safety ground connections to equipment in an effort to eliminate ground loops also eliminates the protection the safety ground connection is intended to provide.
Description.
A ground loop is caused by the interconnection of electrical devices that results in multiple paths to ground, thereby forming closed conductive loops through the ground connections. A common example is two electrical devices each connected to a mains power outlet by a three-conductor cable and plug containing a protective ground conductor for safety. When signal cables are connected between both devices, the shield of the signal cable is typically connected to the grounded chassis of both devices. This forms a closed loop through the ground conductors of the power cords, which are connected through the building wiring.
In the vicinity of electric power wiring there will always be stray magnetic fields, particularly from utility lines oscillating at 50 or 60 hertz. These ambient magnetic fields passing through the ground loop will induce a current in the loop by electromagnetic induction. The ground loop acts as a single-turn secondary winding of a transformer, the primary being the summation of all current-carrying conductors nearby. The amount of current induced will depend on the magnitude and proximity of nearby currents. The presence of high-power equipment such as industrial motors or transformers can increase the interference. Since the conductors comprising the ground loop usually have very low resistance, often below one ohm, even weak magnetic fields can induce significant currents.
Since the ground conductor of the signal cable linking the two devices is part of the signal path of the cable, the alternating ground current flowing through the cable can introduce electrical interference in the signal. The induced alternating current flowing through the resistance of the cable ground conductor will cause a small AC voltage drop across the cable ground. This is added to the signal applied to the input of the next stage. In audio equipment, the 50 or 60 Hz interference may be heard as a hum in the speakers. In a video system it may cause distortion or syncing problems. In computer cables it can cause slowdowns or failures of data transfer.
Ground loops can also exist within the internal circuits of electronic equipment, as design flaws.
Addition of signal interconnection cables to a system where equipment enclosures are already required to be bonded to ground can create ground loops. Proper design of such a system will satisfy both safety grounding requirements and signal integrity. For this reason, in some large professional installations such as recording studios, it is sometimes the practice to provide two completely separate ground connections to equipment bays. One is the normal safety ground that connects to exposed metalwork, the other is a technical ground for cable screens and the like.
Representative circuit.
The circuit diagram illustrates a simple ground loop. Circuit 1 (left) and circuit 2 (right) share a common path to ground of resistance formula_0. Ideally, this ground conductor would have no resistance (formula_1), yielding no voltage drop across it (formula_2), keeping the connection point between the circuits at a constant ground potential. In that case, the output of circuit 2 is simply formula_3.
However, if this ground conductor has some resistance (formula_4), then it forms a voltage divider with formula_5. As a result, if a current (formula_6) is flowing through formula_0 from circuit 1, then a voltage drop across formula_0 of formula_7 occurs, causing the shared ground connection to no longer be at the actual ground potential. This voltage across the ground conductor is applied to circuit 2 and added to its output:formula_8
Thus the two circuits are no longer isolated from each other and circuit 1 can introduce interference into the output of circuit 2. If circuit 2 is an audio system and circuit 1 has large AC currents flowing in it, the interference may be heard as a 50 or 60 Hz hum in the speakers. Also, both circuits have voltage formula_9 on their grounded parts that may be exposed to contact, possibly presenting a shock hazard. This is true even if circuit 2 is turned off.
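As a rough numerical illustration of the divider formula above, the following Python sketch uses made-up component values (a 0.5 Ω shared ground path, a 10 Ω return resistance in circuit 1, 1 V of interfering signal and a 0.1 V wanted signal); the numbers are assumptions chosen only to show the mechanism.
# Illustrative values only; see the assumptions stated above.
R_G = 0.5    # ohms, shared ground conductor
R_1 = 10.0   # ohms, rest of circuit 1's return path
V_1 = 1.0    # volts, interfering AC signal in circuit 1
V_2 = 0.1    # volts, wanted output of circuit 2

V_G = V_1 * R_G / (R_G + R_1)   # voltage drop across the shared ground conductor
V_out = V_2 - V_G               # circuit 2's output with the ground-loop error added
print(V_G, V_out)               # ~0.048 V of interference riding on a 0.1 V signal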
Although ground loops occur most often in the ground conductors of electrical equipment, similar loops can occur wherever two or more circuits share a common current path, which can cause a similar problematic voltage drop along the conductor if enough current flows.
Common ground loops.
A common type of ground loop is due to faulty interconnections between electronic components, such as laboratory or recording studio equipment, or home component audio, video, and computer systems. This creates inadvertent closed loops in the ground wiring circuit, which can allow stray 50/60 Hz AC current to be induced and flow through the ground conductors of signal cables. The voltage drops in the ground system caused by these currents are added to the signal path, introducing noise and hum into the output. The loops can include the building's utility wiring ground system when more than one component is grounded through the protective earth (third wire) in their power cords.
Ground currents on signal cables.
The symptoms of a ground loop, ground noise and hum in electrical equipment, are caused by current flowing in the ground or shield conductor of a cable. Fig. 1 shows a signal cable "S" linking two electronic components, including the typical line driver and receiver amplifiers "(triangles)". The cable has a ground or shield conductor which is connected to the chassis ground of each component. The driver amplifier in component 1 "(left)" applies signal "V"1 between the signal and ground conductors of the cable. At the destination end "(right)", the signal and ground conductors are connected to a differential amplifier. This produces the signal input to component 2 by subtracting the shield voltage from the signal voltage to eliminate common-mode noise picked up by the cable
formula_10
If a current "I" from a separate source is flowing through the ground conductor, the resistance "R" of the conductor will create a voltage drop along the cable ground of "IR", so the destination end of the ground conductor will be at a different potential than the source end
formula_11
Since the differential amplifier has high impedance, little current flows in the signal wire, so there is no voltage drop across it: formula_12. The ground voltage appears to be in series with the signal voltage "V"1 and adds to it
formula_13
formula_14
If "I" is an AC current this can result in noise added to the signal path in component 2.
Sources of ground current.
The diagrams in this section show a typical ground loop caused by a signal cable "S" connecting two grounded electronic components "C1" and "C2". The loop consists of the signal cable's ground conductor, which is connected through the components' metal chassis to the ground wires "P" in their power cords, which are plugged into outlet grounds which are connected through the building's utility ground wire system "G".
Such loops in the ground path can cause currents in signal cable grounds by two main mechanisms:
Solutions.
The solution to ground loop noise is to break the ground loop, or otherwise prevent the current from flowing. Several approaches are available.
A hazardous technique sometimes used by amateurs is to break the "third wire" ground conductor "P" in one of the component's power cords, by removing the ground pin on the plug, or using a cheater plug. This creates an electric shock hazard by leaving one of the components ungrounded.
Balanced lines.
A more comprehensive solution is to use equipment that employs differential signaling. Ground noise can only get into the signal path in single-ended signaling, in which the ground or shield conductor serves as one side of the signal path. When the signal is sent as a differential signal along a pair of wires, neither of which are connected to ground, any noise from the ground system induced in the signal lines is a common-mode signal, identical in both wires. Since the line receiver at the destination end only responds to differential signals, a difference in voltage between the two lines, the common-mode noise is canceled out. Thus these systems are very immune to electrical noise, including ground noise. Professional and scientific equipment often uses differential signaling with balanced lines.
In low frequency audio and instrumentation systems.
If, for example, a domestic HiFi system has a grounded turntable and a grounded preamplifier connected by a thin screened cable (or cables, in a stereo system) using phono connectors, the cross-section of copper in the cable screen(s) is likely to be less than that of the protective ground conductors for the turntable and the preamplifier. So, when a current is induced in the loop, there will be a voltage drop along the signal ground return. This is directly additive to the wanted signal and will result in objectionable hum. For instance, if a current formula_15 of 1 mA at the local power frequency is induced in the ground loop, and the resistance formula_16 of the screen of the signal cable is 100 mΩ, the voltage drop will be formula_17 = 100 μV. This is a significant fraction of the output voltage of a moving coil pickup cartridge, and imposes an objectionable hum on the cartridge output.
In a more complex situation, such as sound reinforcement systems, public address systems, music instrument amplifiers, recording studio and broadcast studio equipment, there are many signal sources in mains-powered equipment feeding many inputs on other equipment and interconnection may result in hum problems. Attempting to cure these problems by removing the protective ground conductor creates a shock hazard. Solving hum problems must be done in the signal interconnections, and this is done in two main ways, which may be combined.
Isolation.
Isolation is the quickest, quietest and most foolproof method of resolving hum problems. The signal is isolated by a small transformer, such that the source and destination equipment each retain their own protective ground connections, but there is no through connection from one to the other in the signal path. By transformer isolating all unbalanced connections, the unbalanced connections are converted to balanced connections. In analog applications such as audio, the physical limitations of the transformers cause some signal degradation, by limiting bandwidth and adding some distortion.
Balanced interconnection.
Balanced connections see the spurious noise due to ground loop current as common-mode interference while the signal is differential, enabling them to be separated at the destination by circuits having a high common-mode rejection ratio. This rejection can be accomplished with transformers or semiconductor output drivers and line receivers.
With the increasing trend towards digital processing and transmission of audio signals, the full range of isolation by small pulse transformers, optocouplers or fiber optics become more useful. Standard protocols such as S/PDIF, AES3 or TOSLINK are available in relatively inexpensive equipment and allow full isolation, so ground loops need not arise, especially when connecting between audio systems and computers.
In instrumentation systems, the use of differential inputs with high common-mode rejection ratio, to minimize the effects of induced AC signals on the parameter to be measured, is widespread. It may also be possible to introduce narrow notch filters at the power frequency and its lower harmonics; however, this can not be done in audio systems due to the objectionable audible effects on the wanted signal.
In analog video systems.
In analog video, mains hum can be seen as hum bars (bands of slightly different brightness) scrolling vertically up the screen. These are frequently seen with video projectors where the display device has its case grounded via a 3-prong plug, and the other components have a floating ground connected to the CATV coax. In this situation the video cable is grounded at the projector end to the home electrical system, and at the other end to the cable TV's ground, inducing a current through the cable which distorts the picture. The problem is best solved with an isolation transformer in the CATV RF feed, a feature included in some CATV box designs.
Ground loop issues with television coaxial cable can affect any connected audio device such as a receiver. Even if all of the audio and video equipment in, for example, a home theatre system is plugged into the same power outlet, and thus all share the same ground, the coaxial cable entering the TV may be grounded by the cable company to a different point than that of the house's electrical ground creating a ground loop, and causing undesirable mains hum in the system's speakers.
In digital and RF systems.
In digital systems, which commonly transmit data serially (RS-232, RS-485, USB, FireWire, DVI, HDMI etc.) the signal voltage is often much larger than induced power frequency AC on the connecting cable screens. Of those protocols listed, only RS-232 is single-ended with ground return, but it is a large signal, typically + and - 12V, all the others being differential.
Differential signaling must use a balanced line to ensure that the signal does not radiate and that induced noise from a ground loop is a common-mode signal and can be removed at the differential receiver.
Many data communications systems such as Ethernet 10BASE-T, 100BASE-TX and 1000BASE-T, use DC-balanced encoding such as Manchester code. The ground loop(s) which would occur in most installations are avoided by using signal-isolating transformers.
Other systems break the ground loop at data frequencies by fitting small ferrite cores around the connecting cables near each end or just inside the equipment boundary. These form a common-mode choke which inhibits unbalanced current flow, without affecting the differential signal.
Coaxial cables used at radio frequencies may be wound several times through a ferrite core to add a useful amount of common-mode inductance. This limits the flow of unwanted high-frequency common-mode current along the cable shield.
Where no power need be transmitted, only digital data, the use of fiber optics can remove many ground loop problems, and sometimes safety problems too. Optical isolators or optocouplers are frequently used to provide ground loop isolation, and often safety isolation and can help prevent fault propagation.
Internal ground loops in equipment.
Generally, the analog and digital parts of the circuit are in separate areas of the PCB, with their own ground planes to obtain the necessary low inductance grounding and avoid ground bounce. These are tied together at a carefully chosen star point. Where analog-to-digital converters (ADCs) are in use, the star point may have to be at or very close to the ground terminals of the ADC(s). Phase lock loop circuits are particularly vulnerable because the VCO loop filter circuit is working with sub-microvolt signals when the loop is locked, and any disturbance will cause frequency jitter and possible loss of lock.
In circuit design.
Grounding and the potential for ground loops are also important considerations in circuit design. In many circuits, large currents may exist through the ground plane, leading to voltage differences of the ground reference in different parts of the circuit, which can lead to hum and other problems. Techniques exist to avoid ground loops, and otherwise, guarantee good grounding:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
External links.
<templatestyles src="Citation/styles.css"/> | [
{
"math_id": 0,
"text": "\\scriptstyle R_G"
},
{
"math_id": 1,
"text": "\\scriptstyle R_G = 0"
},
{
"math_id": 2,
"text": "\\scriptstyle V_G = 0"
},
{
"math_id": 3,
"text": "\\scriptstyle V_\\text{out} = V_2"
},
{
"math_id": 4,
"text": "\\scriptstyle R_G > 0"
},
{
"math_id": 5,
"text": "\\scriptstyle R_1"
},
{
"math_id": 6,
"text": "\\scriptstyle I_1"
},
{
"math_id": 7,
"text": "\\scriptstyle V_G\\; =\\; I_1 R_G"
},
{
"math_id": 8,
"text": "V_\\text{out} = V_2 - V_G = V_2 - \\frac{R_G}{R_G + R_1}V_1.\\,"
},
{
"math_id": 9,
"text": "\\scriptstyle V_G"
},
{
"math_id": 10,
"text": "V_2 = V_\\text{S2} - V_\\text{G2} \\,"
},
{
"math_id": 11,
"text": "V_\\text{G2} = V_\\text{G1} - IR \\,"
},
{
"math_id": 12,
"text": "V_\\text{S2} = V_\\text{S1} \\,"
},
{
"math_id": 13,
"text": "V_2 = V_\\text{S1} - (V_\\text{G1} - IR)\\,"
},
{
"math_id": 14,
"text": "V_2 = V_1 + IR\\,"
},
{
"math_id": 15,
"text": "I"
},
{
"math_id": 16,
"text": "R"
},
{
"math_id": 17,
"text": "V = I \\cdot R"
}
] | https://en.wikipedia.org/wiki?curid=940296 |
9403144 | Tricorn (mathematics) | Mandelbar Set
In mathematics, the tricorn, sometimes called the Mandelbar set, is a fractal defined in a similar way to the Mandelbrot set, but using the mapping formula_0 instead of formula_1 used for the Mandelbrot set. It was introduced by W. D. Crowe, R. Hasson, P. J. Rippon, and P. E. D. Strain-Clark. John Milnor found tricorn-like sets as a prototypical configuration in the parameter space of real cubic polynomials, and in various other families of rational maps.
The characteristic three-cornered shape created by this fractal repeats with variations at different scales, showing the same sort of self-similarity as the Mandelbrot set. In addition to smaller tricorns, smaller versions of the Mandelbrot set are also contained within the tricorn fractal.
Formal definition.
The tricorn formula_2 is defined by a family of quadratic antiholomorphic polynomials
formula_3
given by
formula_4
where formula_5 is a complex parameter. For each formula_5, one looks at the forward orbit
formula_6
of the critical point formula_7 of the antiholomorphic polynomial formula_8. In analogy with the Mandelbrot set, the tricorn is defined as the set of all parameters formula_5 for which the forward orbit of the critical point is bounded. This is equivalent to saying that the tricorn is the connectedness locus of the family of quadratic antiholomorphic polynomials; i.e. the set of all parameters formula_5 for which the Julia set formula_9 is connected.
The higher degree analogues of the tricorn are known as the multicorns. These are the connectedness loci of the family of antiholomorphic polynomials formula_10.
Image gallery of various zooms.
Much like the Mandelbrot set, the tricorn has many complex and intricate designs. Due to their similarity, they share many features. However, in the tricorn such features appear to be squeezed and stretched along its boundary.
The following images are progressive zooms on a selected formula_5 value where formula_11. The images are not stretched or altered; that is how they look under magnification.
Implementation.
The below pseudocode implementation hardcodes the complex operations for Z. Consider implementing complex number operations to allow for more dynamic and reusable code.
For each pixel (x, y) on the screen, do:
    x = scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.5, 1))
    y = scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1, 1))
    zx = x; // zx represents the real part of z
    zy = y; // zy represents the imaginary part of z
    iteration = 0
    max_iteration = 1000
    while (zx*zx + zy*zy < 4 AND iteration < max_iteration)
        xtemp = zx*zx - zy*zy + x
        zy = -2*zx*zy + y
        zx = xtemp
        iteration = iteration + 1
    if (iteration == max_iteration) // Belongs to the set
        return insideColor;
    return iteration * color;
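A runnable Python version of the same escape-time loop is sketched below; the image size, viewing window and grey-scale colouring are illustrative choices, not part of the definition of the set.
# Escape-time rendering of the tricorn, iterating z -> conj(z)^2 + c.
def tricorn_image(width=400, height=400, max_iteration=1000,
                  re_min=-2.5, re_max=1.5, im_min=-2.0, im_max=2.0):
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            cx = re_min + (re_max - re_min) * i / (width - 1)
            cy = im_min + (im_max - im_min) * j / (height - 1)
            zx, zy = cx, cy          # start at c, one step past the critical point 0
            iteration = 0
            while zx * zx + zy * zy < 4 and iteration < max_iteration:
                # conj(z)^2 + c, written out in real and imaginary parts
                zx, zy = zx * zx - zy * zy + cx, -2 * zx * zy + cy
                iteration += 1
            # 0 marks points that never escaped (inside the set)
            row.append(0 if iteration == max_iteration else 255 * iteration // max_iteration)
        image.append(row)
    return image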
Further topological properties.
The tricorn is not path connected. Hubbard and Schleicher showed that there are hyperbolic components of odd period of the tricorn that cannot be connected to the hyperbolic component of period one by paths. A stronger statement to the effect that no two (non-real) odd period hyperbolic components of the tricorn can be connected by a path was proved by Inou and Mukherjee.
It is well known that every rational parameter ray of the Mandelbrot set lands at a single parameter. On the other hand, the rational parameter rays at odd-periodic (except period one) angles of the tricorn accumulate on arcs of positive length consisting of parabolic parameters.
Moreover, unlike the Mandelbrot set, the dynamically natural straightening map from a baby tricorn to the original tricorn is discontinuous at infinitely many parameters.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "z \\mapsto \\bar{z}^2 + c"
},
{
"math_id": 1,
"text": "z \\mapsto z^2 + c"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "f_c:\\mathbb C\\to\\mathbb C"
},
{
"math_id": 4,
"text": "f_c: z\\mapsto \\bar{z}^2 + c,"
},
{
"math_id": 5,
"text": "c"
},
{
"math_id": 6,
"text": "(0, f_c(0), f_c(f_c(0)), f_c(f_c(f_c(0))), \\ldots)"
},
{
"math_id": 7,
"text": "0"
},
{
"math_id": 8,
"text": "p_c"
},
{
"math_id": 9,
"text": "J(f_c)"
},
{
"math_id": 10,
"text": "f_c: z\\mapsto \\bar{z}^d + c"
},
{
"math_id": 11,
"text": "c = 0.48 + 0.58i"
}
] | https://en.wikipedia.org/wiki?curid=9403144 |
940429 | 1024 (number) | Natural number
1024 is the natural number following 1023 and preceding 1025.
1024 is a power of two: 2^10 (2 to the tenth power). It is the nearest power of two to decimal 1000 and to senary 10000 (decimal 1296). It is the 64th quarter square.
1024 is the smallest number with exactly 11 divisors (but there are smaller numbers with more than 11 divisors; e.g., 60 has 12 divisors) (sequence in the OEIS).
Enumeration of groups.
The number of groups of order 1024 is , up to isomorphism. An earlier calculation gave this number as , but in 2021 this was shown to be in error.
This count is more than 99% of all the isomorphism classes of groups of order less than 2000.
Approximation to 1000.
The neat coincidence that 2^10 is nearly equal to 10^3 provides the basis of a technique of estimating larger powers of 2 in decimal notation. Using 2^(10"a"+"b") ≈ 2^"b" × 10^(3"a") (or 2^"a" ≈ 2^("a" mod 10) × 10^(3⌊"a"/10⌋) if "a" stands for the whole power) is fairly accurate for exponents up to about 100. For exponents up to 300, 3"a" continues to be a good estimate of the number of digits.
For example, 2^53 ≈ 8×10^15. The actual value is closer to 9×10^15.
In the case of larger exponents, the relationship becomes increasingly inaccurate, with errors exceeding an order of magnitude for "a" ≥ 97. For example:
formula_0
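A short Python sketch makes the loss of accuracy easy to see; the helper name is chosen here for illustration and the exponents are arbitrary examples.
# Estimate 2**a as 2**(a % 10) * 10**(3 * (a // 10)) and compare with the exact value.
def estimate_power_of_two(a):
    return 2 ** (a % 10) * 10 ** (3 * (a // 10))

for a in (53, 100, 300, 1000):
    ratio = estimate_power_of_two(a) / 2 ** a
    print(a, round(ratio, 4))
# The ratio drifts from about 0.89 at a = 53 down to roughly 0.09 at a = 1000,
# matching the factor of about 10.72 computed above.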
In measuring bytes, 1024 is often used in place of 1000 as the ratio between successive units such as the byte, kilobyte, and megabyte. In 1999, the IEC coined the term kibibyte for multiples of 1024, with kilobyte being used for multiples of 1000.
Special use in computers.
In binary notation, 1024 is represented as 10000000000, making it a simple round number occurring frequently in computer applications.
1024 is the maximum number of computer memory addresses that can be referenced with ten binary switches. This is the origin of the organization of computer memory into 1024-byte chunks or kibibytes.
In the Rich Text Format (RTF), language code 1024 indicates the text is not in any language and should be skipped over when proofing. Most used languages codes in RTF are integers slightly over 1024.
1024×768 pixels and 1280×1024 pixels are common standards of display resolution.
1024 is the lowest non-system and non-reserved port number in TCP/IP networking. Ports above this number can usually be opened for listening by non-superusers.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n\\frac{2^{1000}}{10^{300}}\n&= \\exp \\left( \\ln \\left( \\frac{2^{1000}}{10^{300}} \\right) \\right) \\\\\n&= \\exp \\left( \\ln \\left( 2^{1000}\\right) - \\ln\\left(10^{300}\\right)\\right)\\\\\n&\\approx \\exp\\left(693.147-690.776\\right)\\\\\n&\\approx \\exp\\left(2.372\\right)\\\\\n&\\approx 10.72\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=940429 |
940606 | Population growth | Increase in the number of individuals in a population
Population growth is the increase in the number of people in a population or dispersed group. Actual global human population growth amounts to around 83 million annually, or 1.1% per year. The global population has grown from 1 billion in 1800 to 8.1 billion in 2024. The UN projected population to keep growing, and estimates have put the total population at 8.6 billion by mid-2030, 9.8 billion by mid-2050 and 11.2 billion by 2100. However, some academics outside the UN have increasingly developed human population models that account for additional downward pressures on population growth; in such a scenario population would peak before 2100. Others have challenged many recent population projections as having underestimated population growth.
The world human population has been growing since the end of the Black Death, around the year 1350. A mix of technological advancement that improved agricultural productivity and sanitation and medical advancement that reduced mortality increased population growth. In some geographies, this has slowed through the process called the demographic transition, where many nations with high standards of living have seen a significant slowing of population growth. This is in direct contrast with less developed contexts, where population growth is still happening. Globally, the rate of population growth has declined from a peak of 2.2% per year in 1963.
Population growth alongside increased consumption is a driver of environmental concerns, such as biodiversity loss and climate change, due to overexploitation of natural resources for human development. International policy focused on mitigating the impact of human population growth is concentrated in the Sustainable Development Goals which seeks to improve the standard of living globally while reducing the impact of society on the environment while advancing human well-being.
History.
World population has been rising continuously since the end of the Black Death, around the year 1350. Population began growing rapidly in the Western world during the industrial revolution. The most significant increase in the world's population has been since the 1950s, mainly due to medical advancements and increases in agricultural productivity.
Haber process.
Due to its dramatic impact on the human ability to grow food, the Haber process, named after one of its inventors, the German chemist Fritz Haber, served as the "detonator of the population explosion", enabling the global population to increase from 1.6 billion in 1900 to 7.7 billion by November 2019.
Thomas McKeown hypotheses.
Some of the reasons for the "Modern Rise of Population" were particularly investigated by the British health scientist Thomas McKeown (1912–1988). In his publications, McKeown challenged four theories about the population growth:
Although the McKeown thesis has been heavily disputed, recent studies have confirmed the value of his ideas. His work is pivotal for present day thinking about population growth, birth control, public health and medical care. McKeown had a major influence on many population researchers, such as health economists and Nobel prize winners Robert W. Fogel (1993) and Angus Deaton (2015). The latter considered McKeown as "the founder of social medicine".
Growth rate models.
The "population growth rate" is the rate at which the number of individuals in a population increases in a given time period, expressed as a fraction of the initial population. Specifically, population growth rate refers to the change in population over a unit time period, often expressed as a percentage of the number of individuals in the population at the beginning of that period. This can be written as the formula, valid for a sufficiently small time interval:
formula_0
A positive growth rate indicates that the population is increasing, while a negative growth rate indicates that the population is decreasing. A growth rate of zero indicates that there were the same number of individuals at the beginning and end of the period; a growth rate may be zero even when there are significant changes in the birth rates, death rates, immigration rates, and age distribution between the two times.
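A minimal Python sketch of the rate formula, using invented population figures purely for illustration:
# Average growth rate per unit time, as a fraction of the initial population.
def population_growth_rate(p1, p2, t1, t2):
    return (p2 - p1) / (p1 * (t2 - t1))

# Hypothetical example: growth from 10.0 to 10.8 million over five years.
rate = population_growth_rate(10.0e6, 10.8e6, 2015, 2020)
print(f"{rate:.3%} per year")   # 1.600% per year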
A related measure is the net reproduction rate. In the absence of migration, a net reproduction rate of more than 1 indicates that the population of females is increasing, while a net reproduction rate less than one (sub-replacement fertility) indicates that the population of females is decreasing.
Most populations do not grow exponentially; rather, they follow a logistic model. Once the population has reached its carrying capacity, it will stabilize and the exponential curve will level off towards the carrying capacity, which is usually when a population has depleted most of its natural resources. The growth of the world human population may be said to have followed a roughly linear trend over the last few decades.
Logistic equation.
The growth of a population can often be modelled by the logistic equation
formula_1
where formula_2 is the population at time formula_3, formula_4 is the growth rate, and formula_5 is the carrying capacity.
As this is a separable differential equation, it may be solved explicitly for the population, producing a logistic function:
formula_6,
where formula_7 and formula_8 is the initial population at time 0.
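The closed-form solution is easy to evaluate; in the Python sketch below the initial population, growth rate and carrying capacity are arbitrary values chosen only to show the characteristic S-shape.
import math

# Logistic growth P(t) = K / (1 + A * exp(-r t)) with A = (K - P0) / P0.
def logistic_population(t, p0=1.0e6, r=0.03, k=10.0e6):
    a = (k - p0) / p0
    return k / (1.0 + a * math.exp(-r * t))

for t in (0, 50, 100, 200, 400):
    print(t, round(logistic_population(t)))
# Growth is roughly exponential at first, then levels off towards the carrying capacity K.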
Population growth rate.
The world population growth rate peaked in 1963 at 2.2% per year and subsequently declined. In 2017, the estimated annual growth rate was 1.1%. The CIA World Factbook gives the world annual birthrate, mortality rate, and growth rate as 1.86%, 0.78%, and 1.08% respectively. The last 100 years have seen a massive fourfold increase in the population, due to medical advances, lower mortality rates, and an increase in agricultural productivity made possible by the Green Revolution.
The annual increase in the number of living humans peaked at 88.0 million in 1989, then slowly declined to 73.9 million in 2003, after which it rose again to 75.2 million in 2006. In 2017, the human population increased by 83 million. Generally, developed nations have seen a decline in their growth rates in recent decades, though annual growth rates remain above 2% in some countries of the Middle East and Sub-Saharan Africa, and also in South Asia, Southeast Asia, and Latin America.
In some countries the population is declining, especially in Eastern Europe, mainly due to low fertility rates, high death rates and emigration. In Southern Africa, growth is slowing due to the high number of AIDS-related deaths. Some Western European countries might also experience population decline. Japan's population began decreasing in 2005.
The United Nations Population Division projects world population to reach 11.2 billion by the end of the 21st century.
The Institute for Health Metrics and Evaluation projects that the global population will peak in 2064 at 9.73 billion and decline to 8.89 billion in 2100.
A 2014 study in "Science" concludes that the global population will reach 11 billion by 2100, with a 70% chance of continued growth into the 22nd century. The German Foundation for World Population reported in December 2019 that the global human population grows by 2.6 people every second, and could reach 8 billion by 2023.
Growth by country.
According to United Nations population statistics, the world population grew by 30%, or 1.6 billion humans, between 1990 and 2010. In number of people the increase was highest in India (350 million) and China (196 million). Population growth rate was among highest in the United Arab Emirates (315%) and Qatar (271%).
Many of the world's countries, including many in Sub-Saharan Africa, the Middle East, South Asia and South East Asia, have seen a sharp rise in population since the end of the Cold War. The fear is that high population numbers are putting further strain on natural resources, food supplies, fuel supplies, employment, housing, etc. in some of the less fortunate countries. For example, the population of Chad has ultimately grown from 6,279,921 in 1993 to 10,329,208 in 2009, further straining its resources. Vietnam, Mexico, Nigeria, Egypt, Ethiopia, and the DRC are witnessing a similar growth in population.
The following table gives some example countries or territories:
* Eritrea left Ethiopia in 1991.
† Split into the nations of Sudan and South Sudan during 2011.
‡ Japan and the Ryukyu Islands merged in 1972.
# India and Sikkim merged in 1975.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Population\\ growth\\ rate = \\frac{ P(t_2) - P(t_1)} {P(t_1)(t_2-t_1)}"
},
{
"math_id": 1,
"text": "\\frac{dP}{dt}=rP\\left(1-\\frac{P}{K}\\right),"
},
{
"math_id": 2,
"text": "P(t)"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": "r"
},
{
"math_id": 5,
"text": "K"
},
{
"math_id": 6,
"text": "P(t)=\\frac{K}{1+Ae^{-rt}}"
},
{
"math_id": 7,
"text": "A=\\frac{K-P_0}{P_0}"
},
{
"math_id": 8,
"text": "P_0"
}
] | https://en.wikipedia.org/wiki?curid=940606 |
9409943 | String duality | String duality is a class of symmetries in physics that link different string theories, theories which assume that the fundamental building blocks of the universe are strings instead of point particles.
Overview.
Before the so-called "duality revolution" there were believed to be five distinct versions of string theory, plus the (unstable) bosonic and gluonic theories.
Note that in the type IIA and type IIB string theories closed strings are allowed to move everywhere throughout the ten-dimensional space-time (called the "bulk"), while open strings have their ends attached to D-branes, which are membranes of lower dimensionality (their dimension is odd - 1,3,5,7 or 9 - in type IIA and even - 0,2,4,6 or 8 - in type IIB, including the time direction).
Before the 1990s, string theorists believed there were five distinct superstring theories: type I, types IIA and IIB, and the two heterotic string theories (SO(32) and "E"8×"E"8). The thinking was that out of these five candidate theories, only one was the actual theory of everything, and that theory was the theory whose low energy limit, with ten dimensions spacetime compactified down to four, matched the physics observed in our world today. It is now known that the five superstring theories are not fundamental, but are instead different limits of a more fundamental theory, dubbed M-theory. These theories are related by transformations called dualities. If two theories are related by a duality transformation, each observable of the first theory can be mapped in some way to the second theory to yield equivalent predictions. The two theories are then said to be dual to one another under that transformation. Put differently, the two theories are two mathematically different descriptions of the same phenomena. A simple example of a duality is the equivalence of particle physics upon replacing matter with antimatter; describing our universe in terms of anti-particles would yield identical predictions for any possible experiment.
String dualities often link quantities that appear to be separate: Large and small distance scales, strong and weak coupling strengths. These quantities have always marked very distinct limits of behavior of a physical system, in both classical field theory and quantum particle physics. But strings can obscure the difference between large and small, strong and weak, and this is how these five very different theories end up being related.
T-duality.
Suppose we are in ten spacetime dimensions, which means we have nine space dimensions and one time. Take one of those nine space dimensions and make it a circle of radius R, so that traveling in that direction for a distance L = 2πR takes you around the circle and brings you back to where you started. A particle traveling around this circle will have a quantized momentum around the circle, because its momentum is linked to its wavelength (see wave–particle duality), and 2πR must be a multiple of that. In fact, the particle momentum around the circle - and the contribution to its energy - is of the form n/R (in standard units, for an integer n), so that at large R there will be many more states compared to small R (for a given maximum energy). A string, in addition to traveling around the circle, may also wrap around it. The number of times the string winds around the circle is called the winding number, and that is also quantized (as it must be an integer). Winding around the circle requires energy, because the string must be stretched against its tension, so it contributes an amount of energy of the form formula_0, where formula_1 is a constant called the "string length" and w is the winding number (an integer). Now (for a given maximum energy) there will be many different states (with different momenta) at large R, but there will also be many different states (with different windings) at small R. In fact, a theory with large R and a theory with small R are equivalent, where the role of momentum in the first is played by the winding in the second, and vice versa. Mathematically, taking R to formula_2 and switching n and w will yield the same equations. So exchanging momentum and winding modes of the string exchanges a large distance scale with a small distance scale.
This type of duality is called T-duality. T-duality relates type IIA superstring theory to type IIB superstring theory. That means if we take type IIA and Type IIB theory and compactify them both on a circle (one with a large radius and the other with a small radius) then switching the momentum and winding modes, and switching the distance scale, changes one theory into the other. The same is also true for the two heterotic theories. T-duality also relates type I superstring theory to both type IIA and type IIB superstring theories with certain boundary conditions (termed orientifold).
Formally, the location of the string on the circle is described by two fields living on it, one which is left-moving and another which is right-moving. The movement of the string center (and hence its momentum) is related to the sum of the fields, while the string stretch (and hence its winding number) is related to their difference. T-duality can be formally described by taking the left-moving field to minus itself, so that the sum and the difference are interchanged, leading to switching of momentum and winding.
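As a toy numerical illustration of the momentum/winding exchange described above (with the string length set to 1 and keeping only the n/R and wR contributions, so this is a sketch of the bookkeeping rather than a full spectrum computation), the following Python lines check that the set of values is unchanged under R → 1/R:
# Each state (n, w) contributes n / R (momentum) plus w * R (winding), in string units.
def contributions(radius, n_max=3, w_max=3):
    return sorted(n / radius + w * radius
                  for n in range(n_max + 1) for w in range(w_max + 1))

R = 2.5
large = contributions(R)
small = contributions(1.0 / R)   # swapping n and w is implicit in comparing the sorted sets
print(all(abs(x - y) < 1e-9 for x, y in zip(large, small)))   # True: the two spectra coincide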
S-duality.
Every force has a coupling constant, which is a measure of its strength, and determines the chances of one particle to emit or absorb another particle. For electromagnetism, the coupling constant is proportional to the square of the electric charge. When physicists study the quantum behavior of electromagnetism, they can't solve the whole theory exactly, because every particle may emit and absorb many other particles, which may also do the same, endlessly. So events of emission and absorption are considered as perturbations and are dealt with by a series of approximations, first assuming there is only one such event, then correcting the result for allowing two such events, etc. (this method is called Perturbation theory). This is a reasonable approximation only if the coupling constant is small, which is the case for electromagnetism. But if the coupling constant gets large, that method of calculation breaks down, and the little pieces become worthless as an approximation to the real physics.
This also can happen in string theory. String theories have a coupling constant. But unlike in particle theories, the string coupling constant is not just a number, but depends on one of the oscillation modes of the string, called the dilaton. Exchanging the dilaton field with minus itself exchanges a very large coupling constant with a very small one. This symmetry is called S-duality. If two string theories are related by S-duality, then one theory with a strong coupling constant is the same as the other theory with weak coupling constant. The theory with strong coupling cannot be understood by means of perturbation theory, but the theory with weak coupling can. So if the two theories are related by S-duality, then we just need to understand the weak theory, and that is equivalent to understanding the strong theory.
Superstring theories related by S-duality are: type I superstring theory with heterotic SO(32) superstring theory, and type IIB theory with itself.
Furthermore, type IIA theory in strong coupling behaves like an 11-dimensional theory, with the dilaton field playing the role of an eleventh dimension. This 11-dimensional theory is known as M-theory.
Unlike T-duality, however, S-duality has not been proven even to a physics level of rigor for any of the aforementioned cases. It remains, strictly speaking, a conjecture, although most string theorists believe in its validity.
{
"math_id": 0,
"text": "wR/L_{st}^2"
},
{
"math_id": 1,
"text": "L_{st}"
},
{
"math_id": 2,
"text": "L_{st}^2/R"
}
] | https://en.wikipedia.org/wiki?curid=9409943 |
94102 | Solid angle | Measure of how large an object appears to an observer at a given point in three-dimensional space
In geometry, a solid angle (symbol: Ω) is a measure of the amount of the field of view from some particular point that a given object covers. That is, it is a measure of how large the object appears to an observer looking from that point.
The point from which the object is viewed is called the "apex" of the solid angle, and the object is said to "subtend" its solid angle at that point.
In the International System of Units (SI), a solid angle is expressed in a dimensionless unit called a "steradian" (symbol: sr). One steradian corresponds to one unit of area on the unit sphere surrounding the apex, so an object that blocks all rays from the apex would cover a number of steradians equal to the total surface area of the unit sphere, formula_0. Solid angles can also be measured in squares of angular measures such as degrees, minutes, and seconds.
A small object nearby may subtend the same solid angle as a larger object farther away. For example, although the Moon is much smaller than the Sun, it is also much closer to Earth. Indeed, as viewed from any point on Earth, both objects have approximately the same solid angle (and therefore apparent size). This is evident during a solar eclipse.
Definition and properties.
An object's solid angle in steradians is equal to the area of the segment of a unit sphere, centered at the apex, that the object covers. Giving the area of a segment of a unit sphere in steradians is analogous to giving the length of an arc of a unit circle in radians. Just like a planar angle in radians is the ratio of the length of an arc to its radius, a solid angle in steradians is the ratio of the area covered on a sphere by an object to the area given by the square of the radius of the sphere. The formula is
formula_1
where formula_2 is the spherical surface area and formula_3 is the radius of the considered sphere.
Solid angles are often used in astronomy, physics, and in particular astrophysics. The solid angle of an object that is very far away is roughly proportional to the ratio of area to squared distance. Here "area" means the area of the object when projected along the viewing direction.
The solid angle of a sphere measured from any point in its interior is 4π sr, and the solid angle subtended at the center of a cube by one of its faces is one-sixth of that, or 2π/3 sr. Solid angles can also be measured in square degrees (1 sr = (180/π)² ≈ 3282.8 square degrees), in square arc-minutes and square arc-seconds, or in fractions of the sphere (1 sr = 1/(4π) ≈ 0.0796 fractional area), also known as spat (1 sp = 4π sr).
In spherical coordinates there is a formula for the differential,
formula_4
where θ is the colatitude (angle from the North Pole) and φ is the longitude.
The solid angle for an arbitrary oriented surface S subtended at a point P is equal to the solid angle of the projection of the surface S to the unit sphere with center P, which can be calculated as the surface integral:
formula_5
where formula_6 is the unit vector corresponding to formula_7, the position vector of an infinitesimal area of surface "dS" with respect to point P, and where formula_8 represents the unit normal vector to "dS". Even if the projection of the surface S onto the unit sphere is not one-to-one, the multiple folds are correctly accounted for according to the surface orientation described by the sign of the scalar product formula_9.
Thus one can approximate the solid angle subtended by a small facet having flat surface area "dS", orientation formula_10, and distance "r" from the viewer as:
formula_11
where the surface area of a sphere is "A" = 4π"r"².
Solid angles for common objects.
Cone, spherical cap, hemisphere.
The solid angle of a cone with its apex at the apex of the solid angle, and with apex angle 2"θ", is the area of a spherical cap on a unit sphere
formula_12
For small "θ" such that cos "θ" ≈ 1 − "θ"2/2 this reduces to π"θ"2, the area of a circle.
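A brief Python check of the cap formula and its small-angle limit (the sample angles are arbitrary):
import math

# Solid angle of a cone of half-angle theta, and the small-angle approximation pi * theta^2.
def cap_solid_angle(theta):
    return 2.0 * math.pi * (1.0 - math.cos(theta))

for theta in (0.01, 0.1, 0.5, math.pi / 2):
    print(round(theta, 3), cap_solid_angle(theta), math.pi * theta ** 2)
# At theta = pi/2 the cap is a hemisphere and the exact value is 2*pi ~ 6.283.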
The above is found by computing the following double integral using the unit surface element in spherical coordinates:
formula_13
This formula can also be derived without the use of calculus.
Over 2200 years ago Archimedes proved that the surface area of a spherical cap is always equal to the area of a circle whose radius equals the distance from the rim of the spherical cap to the point where the cap's axis of symmetry intersects the cap. In the above coloured diagram this radius is given as
formula_14
In the adjacent black & white diagram this radius is given as "t".
Hence for a unit sphere the solid angle of the spherical cap is given as
formula_15
When "θ" = , the spherical cap becomes a hemisphere having a solid angle 2π.
The solid angle of the complement of the cone is
formula_16
This is also the solid angle of the part of the celestial sphere that an astronomical observer positioned at latitude "θ" can see as the Earth rotates. At the equator all of the celestial sphere is visible; at either pole, only one half.
The solid angle subtended by a segment of a spherical cap cut by a plane at angle "γ" from the cone's axis and passing through the cone's apex can be calculated by the formula
formula_17
For example, if "γ" = −"θ", then the formula reduces to the spherical cap formula above: the first term becomes π, and the second .
Tetrahedron.
Let OABC be the vertices of a tetrahedron with an origin at O subtended by the triangular face ABC where formula_18 are the vector positions of the vertices A, B and C. Define the vertex angle θa to be the angle BOC and define θb, θc correspondingly. Let formula_19 be the dihedral angle between the planes that contain the tetrahedral faces OAC and OBC and define formula_20, formula_21 correspondingly. The solid angle Ω subtended by the triangular surface ABC is given by
formula_22
This follows from the theory of spherical excess and it leads to the fact that there is an analogous theorem to the theorem that ""The sum of internal angles of a planar triangle is equal to π"", for the sum of the four internal solid angles of a tetrahedron as follows:
formula_23
where formula_24 ranges over all six of the dihedral angles between any two planes that contain the tetrahedral faces OAB, OAC, OBC and ABC.
A useful formula for calculating the solid angle of the tetrahedron at the origin O that is purely a function of the vertex angles θa, θb, θc is given by L'Huilier's theorem as
formula_25
where
formula_26
Another interesting formula involves expressing the vertices as vectors in 3 dimensional space. Let formula_18 be the vector positions of the vertices A, B and C, and let a, b, and c be the magnitude of each vector (the origin-point distance). The solid angle Ω subtended by the triangular surface ABC is:
formula_27
where
formula_28
denotes the scalar triple product of the three vectors and formula_29 denotes the scalar product.
Care must be taken here to avoid negative or incorrect solid angles. One source of potential errors is that the scalar triple product can be negative if a, b, c have the wrong winding. Computing the absolute value is a sufficient solution since no other portion of the equation depends on the winding. The other pitfall arises when the scalar triple product is positive but the divisor is negative. In this case the arctangent returns a negative value that must be increased by π.
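Both pitfalls can be avoided by handing the numerator and denominator to a two-argument arctangent, as in the following Python sketch (the function name is chosen here for illustration; the vertex vectors are taken relative to the apex O):
import math

def triangle_solid_angle(a, b, c):
    """Solid angle at the origin subtended by the triangle with vertex vectors a, b, c."""
    ax, ay, az = a
    bx, by, bz = b
    cx, cy, cz = c
    # Scalar triple product a . (b x c); the absolute value avoids winding-sign issues.
    triple = (ax * (by * cz - bz * cy)
              + ay * (bz * cx - bx * cz)
              + az * (bx * cy - by * cx))
    la = math.sqrt(ax * ax + ay * ay + az * az)
    lb = math.sqrt(bx * bx + by * by + bz * bz)
    lc = math.sqrt(cx * cx + cy * cy + cz * cz)
    denom = (la * lb * lc
             + (ax * bx + ay * by + az * bz) * lc
             + (ax * cx + ay * cy + az * cz) * lb
             + (bx * cx + by * cy + bz * cz) * la)
    # atan2 places the half-angle in the correct quadrant, so a negative denominator
    # is handled without the manual "+ pi" correction mentioned in the text.
    return 2.0 * math.atan2(abs(triple), denom)

# One octant of the unit sphere: the three coordinate axes subtend 4*pi/8 = pi/2.
print(triangle_solid_angle((1, 0, 0), (0, 1, 0), (0, 0, 1)))   # ~1.5708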
Pyramid.
The solid angle of a four-sided right rectangular pyramid with apex angles a and b (dihedral angles measured to the opposite side faces of the pyramid) is
formula_30
If both the side lengths ("α" and "β") of the base of the pyramid and the distance ("d") from the center of the base rectangle to the apex of the pyramid (the center of the sphere) are known, then the above equation can be manipulated to give
formula_31
The solid angle of a right n-gonal pyramid, where the pyramid base is a regular n-sided polygon of circumradius r, with a pyramid height h is
formula_32
The solid angle of an arbitrary pyramid with an "n"-sided base defined by the sequence of unit vectors representing edges {"s"1, "s"2, ..., "s""n"} can be efficiently computed by:
formula_33
where parentheses (* *) is a scalar product and square brackets [* * *] is a scalar triple product, and i is an imaginary unit. Indices are cycled: "s"0 = "s""n" and "s"1 = "s""n" + 1. The complex products add the phase associated with each vertex angle of the polygon. However, a multiple of
formula_34 is lost in the branch cut of formula_35 and must be kept track of separately. Also, the running product of complex phases must be scaled occasionally to avoid underflow in the limit of nearly parallel segments.
Latitude-longitude rectangle.
The solid angle of a latitude-longitude rectangle on a globe is
formula_36
where "φ"N and "φ"S are north and south lines of latitude (measured from the equator in radians with angle increasing northward), and "θ"E and "θ"W are east and west lines of longitude (where the angle in radians increases eastward). Mathematically, this represents an arc of angle "ϕ"N − "ϕ"S swept around a sphere by "θ"E − "θ"W radians. When longitude spans 2π radians and latitude spans π radians, the solid angle is that of a sphere.
A latitude-longitude rectangle should not be confused with the solid angle of a rectangular pyramid. All four sides of a rectangular pyramid intersect the sphere's surface in great circle arcs. With a latitude-longitude rectangle, only lines of longitude are great circle arcs; lines of latitude are not.
Celestial objects.
By using the definition of angular diameter, the formula for the solid angle of a celestial object can be defined in terms of the radius of the object, formula_37, and the distance from the observer to the object, formula_38:
formula_39
By inputting the appropriate average values for the Sun and the Moon (in relation to Earth), the average solid angle of the Sun is steradians and the average solid angle of the Moon is steradians. In terms of the total celestial sphere, the Sun and the Moon subtend average "fractional areas" of % () and % (), respectively. As these solid angles are about the same size, the Moon can cause both total and annular solar eclipses depending on the distance between the Earth and the Moon during the eclipse.
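With round average figures for the radii and distances (approximate values supplied here for illustration, not taken from the article), a Python sketch gives the orders of magnitude:
import math

# Solid angle of a sphere of radius R seen from a distance d >= R.
def celestial_solid_angle(R, d):
    return 2.0 * math.pi * (1.0 - math.sqrt(d * d - R * R) / d)

# Rough mean values in kilometres: Sun radius/distance, Moon radius/distance.
sun = celestial_solid_angle(696_000.0, 149_600_000.0)
moon = celestial_solid_angle(1_737.0, 384_400.0)
print(f"Sun:  {sun:.2e} sr, fraction of the sphere {sun / (4 * math.pi):.1e}")
print(f"Moon: {moon:.2e} sr, fraction of the sphere {moon / (4 * math.pi):.1e}")
# Both come out in the range of roughly 6e-5 to 7e-5 sr, which is why both total
# and annular solar eclipses are possible.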
Solid angles in arbitrary dimensions.
The solid angle subtended by the complete (d − 1)-dimensional spherical surface of the unit sphere in "d"-dimensional Euclidean space can be defined in any number of dimensions "d". One often needs this solid angle factor in calculations with spherical symmetry. It is given by the formula
formula_40
where Γ is the gamma function. When "d" is an integer, the gamma function can be computed explicitly. It follows that
formula_41
This gives the expected results of 4π steradians for the 3D sphere bounded by a surface of area 4π"r"² and 2π radians for the 2D circle bounded by a circumference of length 2π"r". It also gives the slightly less obvious 2 for the 1D case, in which the origin-centered 1D "sphere" is the interval [−"r", "r"] and this is bounded by two limiting points.
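The general formula is easy to check numerically with the gamma function; a short Python sketch:
import math

# Total solid angle of the unit (d-1)-sphere in d-dimensional Euclidean space.
def total_solid_angle(d):
    return 2.0 * math.pi ** (d / 2.0) / math.gamma(d / 2.0)

for d in (1, 2, 3, 4):
    print(d, total_solid_angle(d))
# d = 1 gives 2 (the two endpoints), d = 2 gives 2*pi, d = 3 gives 4*pi, d = 4 gives 2*pi**2.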
The counterpart to the vector formula in arbitrary dimension was derived by Aomoto
and independently by Ribando. It expresses them as an infinite multivariate Taylor series:
formula_42
Given d unit vectors formula_43 defining the angle, let V denote the matrix formed by combining them so the ith column is formula_43, and formula_44. The variables formula_45 form a multivariable formula_46. For a "congruent" integer multiexponent formula_47 define formula_48. Note that here formula_49 = non-negative integers, or natural numbers beginning with 0. The notation formula_50 for formula_51 means the variable formula_52, similarly for the exponents formula_53.
Hence, the term formula_54 means the sum over all terms in formula_55 in which l appears as either the first or second index.
Where this series converges, it converges to the solid angle defined by the vectors.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "4\\pi"
},
{
"math_id": 1,
"text": "\\Omega=\\frac{A}{r^2},"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "r"
},
{
"math_id": 4,
"text": "d\\Omega = \\sin\\theta\\,d\\theta\\,d\\varphi,"
},
{
"math_id": 5,
"text": "\\Omega = \\iint_S \\frac{ \\hat{r} \\cdot \\hat{n}}{r^2}\\,dS \\ = \\iint_S \\sin\\theta\\,d\\theta\\,d\\varphi,"
},
{
"math_id": 6,
"text": "\\hat{r} = \\vec{r} / r"
},
{
"math_id": 7,
"text": " \\vec{r} "
},
{
"math_id": 8,
"text": " \\hat{n} "
},
{
"math_id": 9,
"text": "\\hat{r} \\cdot \\hat{n}"
},
{
"math_id": 10,
"text": "\\hat{n}"
},
{
"math_id": 11,
"text": "d\\Omega = 4 \\pi \\left(\\frac{dS}{A}\\right) \\, (\\hat{r} \\cdot \\hat{n}),"
},
{
"math_id": 12,
"text": "\\Omega = 2\\pi \\left (1 - \\cos\\theta \\right)\\ = 4\\pi \\sin^2 \\frac{\\theta}{2}."
},
{
"math_id": 13,
"text": "\\begin{align}\n\\int_0^{2\\pi} \\int_0^\\theta \\sin\\theta' \\, d \\theta' \\, d \\phi\n&= \\int_0^{2\\pi} d \\phi\\int_0^\\theta \\sin\\theta' \\, d \\theta' \\\\\n&= 2\\pi\\int_0^\\theta \\sin\\theta' \\, d \\theta' \\\\\n&= 2\\pi\\left[ -\\cos\\theta' \\right]_0^{\\theta} \\\\\n&= 2\\pi\\left(1 - \\cos\\theta \\right).\n\\end{align}"
},
{
"math_id": 14,
"text": " 2r \\sin \\frac{\\theta}{2}. "
},
{
"math_id": 15,
"text": " \\Omega = 4\\pi \\sin^2 \\frac{\\theta}{2} = 2\\pi \\left (1 - \\cos\\theta \\right). "
},
{
"math_id": 16,
"text": "4\\pi - \\Omega = 2\\pi \\left(1 + \\cos\\theta \\right) = 4\\pi\\cos^2 \\frac{\\theta}{2}."
},
{
"math_id": 17,
"text": " \\Omega = 2 \\left[ \\arccos \\left(\\frac{\\sin\\gamma}{\\sin\\theta}\\right) - \\cos\\theta \\arccos\\left(\\frac{\\tan\\gamma}{\\tan\\theta}\\right) \\right]. "
},
{
"math_id": 18,
"text": "\\vec a\\ ,\\, \\vec b\\ ,\\, \\vec c "
},
{
"math_id": 19,
"text": "\\phi_{ab}"
},
{
"math_id": 20,
"text": "\\phi_{ac}"
},
{
"math_id": 21,
"text": "\\phi_{bc}"
},
{
"math_id": 22,
"text": " \\Omega = \\left(\\phi_{ab} + \\phi_{bc} + \\phi_{ac}\\right)\\ - \\pi."
},
{
"math_id": 23,
"text": " \\sum_{i=1}^4 \\Omega_i = 2 \\sum_{i=1}^6 \\phi_i\\ - 4 \\pi,"
},
{
"math_id": 24,
"text": "\\phi_i"
},
{
"math_id": 25,
"text": " \\tan \\left( \\frac{1}{4} \\Omega \\right) =\n \\sqrt{ \\tan \\left( \\frac{\\theta_s}{2}\\right) \\tan \\left( \\frac{\\theta_s - \\theta_a}{2}\\right) \\tan \\left( \\frac{\\theta_s - \\theta_b}{2}\\right) \\tan \\left(\\frac{\\theta_s - \\theta_c}{2}\\right)}, "
},
{
"math_id": 26,
"text": " \\theta_s = \\frac {\\theta_a + \\theta_b + \\theta_c}{2}. "
},
{
"math_id": 27,
"text": "\\tan \\left( \\frac{1}{2} \\Omega \\right) =\n \\frac{\\left|\\vec a\\ \\vec b\\ \\vec c\\right|}{abc + \\left(\\vec a \\cdot \\vec b\\right)c + \\left(\\vec a \\cdot \\vec c\\right)b + \\left(\\vec b \\cdot \\vec c\\right)a},\n"
},
{
"math_id": 28,
"text": "\\left|\\vec a\\ \\vec b\\ \\vec c\\right|=\\vec a \\cdot (\\vec b \\times \\vec c)"
},
{
"math_id": 29,
"text": "\\vec a \\cdot \\vec b"
},
{
"math_id": 30,
"text": "\\Omega = 4 \\arcsin \\left( \\sin \\left({a \\over 2}\\right) \\sin \\left({b \\over 2}\\right) \\right). "
},
{
"math_id": 31,
"text": "\\Omega = 4 \\arctan \\frac {\\alpha\\beta} {2d\\sqrt{4d^2 + \\alpha^2 + \\beta^2}}. "
},
{
"math_id": 32,
"text": "\\Omega = 2\\pi - 2n \\arctan\\left(\\frac {\\tan \\left({\\pi\\over n}\\right)}{\\sqrt{1 + {r^2 \\over h^2}}} \\right). "
},
{
"math_id": 33,
"text": " \\Omega = 2\\pi - \\arg \\prod_{j=1}^{n} \\left(\n \\left( s_{j-1} s_j \\right)\\left( s_{j} s_{j+1} \\right) -\n \\left( s_{j-1} s_{j+1} \\right) +\n i\\left[ s_{j-1} s_j s_{j+1} \\right]\n \\right).\n"
},
{
"math_id": 34,
"text": "2\\pi"
},
{
"math_id": 35,
"text": "\\arg"
},
{
"math_id": 36,
"text": "\\left ( \\sin \\phi_\\mathrm{N} - \\sin \\phi_\\mathrm{S} \\right ) \\left ( \\theta_\\mathrm{E} - \\theta_\\mathrm{W} \\,\\! \\right)\\;\\mathrm{sr},"
},
{
"math_id": 37,
"text": "R"
},
{
"math_id": 38,
"text": "d"
},
{
"math_id": 39,
"text": "\\Omega = 2 \\pi \\left (1 - \\frac{\\sqrt{d^2 - R^2}}{d} \\right ) : d \\geq R."
},
{
"math_id": 40,
"text": "\\Omega_{d} = \\frac{2\\pi^\\frac{d}{2}}{\\Gamma\\left(\\frac{d}{2}\\right)}, "
},
{
"math_id": 41,
"text": "\n \\Omega_{d} = \\begin{cases}\n \\frac{1}{ \\left(\\frac{d}{2} - 1 \\right)!} 2\\pi^\\frac{d}{2}\\ & d\\text{ even} \\\\\n \\frac{\\left(\\frac{1}{2}\\left(d - 1\\right)\\right)!}{(d - 1)!} 2^d \\pi^{\\frac{1}{2}(d - 1)}\\ & d\\text{ odd}.\n \\end{cases}\n"
},
{
"math_id": 42,
"text": "\\Omega = \\Omega_d \\frac{\\left|\\det(V)\\right|}{(4\\pi)^{d/2}} \\sum_{\\vec a\\in \\N_0^{\\binom {d}{2}}}\n \\left [\n \\frac{(-2)^{\\sum_{i<j} a_{ij}}}{\\prod_{i<j} a_{ij}!}\\prod_i \\Gamma \\left (\\frac{1+\\sum_{m\\neq i} a_{im}}{2} \\right )\n \\right ] \\vec \\alpha^{\\vec a}.\n "
},
{
"math_id": 43,
"text": "\\vec{v}_i"
},
{
"math_id": 44,
"text": "\\alpha_{ij} = \\vec{v}_i\\cdot\\vec{v}_j = \\alpha_{ji}, \\alpha_{ii}=1"
},
{
"math_id": 45,
"text": "\\alpha_{ij},1 \\le i < j \\le d"
},
{
"math_id": 46,
"text": "\\vec \\alpha = (\\alpha_{12},\\dotsc , \\alpha_{1d}, \\alpha_{23}, \\dotsc, \\alpha_{d-1,d}) \\in \\R^{\\binom{d}{2}}"
},
{
"math_id": 47,
"text": "\\vec a=(a_{12}, \\dotsc, a_{1d}, a_{23}, \\dotsc , a_{d-1,d}) \\in \\N_0^{\\binom{d}{2}}, "
},
{
"math_id": 48,
"text": "\\vec \\alpha^{\\vec a}=\\prod \\alpha_{ij}^{a_{ij}}"
},
{
"math_id": 49,
"text": "\\N_0"
},
{
"math_id": 50,
"text": "\\alpha_{ji}"
},
{
"math_id": 51,
"text": "j > i"
},
{
"math_id": 52,
"text": "\\alpha_{ij}"
},
{
"math_id": 53,
"text": "a_{ji}"
},
{
"math_id": 54,
"text": "\\sum_{m \\ne l} a_{lm}"
},
{
"math_id": 55,
"text": "\\vec a"
}
] | https://en.wikipedia.org/wiki?curid=94102 |
9412979 | Pushforward measure | "Pushed forward" from one measurable space to another
In measure theory, a pushforward measure (also known as push forward, push-forward or image measure) is obtained by transferring ("pushing forward") a measure from one measurable space to another using a measurable function.
Definition.
Given measurable spaces formula_0 and formula_1, a measurable mapping formula_2 and a measure formula_3, the pushforward of formula_4 is defined to be the measure formula_5 given by
formula_6 for formula_7
This definition applies "mutatis mutandis" for a signed or complex measure.
The pushforward measure is also denoted as formula_8, formula_9, formula_10, or formula_11.
Properties.
Change of variable formula.
Theorem: A measurable function "g" on "X"2 is integrable with respect to the pushforward measure "f"∗("μ") if and only if the composition formula_12 is integrable with respect to the measure "μ". In that case, the integrals coincide, i.e.,
formula_13
Note that in the previous formula formula_14.
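On a finite measure space the definition and the change-of-variables formula can be checked directly; a minimal sketch (the spaces, map and weights below are arbitrary choices for illustration):

```python
from collections import defaultdict

# A measure mu on X1 = {'a', 'b', 'c'} given by point masses.
mu = {'a': 0.5, 'b': 0.3, 'c': 0.2}

# A measurable map f : X1 -> X2 = {0, 1}.
f = {'a': 0, 'b': 1, 'c': 1}

# Pushforward measure f_* mu on X2: the mass of a set B is mu(f^{-1}(B)).
push = defaultdict(float)
for x, m in mu.items():
    push[f[x]] += m

# A test function g on X2.
g = {0: 10.0, 1: -3.0}

lhs = sum(g[y] * m for y, m in push.items())   # integral of g with respect to f_* mu
rhs = sum(g[f[x]] * m for x, m in mu.items())  # integral of g o f with respect to mu
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)
```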
Functoriality.
Pushforwards of measures allow one to induce, from a function between measurable spaces formula_15, a function between the spaces of measures formula_16.
As with many induced mappings, this construction has the structure of a functor, on the category of measurable spaces.
For the special case of probability measures, this property amounts to functoriality of the Giry monad.
Examples and applications.
Consider a measurable function "f" : "X" → "X" and the composition of "f" with itself "n" times:
formula_17
This iterated function forms a dynamical system. It is often of interest in the study of such systems to find a measure "μ" on "X" that the map "f" leaves unchanged, a so-called invariant measure, i.e. one for which "f"∗("μ") = "μ".
A generalization.
In general, any measurable function can be pushed forward. The push-forward then becomes a linear operator, known as the transfer operator or Frobenius–Perron operator. In finite spaces this operator typically satisfies the requirements of the Frobenius–Perron theorem, and the maximal eigenvalue of the operator corresponds to the invariant measure.
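For intuition, a small sketch on a three-point space (the particular map is an arbitrary choice): the push-forward acts as a 0–1 transfer matrix, and an invariant measure is an eigenvector with eigenvalue 1.

```python
import numpy as np

# A map f on the finite space X = {0, 1, 2}; every point eventually cycles between 1 and 2.
f = {0: 1, 1: 2, 2: 1}

n = 3
# Transfer (Frobenius-Perron / push-forward) matrix: P[j, i] = 1 if f(i) = j,
# so that pushing a measure mu (stored as a vector) forward is the product P @ mu.
P = np.zeros((n, n))
for i, j in f.items():
    P[j, i] = 1.0

# An invariant probability measure is an eigenvector of P with eigenvalue 1.
w, v = np.linalg.eig(P)
k = np.argmin(abs(w - 1.0))
mu = np.real(v[:, k])
mu = mu / mu.sum()
print(mu)                        # [0. , 0.5, 0.5]: uniform on the 2-cycle {1, 2}
assert np.allclose(P @ mu, mu)   # f_*(mu) = mu
```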
The adjoint to the push-forward is the pullback; as an operator on spaces of functions on measurable spaces, it is the composition operator or Koopman operator. | [
{
"math_id": 0,
"text": "(X_1,\\Sigma_1)"
},
{
"math_id": 1,
"text": "(X_2,\\Sigma_2)"
},
{
"math_id": 2,
"text": "f\\colon X_1\\to X_2"
},
{
"math_id": 3,
"text": "\\mu\\colon\\Sigma_1\\to[0,+\\infty]"
},
{
"math_id": 4,
"text": "\\mu"
},
{
"math_id": 5,
"text": "f_{*}(\\mu)\\colon\\Sigma_2\\to[0,+\\infty]"
},
{
"math_id": 6,
"text": "f_{*} (\\mu) (B) = \\mu \\left( f^{-1} (B) \\right)"
},
{
"math_id": 7,
"text": "B \\in \\Sigma_{2}."
},
{
"math_id": 8,
"text": "\\mu \\circ f^{-1}"
},
{
"math_id": 9,
"text": "f_\\sharp \\mu"
},
{
"math_id": 10,
"text": "f \\sharp \\mu"
},
{
"math_id": 11,
"text": "f \\# \\mu"
},
{
"math_id": 12,
"text": "g \\circ f"
},
{
"math_id": 13,
"text": "\\int_{X_2} g \\, d(f_* \\mu) = \\int_{X_1} g \\circ f \\, d\\mu."
},
{
"math_id": 14,
"text": "X_1=f^{-1}(X_2)"
},
{
"math_id": 15,
"text": "f:X\\to Y"
},
{
"math_id": 16,
"text": "M(X)\\to M(Y)"
},
{
"math_id": 17,
"text": "f^{(n)} = \\underbrace{f \\circ f \\circ \\dots \\circ f}_{n \\mathrm{\\, times}} : X \\to X."
},
{
"math_id": 18,
"text": "(X,\\Sigma)"
},
{
"math_id": 19,
"text": "f"
},
{
"math_id": 20,
"text": "\\mu, \\nu"
},
{
"math_id": 21,
"text": "\\forall A\\in \\Sigma: \\ \\mu(A) = 0 \\iff \\nu(A) = 0"
},
{
"math_id": 22,
"text": "\\forall A \\in \\Sigma: \\ \\mu(A) = 0 \\iff f_* \\mu(A) = \\mu\\big(f^{-1}(A)\\big) = 0"
}
] | https://en.wikipedia.org/wiki?curid=9412979 |
9413032 | Natural exponential family | In probability and statistics, a natural exponential family (NEF) is a class of probability distributions that is a special case of an exponential family (EF).
Definition.
Univariate case.
The natural exponential families (NEF) are a subset of the exponential families. A NEF is an exponential family in which the natural parameter "η" and the natural statistic "T"("x") are both the identity. A distribution in an exponential family with parameter "θ" can be written with probability density function (PDF)
formula_0
where formula_1 and formula_2 are known functions.
A distribution in a natural exponential family with parameter θ can thus be written with PDF
formula_3
[Note that slightly different notation is used by the originator of the NEF, Carl Morris. Morris uses "ω" instead of "η" and "ψ" instead of "A".]
General multivariate case.
Suppose that formula_4, then a natural exponential family of order "p" has density or mass function of the form:
formula_5
where in this case the parameter formula_6
Moment and cumulant generating functions.
A member of a natural exponential family has moment generating function (MGF) of the form
formula_7
The cumulant generating function is by definition the logarithm of the MGF, so it is
formula_8
Examples.
The five most important univariate cases are: the normal distribution with known variance, the Poisson distribution, the gamma distribution with known shape parameter, the binomial distribution with known number of trials, and the negative binomial distribution with known number of failures formula_9.
These five examples – Poisson, binomial, negative binomial, normal, and gamma – are a special subset of NEF, called NEF with quadratic variance function (NEF-QVF) because the variance can be written as a quadratic function of the mean. NEF-QVF are discussed below.
Distributions such as the exponential, Bernoulli, and geometric distributions are special cases of the above five distributions. For example, the Bernoulli distribution is a binomial distribution with "n" = 1 trial, the exponential distribution is a gamma distribution with shape parameter α = 1 (or "k" = 1 ), and the geometric distribution is a special case of the negative binomial distribution.
Some exponential family distributions are not NEF. The lognormal and Beta distribution are in the exponential family, but not the natural exponential family.
The gamma distribution with two parameters is an exponential family but not a NEF and the chi-squared distribution is a special case of the gamma distribution with fixed scale
parameter, and thus is also an exponential family but not a NEF (note that only a gamma distribution with fixed shape
parameter is a NEF).
The inverse Gaussian distribution is a NEF with a cubic variance function.
The parameterization of most of the above distributions has been written differently from the parameterization commonly used in textbooks and the above linked pages. For example, the above parameterization differs from the parameterization in the linked article in the Poisson case. The two parameterizations are related by formula_10, where λ is the mean parameter, so that the density may be written as
formula_11
for formula_12, so
formula_13
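A quick numerical check of this re-parameterization (a sketch; the particular values of θ and k are arbitrary):

```python
import math

def poisson_nef_pmf(k, theta):
    """Poisson pmf in natural-exponential-family form:
    h(k) * exp(theta*k - A(theta)) with h(k) = 1/k! and A(theta) = exp(theta)."""
    return (1.0 / math.factorial(k)) * math.exp(theta * k - math.exp(theta))

def poisson_pmf(k, lam):
    """Standard parameterization with mean lambda."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

theta = 0.7
lam = math.exp(theta)   # the parameterizations are related by theta = log(lambda)
for k in range(6):
    assert abs(poisson_nef_pmf(k, theta) - poisson_pmf(k, lam)) < 1e-12
print("NEF form matches the standard Poisson pmf")
```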
This alternative parameterization can greatly simplify calculations in mathematical statistics. For example, in Bayesian inference, a posterior probability distribution is calculated as the product of two distributions. Normally this calculation requires writing out the probability distribution functions (PDF) and integrating; with the above parameterization, however, that calculation can be avoided. Instead, relationships between distributions can be abstracted due to the properties of the NEF described below.
An example of the multivariate case is the multinomial distribution with known number of trials.
Properties.
The properties of the natural exponential family can be used to simplify calculations involving these distributions.
Multivariate case.
In the multivariate case, the mean vector and covariance matrix are
formula_14
where formula_15 is the gradient and formula_16 is the Hessian matrix.
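A small numerical illustration (a sketch): the order-2 family below is built from two independent Poisson components, an arbitrary choice for which A(θ) = exp(θ1) + exp(θ2), so the gradient and Hessian of A can be checked by finite differences.

```python
import numpy as np

def A(theta):
    # Cumulant function of two independent Poisson components in natural form.
    return np.exp(theta).sum()

def num_grad(A, theta, h=1e-5):
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = h
        g[i] = (A(theta + e) - A(theta - e)) / (2 * h)
    return g

def num_hess(A, theta, h=1e-4):
    n = len(theta)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (A(theta + ei + ej) - A(theta + ei - ej)
                       - A(theta - ei + ej) + A(theta - ei - ej)) / (4 * h * h)
    return H

theta = np.array([0.2, -0.5])
print(num_grad(A, theta))   # mean vector, approximately (e^0.2, e^-0.5)
print(num_hess(A, theta))   # covariance matrix, approximately diag(e^0.2, e^-0.5)
```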
Natural exponential families with quadratic variance functions (NEF-QVF).
A special case of the natural exponential families are those with quadratic variance functions.
Six NEFs have quadratic variance functions (QVF) in which the variance of the distribution can be written as a quadratic function of the mean. These are called NEF-QVF. The properties of these distributions were first described by Carl Morris.
formula_17
The six NEF-QVFs.
The six NEF-QVF are written here in increasing complexity of the relationship between variance and mean.
1. The normal distribution with fixed variance, formula_18, has constant variance formula_19, which does not depend on the mean.
2. The Poisson distribution, formula_20, has variance equal to the mean: formula_21.
3. The gamma distribution with fixed shape parameter, formula_22, has mean formula_23 and variance formula_24.
4. The binomial distribution with fixed number of trials, formula_25, has mean formula_26 and variance formula_27 which, written in terms of the mean, is formula_28
5. The negative binomial distribution with fixed number of failures, formula_29, has mean formula_30 and variance formula_31
6. The sixth member is the NEF generated by the generalized hyperbolic secant distribution (NEF-GHS), which has variance function formula_32 and is defined for formula_33
Properties of NEF-QVF.
The properties of NEF-QVF can simplify calculations that use these distributions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " f_X(x\\mid \\theta) = h(x)\\ \\exp\\Big(\\ \\eta(\\theta) T(x) - A(\\theta)\\ \\Big) \\,\\! ,"
},
{
"math_id": 1,
"text": "h(x)"
},
{
"math_id": 2,
"text": "A(\\theta)"
},
{
"math_id": 3,
"text": " f_X(x\\mid \\theta) = h(x)\\ \\exp\\Big(\\ \\theta x - A(\\theta)\\ \\Big) \\,\\! ."
},
{
"math_id": 4,
"text": "\\mathbf{x} \\in \\mathcal{X} \\subseteq \\mathbb{R}^p"
},
{
"math_id": 5,
"text": " f_X(\\mathbf{x} \\mid \\boldsymbol\\theta) = h(\\mathbf{x})\\ \\exp\\Big(\\boldsymbol\\theta^{\\rm T} \\mathbf{x} - A(\\boldsymbol\\theta)\\ \\Big) \\,\\! ,"
},
{
"math_id": 6,
"text": "\\boldsymbol\\theta \\in \\mathbb{R}^p ."
},
{
"math_id": 7,
"text": "M_X(\\mathbf{t}) = \\exp\\Big(\\ A(\\boldsymbol\\theta + \\mathbf{t}) - A(\\boldsymbol\\theta)\\ \\Big) \\, ."
},
{
"math_id": 8,
"text": "K_X(\\mathbf{t}) = A(\\boldsymbol\\theta + \\mathbf{t}) - A(\\boldsymbol\\theta) \\, ."
},
{
"math_id": 9,
"text": "r"
},
{
"math_id": 10,
"text": " \\theta = \\log(\\lambda) "
},
{
"math_id": 11,
"text": "f(k;\\theta) = \\frac{1}{k!} \\exp\\Big(\\ \\theta\\ k - \\exp(\\theta)\\ \\Big) \\ ,"
},
{
"math_id": 12,
"text": " \\theta \\in \\mathbb{R}"
},
{
"math_id": 13,
"text": "h(k) = \\frac{1}{k!}, \\text{ and } A(\\theta) = \\exp(\\theta)\\ ."
},
{
"math_id": 14,
"text": " \\operatorname{E}[X] = \\nabla A(\\boldsymbol\\theta) \\text{ and } \\operatorname{Cov}[X] = \\nabla \\nabla^{\\rm T} A(\\boldsymbol\\theta)\\, ,"
},
{
"math_id": 15,
"text": "\\nabla"
},
{
"math_id": 16,
"text": "\\nabla \\nabla^{\\rm T} "
},
{
"math_id": 17,
"text": " \\operatorname{Var}(X) = V(\\mu) = \\nu_0 + \\nu_1 \\mu + \\nu_2 \\mu^2."
},
{
"math_id": 18,
"text": "X \\sim N(\\mu, \\sigma^2) "
},
{
"math_id": 19,
"text": " \\operatorname{Var}(X) = V(\\mu) = \\sigma^2"
},
{
"math_id": 20,
"text": "X \\sim \\operatorname{Poisson}(\\mu) "
},
{
"math_id": 21,
"text": "\\operatorname{Var}(X) = V(\\mu) = \\mu"
},
{
"math_id": 22,
"text": "X \\sim \\operatorname{Gamma}(r, \\lambda) "
},
{
"math_id": 23,
"text": "\\mu = r\\lambda"
},
{
"math_id": 24,
"text": "\\operatorname{Var}(X) = V(\\mu) = \\mu^2/r"
},
{
"math_id": 25,
"text": " X \\sim \\operatorname{Binomial}(n, p) "
},
{
"math_id": 26,
"text": "\\mu = np"
},
{
"math_id": 27,
"text": " \\operatorname{Var}(X) = np(1-p) "
},
{
"math_id": 28,
"text": "V(X) = - np^2 + np = -\\mu^2/n + \\mu."
},
{
"math_id": 29,
"text": " X \\sim \\operatorname{NegBin}(n, p) "
},
{
"math_id": 30,
"text": "\\mu = np/(1-p)"
},
{
"math_id": 31,
"text": "V(\\mu) = \\mu^2/n + \\mu."
},
{
"math_id": 32,
"text": "V(\\mu) = \\mu^2/n +n"
},
{
"math_id": 33,
"text": "\\mu > 0."
}
] | https://en.wikipedia.org/wiki?curid=9413032 |
9413508 | Pullback | In mathematics, a pullback is either of two different, but related processes: precomposition and fiber-product. Its dual is a pushforward.
Precomposition.
Precomposition with a function probably provides the most elementary notion of pullback: in simple terms, a function formula_0 of a variable formula_1 where formula_2 itself is a function of another variable formula_3 may be written as a function of formula_4 This is the pullback of formula_0 by the function formula_5
formula_6
It is such a fundamental process that it is often passed over without mention.
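In programming terms, this pullback is nothing more than function composition; a trivial sketch:

```python
def pullback(f, y):
    """Pullback of f by y: the precomposition g(x) = f(y(x))."""
    return lambda x: f(y(x))

f = lambda y: y ** 2 + 1   # a function of y
y = lambda x: 3 * x        # y itself given as a function of x
g = pullback(f, y)         # g = f o y, a function of x
print(g(2))                # f(y(2)) = f(6) = 37
```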
However, it is not just functions that can be "pulled back" in this sense. Pullbacks can be applied to many other objects such as differential forms and their cohomology classes; see pullback (differential geometry) and pullback (cohomology).
Fiber-product.
The pullback bundle is an example that bridges the notion of a pullback as precomposition, and the notion of a pullback as a Cartesian square. In that example, the base space of a fiber bundle is pulled back, in the sense of precomposition, above. The fibers then travel along with the points in the base space at which they are anchored: the resulting new pullback bundle looks locally like a Cartesian product of the new base space, and the (unchanged) fiber. The pullback bundle then has two projections: one to the base space, the other to the fiber; the product of the two becomes coherent when treated as a fiber product.
Generalizations and category theory.
The notion of pullback as a fiber-product ultimately leads to the very general idea of a categorical pullback, but it has important special cases: inverse image (and pullback) sheaves in algebraic geometry, and pullback bundles in algebraic topology and differential geometry.
Functional analysis.
When the pullback is studied as an operator acting on function spaces, it becomes a linear operator, and is known as the transpose or composition operator. Its adjoint is the push-forward, or, in the context of functional analysis, the transfer operator.
Relationship.
The relation between the two notions of pullback can perhaps best be illustrated by sections of fiber bundles: if formula_7 is a section of a fiber bundle formula_8 over formula_9 and formula_10 then the pullback (precomposition) formula_11 of "s" with formula_0 is a section of the pullback (fiber-product) bundle formula_12 over formula_13
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "y,"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "x,"
},
{
"math_id": 4,
"text": "x."
},
{
"math_id": 5,
"text": "y."
},
{
"math_id": 6,
"text": "f(y(x)) \\equiv g(x)"
},
{
"math_id": 7,
"text": "s"
},
{
"math_id": 8,
"text": "E"
},
{
"math_id": 9,
"text": "N,"
},
{
"math_id": 10,
"text": "f : M \\to N,"
},
{
"math_id": 11,
"text": "f^* s = s\\circ f"
},
{
"math_id": 12,
"text": "f^*E"
},
{
"math_id": 13,
"text": "M."
}
] | https://en.wikipedia.org/wiki?curid=9413508 |
94139 | Rifling | Grooves in a weapon barrel for accuracy
Rifling is the term for helical grooves machined into the internal surface of a firearm's barrel for imparting a spin to a projectile to improve its aerodynamic stability and accuracy. It is also the term (as a verb) for the process of creating such grooves.
Rifling is measured in "twist rate", the distance the rifling takes to complete one full revolution, expressed as a ratio with 1 as its base (e.g., 1:10 inches, meaning one complete turn every 10 inches of barrel length). A shorter distance/lower ratio indicates a faster twist, generating a higher spin rate (and greater projectile stability).
The combination of length, weight, and shape of a projectile determines the twist rate needed to gyroscopically stabilize it: barrels intended for short, large-diameter projectiles such as spherical lead balls require a very low twist rate, such as 1 turn in 48 inches (122 cm). Barrels intended for long, small-diameter projectiles, such as the ultra-low-drag 80-grain 0.223 inch bullets (5.2 g, 5.56 mm), use twist rates of 1 turn in 8 inches (20 cm) or faster.
Rifling which increases the twist rate from breech to muzzle is called a "gain" or "progressive"
twist; a rate which decreases down the length of a barrel
is undesirable because it cannot reliably stabilize the projectile as it travels down the bore.
An extremely long projectile, such as a flechette, requires impractically high twist rates to stabilize; it is often stabilized aerodynamically instead. An aerodynamically stabilized projectile can be fired from a smoothbore barrel without a reduction in accuracy.
History.
Muskets are smoothbore, large caliber weapons using ball-shaped ammunition fired at relatively low velocity. Due to the high cost, great difficulty of precision manufacturing, and the need to load readily and speedily from the muzzle, musket balls were generally a loose fit in the barrels. Consequently, on firing the balls would often bounce off the sides of the barrel when fired and the final destination after leaving the muzzle was less predictable. This was countered when accuracy was more important, for example when hunting, by using a tighter-fitting combination of a closer-to-bore-sized ball and a patch. The accuracy was improved, but still not reliable for precision shooting over long distances.
Like the invention of gunpowder itself, the inventor of barrel rifling is not yet definitely known. Straight grooving had been applied to small arms since at least 1480, originally intended as "soot grooves" to collect gunpowder residue.
Some of the earliest recorded European attempts of spiral-grooved musket barrels were of Gaspard Kollner, a gunsmith of Vienna in 1498 and Augustus Kotter of Nuremberg in 1520. Some scholars allege that Kollner's works at the end of the 15th century only used straight grooves, and it was not until he received help from Kotter that a working spiral-grooved firearm was made. There may have been attempts even earlier than this, as the main inspiration of rifled firearms came from archers and crossbowmen who realized that their projectiles flew far faster and more accurately when they imparted rotation through twisted fletchings.
Though true rifling dates from the 16th century, it had to be engraved by hand and consequently did not become commonplace until the mid-19th century. Due to the laborious and expensive manufacturing process involved, early rifled firearms were primarily used by wealthy recreational hunters, who did not need to fire their weapons many times in rapid succession and appreciated the increased accuracy. Rifled firearms were not popular with military users since they were difficult to clean, and loading projectiles presented numerous challenges. If the bullet was of sufficient diameter to take up the rifling, a large mallet was required to force it down the bore. If, on the other hand, it was of reduced diameter to assist in its insertion, the bullet would not fully engage the rifling and accuracy was reduced.
The first practical military weapons using rifling with black powder were breech loaders such as the Queen Anne pistol.
Twist rate.
For best performance, the barrel should have a twist rate sufficient to spin stabilize any bullet that it would reasonably be expected to fire, but not significantly more. Large diameter bullets provide more stability, as the larger radius provides more gyroscopic inertia, while long bullets are harder to stabilize, as they tend to be very back-heavy and the aerodynamic pressures have a longer arm ("lever") to act on. The slowest twist rates are found in muzzle-loading firearms meant to fire a round ball; these will have twist rates as low as 1 in 72 inches (183 cm), or slightly longer, although for a typical multi-purpose muzzleloader rifle, a twist rate of 1 in 48 inches (122 cm) is very common. The M16A2 rifle, which is designed to fire the 5.56×45mm NATO SS109 ball and L110 tracer bullets, has a 1 in 7 inches (178 mm) or 32 calibers twist. Civilian AR-15 rifles are commonly found with 1 in 12 inches (305 mm) or 54.8 calibers for older rifles and 1 in 9 inches (229 mm) or 41.1 calibers for most newer rifles, although some are made with 1 in 7 inches (178 mm) or 32 calibers twist rates, the same as used for the M16 rifle. Rifles, which generally fire longer, smaller diameter bullets, will in general have higher twist rates than handguns, which fire shorter, larger diameter bullets.
There are three methods in use to describe the twist rate:
Traditionally, the most common method expresses the twist rate in terms of the 'travel' (length) required to complete one full projectile revolution in the rifled barrel. This method does not give an easy or straightforward understanding of whether a twist rate is "relatively" slow or fast when bores of different diameters are compared.
The second method describes the 'rifled travel' required to complete one full projectile revolution in calibers or bore diameters:
formula_0
where formula_1 is the twist rate expressed in bore diameters; formula_2 is the twist length required to complete one full projectile revolution (in mm or in); and formula_3 is the bore diameter (diameter of the lands, in mm or in).
The twist travel formula_2 and the bore diameter formula_3 must be expressed in a consistent unit of measure, i.e. metric (mm) "or" imperial (in).
The third method simply reports the angle of the grooves relative to the bore axis, measured in degrees.
The latter two methods have the inherent advantage of expressing twist rate as a ratio and give an easy understanding if a twist rate is "relatively" slow or fast even when comparing bores of differing diameters.
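As an illustration, a small sketch converting a twist given as travel per turn into the other two descriptions; the bore and twist values are examples only, and the helix-angle relation used (tangent of the angle equals bore circumference divided by twist length) is a standard geometric identity rather than a formula quoted above:

```python
import math

def twist_in_calibers(twist_length, bore_diameter):
    """Twist expressed as bore diameters of travel per full turn."""
    return twist_length / bore_diameter

def twist_angle_degrees(twist_length, bore_diameter):
    """Angle of the grooves relative to the bore axis."""
    return math.degrees(math.atan(math.pi * bore_diameter / twist_length))

# Example: a 0.219 in bore (across the lands) with a 1-turn-in-7-inches twist.
L, D = 7.0, 0.219
print(twist_in_calibers(L, D))    # about 32 calibers per turn
print(twist_angle_degrees(L, D))  # about 5.6 degrees
```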
In 1879, George Greenhill, a professor of mathematics at the Royal Military Academy (RMA) at Woolwich, London, UK developed a rule of thumb for calculating the optimal twist rate for lead-core bullets. This shortcut uses the bullet's length, needing no allowances for weight or nose shape. The eponymous "Greenhill Formula", still used today, is:
formula_4
where formula_5 is 150 (use 180 for muzzle velocities higher than 2,800 f/s); formula_6 is the bullet's diameter in inches; formula_2 is the bullet's length in inches; and formula_7 is the bullet's specific gravity (10.9 for lead-core bullets, which cancels out the second half of the equation).
The original value of formula_5 was 150, which yields a twist rate in inches per turn, when given the diameter formula_6 and the length formula_2 of the bullet in inches. This works to velocities of about 840 m/s (2800 ft/s); above those velocities, a formula_5 of 180 should be used. For instance, with a velocity of 600 m/s (2000 ft/s), a diameter of 0.5 inches (12.7 mm) and a length of 1.5 inches (38.1 mm), the Greenhill formula would give a value of 25, which means 1 turn in 25 inches (635 mm).
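A short sketch of the Greenhill rule using the worked example above:

```python
import math

def greenhill_twist(diameter_in, length_in, specific_gravity=10.9, C=150):
    """Greenhill's rule of thumb: twist in inches per turn for a lead-core bullet.
    Use C=180 for muzzle velocities above roughly 2,800 ft/s."""
    return C * diameter_in ** 2 / length_in * math.sqrt(specific_gravity / 10.9)

# The example above: a lead-core bullet 0.5 in in diameter and 1.5 in long.
print(greenhill_twist(0.5, 1.5))   # 25.0, i.e. one turn in 25 inches
```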
Improved formulas for determining stability and twist rates include the Miller Twist Rule and the McGyro program developed by Bill Davis and Robert McCoy.
If an insufficient twist rate is used, the bullet will begin to yaw and then tumble; this is usually seen as "keyholing", where bullets leave elongated holes in the target as they strike at an angle. Once the bullet starts to yaw, any hope of accuracy is lost, as the bullet will begin to veer off in random directions as it precesses.
Conversely, too high a rate of twist can also cause problems. Excessive twist can cause accelerated barrel wear and, coupled with high velocities, can also induce a very high spin rate that may rupture the projectile's jacket, causing high-velocity spin-stabilized projectiles to disintegrate in flight. Projectiles made out of mono metals cannot practically achieve flight and spin velocities high enough that they disintegrate in flight due to their spin rate. Smokeless powder can produce very high muzzle velocities for spin-stabilized projectiles, and the more advanced propellants used in smoothbore tank guns can produce even higher muzzle velocities. A higher twist than needed can also cause more subtle problems with accuracy: any inconsistency within the bullet, such as a void that causes an unequal distribution of mass, may be magnified by the spin. Undersized bullets also have problems, as they may not enter the rifling exactly concentric and coaxial to the bore, and excess twist will exacerbate the accuracy problems this causes.
A bullet fired from a rifled barrel can spin at over 300,000 rpm (5 kHz), depending on the bullet's muzzle velocity and the barrel's twist rate.
The general definition of the spin formula_8 of an object rotating around a single axis can be written as:
formula_9
where formula_10 is the linear velocity of a point in the rotating object (in units of distance/time) and formula_5 refers to the circumference of the circle that this measuring point performs around the axis of rotation.
A bullet that matches the rifling of the firing barrel will exit that barrel with a spin:
formula_11
where formula_12 is the muzzle velocity and formula_2 is the twist rate.
For example, an M4 Carbine with a twist rate of 1 in 7 inches (177.8 mm) and a muzzle velocity of 930 m/s (3,050 ft/s) will give the bullet a spin of 930 m/s / 0.1778 m = 5.2 kHz (314,000 rpm).
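A one-line check of this example (a sketch):

```python
def spin_rate_hz(muzzle_velocity_mps, twist_m):
    """Rotational frequency (Hz) of a bullet leaving the barrel: S = v0 / L."""
    return muzzle_velocity_mps / twist_m

v0 = 930.0           # muzzle velocity in m/s (M4 carbine example above)
twist = 7 * 0.0254   # one turn in 7 inches, converted to metres (0.1778 m)
s = spin_rate_hz(v0, twist)
print(s, s * 60)     # about 5,230 Hz, i.e. roughly 314,000 rpm
```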
Excessive rotational speed can exceed the bullet's designed limits and the resulting centrifugal force can cause the bullet to disintegrate radially during flight.
Design.
A barrel of circular bore cross-section is not capable of imparting a spin to a projectile, so a rifled barrel has a non-circular cross-section. Typically the rifled barrel contains one or more grooves that run down its length, giving it a cross-section resembling an internal gear, though it can also take the shape of a polygon, usually with rounded corners. Since the barrel is not circular in cross-section, it cannot be accurately described with a single diameter. Rifled bores may be described by the "bore diameter" (the diameter across the "lands" or high points in the rifling), or by "groove diameter" (the diameter across the "grooves" or low points in the rifling). Differences in naming conventions for cartridges can cause confusion; for example, the projectiles of the .303 British are actually slightly larger in diameter than the projectiles of the .308 Winchester, because the ".303" refers to the bore diameter in inches (bullet is .312), while the ".308" refers to the bullet diameter in inches (7.92 mm and 7.82 mm, respectively).
Despite differences in form, the common goal of rifling is to deliver the projectile accurately to the target. In addition to imparting the spin to the bullet, the barrel must hold the projectile securely and concentrically as it travels down the barrel. This requires that the rifling meet a number of tasks:
Rifling may not begin immediately forward of the chamber. There may be an unrifled throat ahead of the chamber so a cartridge may be chambered without pushing the bullet into the rifling. This reduces the force required to load a cartridge into the chamber, and prevents leaving a bullet stuck in the rifling when an unfired cartridge is removed from the chamber. The specified diameter of the throat may be somewhat greater than groove diameter, and may be enlarged by use if hot powder gas melts the interior barrel surface when the rifle is fired. Freebore is a groove-diameter length of smoothbore barrel without lands forward of the throat. Freebore allows the bullet to transition from static friction to sliding friction and gain linear momentum prior to encountering the resistance of increasing rotational momentum. Freebore may allow more effective use of propellants by reducing the initial pressure peak during the minimum volume phase of internal ballistics before the bullet starts moving down the barrel. Barrels with freebore length exceeding the rifled length have been known by a variety of trade names including paradox.
Manufacture.
An early method of introducing rifling to a pre-drilled barrel was to use a cutter mounted on a square-section rod, accurately twisted into a spiral of the desired pitch, mounted in two fixed square-section holes. As the cutter was advanced through the barrel it twisted at a uniform rate governed by the pitch. The first cut was shallow. The cutter points were gradually expanded as repeated cuts were made. The blades were in slots in a wooden dowel which were gradually packed out with slips of paper until the required depth was obtained. The process was finished off by casting a slug of molten lead into the barrel, withdrawing it and using it with a paste of emery and oil to smooth the bore.
Most rifling is created by either: cutting one groove at a time with a machine tool (cut rifling or single-point cut rifling); cutting all grooves in one pass with a special progressive broaching bit (broached rifling); pressing all grooves at once with a tool called a "button" that is pushed or pulled through the barrel (button rifling); or forging the barrel over a mandrel containing a reverse image of the rifling (hammer forging).
The "grooves" are the spaces that are cut out, and the resulting ridges are called "lands". These lands and grooves can vary in number, depth, shape, direction of twist (right or left), and twist rate. The spin imparted by rifling significantly improves the stability of the projectile, improving both range and accuracy. Typically rifling is a constant rate down the barrel, usually measured by the length of travel required to produce a single turn. Occasionally firearms are encountered with a "gain twist", where the rate of spin increases from chamber to muzzle. While intentional gain twists are rare, due to manufacturing variance, a slight gain twist is in fact fairly common. Since a reduction in twist rate is very detrimental to accuracy, gunsmiths who are machining a new barrel from a rifled blank will often measure the twist carefully so they may put the faster rate, no matter how minute the difference is, at the muzzle end.
Projectiles.
The original firearms were loaded from the muzzle by forcing a ball from the muzzle to the chamber. Whether using a rifled or smooth bore, a good fit was needed to seal the bore and provide the best possible accuracy from the gun. To ease the force required to load the projectile, these early guns used an undersized ball, and a patch made of cloth, paper, or leather to fill the "windage" (the gap between the ball and the walls of the bore). The patch acted as a wadding and provided some degree of pressure sealing, kept the ball seated on the charge of black powder, and kept the ball concentric to the bore. In rifled barrels, the patch also provided a means to transfer the spin from the rifling to the bullet, as the patch is engraved rather than the ball. Until the advent of the hollow-based Minié ball, which expands and obturates upon firing to seal the bore and engage the rifling, the patch provided the best means of getting the projectile to engage the rifling.
In breech-loading firearms, the task of seating the projectile into the rifling is handled by the "throat" of the chamber. Next is the "freebore", which is the portion of the throat down which the projectile travels before the rifling starts. The last section of the throat is the "throat angle", where the throat transitions into the rifled barrel.
The throat is usually sized slightly larger than the projectile, so the loaded cartridge can be inserted and removed easily, but the throat should be as close as practical to the groove diameter of the barrel. Upon firing, the projectile expands under the pressure from the chamber, and obturates to fit the throat. The bullet then travels down the throat and engages the rifling, where it is engraved, and begins to spin. Engraving the projectile requires a significant amount of force, and in some firearms there is a significant amount of freebore, which helps keep chamber pressures low by allowing the propellant gases to expand before being required to engrave the projectile. Minimizing freebore improves accuracy by decreasing the chance that a projectile will distort before entering the rifling.
When the projectile is swaged into the rifling, it takes on a mirror image of the rifling, as the lands push into the projectile in a process called "engraving". Engraving takes on not only the major features of the bore, such as the lands and grooves, but also minor features, like scratches and tool marks. The relationship between the bore characteristics and the engraving on the projectile are often used in forensic ballistics.
Recent developments.
The grooves most commonly used in modern rifling have fairly sharp edges. More recently, polygonal rifling, a throwback to the earliest types of rifling, has become popular, especially in handguns. Polygonal barrels tend to have longer service lives because the reduction of the sharp edges of the land (the grooves are the spaces that are cut out, and the resulting ridges are called lands) reduces erosion of the barrel. Supporters of polygonal rifling also claim higher velocities and greater accuracy. Polygonal rifling is currently seen on pistols from CZ, Heckler & Koch, Glock, Tanfoglio, and the Kahr Arms (P series only), as well as the Desert Eagle.
For field artillery pieces, the "extended range, full bore" (ERFB) concept, developed in the early 1970s by Dennis Hyatt Jenkins and Luis Palacio of Gerald Bull's Space Research Corporation for the GC-45 howitzer, replaces the bourrelet with small nubs that fit tightly into the lands of the barrel. Guns capable of firing these projectiles have achieved significant increases in range, but this comes at the cost of significantly (3–4 times) decreased accuracy, which is why they were not adopted by NATO militaries. Unlike a shell narrower than the gun's bore with a sabot, ERFB shells use the full bore, permitting a larger payload. Examples include the South African G5 and the German PzH 2000. ERFB may be combined with base bleed.
Variable pitch rifling.
A "gain-twist" or "progressive rifling" begins with a slow twist rate that gradually increases down the bore, resulting in very little initial change in the projectile's angular momentum during the first few inches of bullet travel after it enters the throat. This enables the bullet to remain essentially undisturbed and trued to the case mouth. After engaging the rifling at the throat, the bullet is progressively subjected to accelerated angular momentum as it is propelled down the barrel. The theoretical advantage is that by gradually increasing the spin rate, torque is imparted along a much longer bore length, allowing thermomechanical stress to be spread over a larger area rather than being focused predominantly at the throat, which typically wears out much faster than other parts of the barrel. Gain-twist rifling was used prior to and during the American Civil War (1861–65). Colt Army and Navy revolvers both employed gain-twist rifling. Gain-twist rifling, however, is more difficult to produce than uniform rifling, and therefore is more expensive. The military has used gain-twist rifling in a variety of weapons such as the M61 Vulcan Gatling gun used in some current fighter jets and the larger GAU-8 Avenger Gatling gun used in the A-10 Thunderbolt II close air support jet. In these applications it allows lighter construction of the barrels by decreasing chamber pressures through the use of low initial twist rates but ensuring the projectiles have sufficient stability once they leave the barrel. It is seldom used in commercially available products, one notable exception being the Smith & Wesson Model 460 (X-treme Velocity Revolver).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{twist} = \\frac{L}{D_\\text{bore}},"
},
{
"math_id": 1,
"text": "\\text{twist}"
},
{
"math_id": 2,
"text": "L"
},
{
"math_id": 3,
"text": "D_\\text{bore}"
},
{
"math_id": 4,
"text": "\\text{twist} = \\frac{C D^2}{L} \\times \\sqrt{\\frac{\\mathrm{SG}}{10.9}}"
},
{
"math_id": 5,
"text": "C"
},
{
"math_id": 6,
"text": "D"
},
{
"math_id": 7,
"text": "\\mathrm{SG}"
},
{
"math_id": 8,
"text": "S"
},
{
"math_id": 9,
"text": "S = \\frac{\\upsilon}{C}"
},
{
"math_id": 10,
"text": "\\upsilon"
},
{
"math_id": 11,
"text": "S = \\frac{\\upsilon_0}{L}"
},
{
"math_id": 12,
"text": "\\upsilon_0"
}
] | https://en.wikipedia.org/wiki?curid=94139 |
9414205 | Myrosinase | Class of enzymes
Myrosinase (EC 3.2.1.147, "thioglucoside glucohydrolase", "sinigrinase", and "sinigrase") is a family of enzymes involved in plant defense against herbivores, specifically the mustard oil bomb. The three-dimensional structure has been elucidated and is available in the PDB (see links in the infobox).
A member of the glycoside hydrolase family, myrosinase possesses several similarities with the more ubiquitous O-glycosidases. However, myrosinase is the only known enzyme found in nature that can cleave a thio-linked glucose. Its known biological function is to catalyze the hydrolysis of a class of compounds called glucosinolates.
Myrosinase activity.
Myrosinase is regarded as a defense-related enzyme and is capable of hydrolyzing glucosinolates into various compounds, some of which are toxic.
Mechanism.
Myrosinase catalyzes the chemical reaction
a thioglucoside + H2O formula_0 a sugar + a thiol
Thus, the two substrates of this enzyme are thioglucoside and H2O, whereas its two products are sugar and thiol.
In the presence of water, myrosinase cleaves off the glucose group from a glucosinolate. The remaining molecule then quickly converts to a thiocyanate, an isothiocyanate, or a nitrile; these are the active substances that serve as defense for the plant. The hydrolysis of glucosinolates by myrosinase can yield a variety of products, depending on various physiological conditions such as pH and the presence of certain cofactors. All known reactions have been observed to share the same initial steps. "(See Figure 2.)" First, the β-thioglucoside linkage is cleaved by myrosinase, releasing D-glucose. The resulting aglycone undergoes a spontaneous Lossen-like rearrangement, releasing a sulfate. The last step in the mechanism is subject to the greatest variety depending on the physiological conditions under which the reaction takes place. At neutral pH, the primary product is the isothiocyanate. Under acidic conditions (pH < 3), and in the presence of ferrous ions or epithiospecifer proteins, the formation of nitriles is favored instead.
Cofactors and inhibitors.
Ascorbate is a known cofactor of myrosinase, serving as a base catalyst in glucosinolate hydrolysis.
For example, myrosinase isolated from daikon ("Raphanus sativus") demonstrated an increase in Vmax from 2.06 μmol/min per mg of protein to 280 μmol/min per mg of protein on the substrate allyl glucosinolate (sinigrin) in the presence of 500 μM ascorbate.
Sulfate, a byproduct of glucosinolate hydrolysis, has been identified as a competitive inhibitor of myrosinase.
In addition, 2-F-2-deoxybenzylglucosinolate, which was synthesized specifically to study the mechanism of myrosinase, inhibits the enzyme by trapping one of the glutamic acid residues in the active site, Glu 409.
Structure.
Myrosinase exists as a dimer with subunits of 60-70 kDa each.
X-ray crystallography of myrosinase isolated from "Sinapis alba" revealed the two subunits are linked by a zinc atom.
The prominence of salt bridges, disulfide bridges, hydrogen bonding, and glycosylation are thought to contribute to the enzyme’s stability, especially when the plant is under attack and experiences severe tissue damage.
Many β-glucosidases have catalytic glutamate residues at their active sites, but two of these have been replaced by a single glutamine residue in myrosinase. Ascorbate has been shown to substitute for the activity of the glutamate residues. "(See Figure 3 for mechanism.)"
Biological function.
Myrosinase and its natural substrate, glucosinolate, are known to be part of the plant’s defense response. When the plant is attacked by pathogens, insects, or other herbivores, the plant uses myrosinase to convert glucosinolates, which are otherwise-benign, into toxic products like isothiocyanates, thiocyanates, and nitriles.
Compartmentalization in plants.
The glucosinolate-myrosinase defensive system is packaged in the plant in a unique manner. Plants store myrosinase and glucosinolates apart by compartmentalization, such that the latter are released and activated only when the plant is under attack.
Myrosinase is stored largely as myrosin grains in the vacuoles of particular idioblasts called myrosin cells, but have also been reported in protein bodies or vacuoles, and as cytosolic enzymes that tend to bind to membranes. Glucosinolates are stored in adjacent but separate "S-cells."
When the plant experiences tissue damage, the myrosinase comes into contact with glucosinolates, quickly activating them into their potent, antibacterial form. The most potent of such products are isothiocyanates, followed by thiocyanates and nitriles.
Evolution.
Plants known to have evolved a myrosinase-glucosinolate defense system include: white mustard ("Sinapis alba"),
garden cress ("Lepidium sativum"),
wasabi ("Wasabia japonica"), and daikon ("Raphanus sativus"),
as well as several members of the family Brassicaceae, including
yellow mustard ("Brassica juncea"),
rape seed ("Brassica napus"), and common dietary brassicas like broccoli, cauliflower, cabbage, bok choy, and kale.
The bitter aftertaste of many of these vegetables can often be attributed to the hydrolysis of glucosinolates upon tissue damage during food preparation or when consuming these vegetables raw. Papaya seeds use this method of defense, but not the fruit pulp itself.
Myrosinase has also been isolated from the cabbage aphid. This suggests coevolution of the cabbage aphid with its main food source. The aphid employs a similar defense strategy to plants. Like its main food source, the cabbage aphid compartmentalizes its native myrosinase and the glucosinolates it ingests. When the cabbage aphid is attacked and its tissues are damaged, its stored glucosinolates are activated, producing isothiocyanates and deterring predators from attacking other aphids.
Historical relevance and modern applications.
Agriculture.
Historically, crops like rapeseed that contained the glucosinolate-myrosinase system were deliberately bred to minimize glucosinolate content, since rapeseed in animal feed was proving toxic to livestock.
The glucosinolate-myrosinase system has been investigated as a possible biofumigant to protect crops against pests. The potent glucosinolate hydrolysis products (GHPs) could be sprayed onto crops to deter herbivory. Another option would be to use techniques in genetic engineering to introduce the glucosinolate-myrosinase system in crops as a means of fortifying their resistance against pests.
Health effects.
Isothiocyanates, the primary products of glucosinolate hydrolysis, have been known to prevent iodine uptake in the thyroid, causing goiters. Isothiocyanates in high concentrations may cause hepatotoxicity. There is insufficient scientific evidence that consuming cruciferous vegetables, and thereby increasing the intake of isothiocyanates, affects the risk of human diseases.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=9414205 |
9414239 | Paley–Zygmund inequality | In mathematics, the Paley–Zygmund inequality bounds the
probability that a positive random variable is small, in terms of
its first two moments. The inequality was
proved by Raymond Paley and Antoni Zygmund.
Theorem: If "Z" ≥ 0 is a random variable with
finite variance, and if formula_0, then
formula_1
Proof: First,
formula_2
The first addend is at most formula_3, while the second is at most formula_4 by the Cauchy–Schwarz inequality. The desired inequality then follows. ∎
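A quick Monte Carlo illustration of the bound (a sketch; the exponential distribution, the choice θ = 1/2 and the sample size are arbitrary):

```python
import random

random.seed(0)
n = 200_000
Z = [random.expovariate(1.0) for _ in range(n)]   # Z >= 0 with E[Z] = 1, E[Z^2] = 2

EZ = sum(Z) / n
EZ2 = sum(z * z for z in Z) / n

theta = 0.5
lhs = sum(z > theta * EZ for z in Z) / n   # empirical P(Z > theta * E[Z])
rhs = (1 - theta) ** 2 * EZ ** 2 / EZ2     # Paley-Zygmund lower bound
print(lhs, rhs)                            # about 0.61 versus 0.125
assert lhs >= rhs
```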
Related inequalities.
The Paley–Zygmund inequality can be written as
formula_5
This can be improved. By the Cauchy–Schwarz inequality,
formula_6
which, after rearranging, implies that
formula_7
This inequality is sharp; equality is achieved if Z almost surely equals a positive constant.
In turn, this implies another convenient form (known as Cantelli's inequality) which is
formula_8
where formula_9 and formula_10.
This follows from the substitution formula_11 valid when formula_12.
A strengthened form of the Paley-Zygmund inequality states that if Z is a non-negative random variable then
formula_13
for every formula_14.
This inequality follows by applying the usual Paley-Zygmund inequality to the conditional distribution of Z given that it is positive and noting that the various factors of formula_15 cancel.
Both this inequality and the usual Paley-Zygmund inequality also admit formula_16 versions: If Z is a non-negative random variable and formula_17 then
formula_18
for every formula_14. This follows by the same proof as above but using Hölder's inequality in place of the Cauchy-Schwarz inequality.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "0 \\le \\theta \\le 1"
},
{
"math_id": 1,
"text": "\n\\operatorname{P}( Z > \\theta\\operatorname{E}[Z] )\n\\ge (1-\\theta)^2 \\frac{\\operatorname{E}[Z]^2}{\\operatorname{E}[Z^2]}.\n"
},
{
"math_id": 2,
"text": "\n\\operatorname{E}[Z] = \\operatorname{E}[ Z \\, \\mathbf{1}_{\\{ Z \\le \\theta \\operatorname{E}[Z] \\}}] + \\operatorname{E}[ Z \\, \\mathbf{1}_{\\{ Z > \\theta \\operatorname{E}[Z] \\}} ].\n"
},
{
"math_id": 3,
"text": "\\theta \\operatorname{E}[Z]"
},
{
"math_id": 4,
"text": " \\operatorname{E}[Z^2]^{1/2} \\operatorname{P}( Z > \\theta\\operatorname{E}[Z])^{1/2} "
},
{
"math_id": 5,
"text": "\n\\operatorname{P}( Z > \\theta \\operatorname{E}[Z] )\n\\ge \\frac{(1-\\theta)^2 \\, \\operatorname{E}[Z]^2}{\\operatorname{Var} Z + \\operatorname{E}[Z]^2}.\n"
},
{
"math_id": 6,
"text": "\n\\operatorname{E}[Z - \\theta \\operatorname{E}[Z]]\n\\le \\operatorname{E}[ (Z - \\theta \\operatorname{E}[Z]) \\mathbf{1}_{\\{ Z > \\theta \\operatorname{E}[Z] \\}} ]\n\\le \\operatorname{E}[ (Z - \\theta \\operatorname{E}[Z])^2 ]^{1/2} \\operatorname{P}( Z > \\theta \\operatorname{E}[Z] )^{1/2}\n"
},
{
"math_id": 7,
"text": "\n\\operatorname{P}(Z > \\theta \\operatorname{E}[Z])\n\\ge \\frac{(1-\\theta)^2 \\operatorname{E}[Z]^2}{\\operatorname{E}[( Z - \\theta \\operatorname{E}[Z] )^2]}\n= \\frac{(1-\\theta)^2 \\operatorname{E}[Z]^2}{\\operatorname{Var} Z + (1-\\theta)^2 \\operatorname{E}[Z]^2}.\n"
},
{
"math_id": 8,
"text": "\n\\operatorname{P}(Z > \\mu - \\theta \\sigma)\n\\ge \\frac{\\theta^2}{1+\\theta^2},\n"
},
{
"math_id": 9,
"text": "\\mu=\\operatorname{E}[Z]"
},
{
"math_id": 10,
"text": "\\sigma^2 = \\operatorname{Var}[Z]"
},
{
"math_id": 11,
"text": "\\theta = 1-\\theta'\\sigma/\\mu"
},
{
"math_id": 12,
"text": "0\\le \\mu - \\theta \\sigma\\le\\mu"
},
{
"math_id": 13,
"text": "\n\\operatorname{P}( Z > \\theta \\operatorname{E}[Z \\mid Z > 0] )\n\\ge \\frac{(1-\\theta)^2 \\, \\operatorname{E}[Z]^2}{\\operatorname{E}[Z^2]}\n"
},
{
"math_id": 14,
"text": " 0 \\leq \\theta \\leq 1 "
},
{
"math_id": 15,
"text": "\\operatorname{P}(Z>0)"
},
{
"math_id": 16,
"text": " L^p "
},
{
"math_id": 17,
"text": " p > 1 "
},
{
"math_id": 18,
"text": "\n\\operatorname{P}( Z > \\theta \\operatorname{E}[Z \\mid Z > 0] )\n\\ge \\frac{(1-\\theta)^{p/(p-1)} \\, \\operatorname{E}[Z]^{p/(p-1)}}{\\operatorname{E}[Z^p]^{1/(p-1)}}.\n"
}
] | https://en.wikipedia.org/wiki?curid=9414239 |
94158 | Lagrange inversion theorem | Formula for the Taylor series expansion of the inverse function of an analytic function
In mathematical analysis, the Lagrange inversion theorem, also known as the Lagrange–Bürmann formula, gives the Taylor series expansion of the inverse function of an analytic function. Lagrange inversion is a special case of the inverse function theorem.
Statement.
Suppose z is defined as a function of w by an equation of the form
formula_0
where f is analytic at a point a and formula_1 Then it is possible to "invert" or "solve" the equation for w, expressing it in the form formula_2 given by a power series
formula_3
where
formula_4
The theorem further states that this series has a non-zero radius of convergence, i.e., formula_5 represents an analytic function of z in a neighbourhood of formula_6 This is also called reversion of series.
If the assertions about analyticity are omitted, the formula is also valid for formal power series and can be generalized in various ways: It can be formulated for functions of several variables; it can be extended to provide a ready formula for "F"("g"("z")) for any analytic function F; and it can be generalized to the case formula_7 where the inverse g is a multivalued function.
The theorem was proved by Lagrange and generalized by Hans Heinrich Bürmann, both in the late 18th century. There is a straightforward derivation using complex analysis and contour integration; the complex formal power series version is a consequence of knowing the formula for polynomials, so the theory of analytic functions may be applied. Actually, the machinery from analytic function theory enters only in a formal way in this proof, in that what is really needed is some property of the formal residue, and a more direct formal proof is available.
If f is a formal power series, then the above formula does not give the coefficients of the compositional inverse series g directly in terms for the coefficients of the series f. If one can express the functions f and g in formal power series as
formula_8
with "f"0 = 0 and "f"1 ≠ 0, then an explicit form of inverse coefficients can be given in term of Bell polynomials:
formula_9
where
formula_10
is the rising factorial.
When "f"1 = 1, the last formula can be interpreted in terms of the faces of associahedra
formula_11
where formula_12 for each face formula_13 of the associahedron formula_14
Example.
For instance, the algebraic equation of degree p
formula_15
can be solved for x by means of the Lagrange inversion formula for the function "f"("x") = "x" − "x""p", resulting in a formal series solution
formula_16
By convergence tests, this series is in fact convergent for formula_17 which is also the largest disk in which a local inverse to f can be defined.
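A small numerical check of this expansion (a sketch; the choices p = 3 and z = 0.1 are arbitrary but lie inside the stated disk of convergence):

```python
from math import comb

def series_root(z, p, terms=30):
    """Partial sum of the Lagrange inversion series for the root of x^p - x + z = 0
    that tends to 0 with z."""
    return sum(comb(p * k, k) * z ** ((p - 1) * k + 1) / ((p - 1) * k + 1)
               for k in range(terms))

p, z = 3, 0.1
x = series_root(z, p)

# Compare with the fixed-point iteration x <- z + x^p, which converges for small z.
y = z
for _ in range(100):
    y = z + y ** p

print(x, y, x ** p - x + z)   # the two agree, and the residual is essentially 0
```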
Applications.
Lagrange–Bürmann formula.
There is a special case of the Lagrange inversion theorem that is used in combinatorics and applies when formula_18 for some analytic formula_19 with formula_20 Take formula_21 to obtain formula_22 Then for the inverse formula_5 (satisfying formula_23), we have
formula_24
which can be written alternatively as
formula_25
where formula_26 is an operator which extracts the coefficient of formula_27 in the Taylor series of a function of w.
A generalization of the formula is known as the Lagrange–Bürmann formula:
formula_28
where "H" is an arbitrary analytic function.
Sometimes, the derivative "H′"("w") can be quite complicated. A simpler version of the formula replaces "H′"("w") with "H"("w")(1 − "φ′"("w")/"φ"("w")) to get
formula_29
which involves "φ′"("w") instead of "H′"("w").
Lambert "W" function.
The Lambert W function is the function formula_30 that is implicitly defined by the equation
formula_31
We may use the theorem to compute the Taylor series of formula_30 at formula_32 We take formula_33 and formula_34 Recognizing that
formula_35
this gives
formula_36
The radius of convergence of this series is formula_37 (giving the principal branch of the Lambert function).
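A short sketch checking the series numerically at a point inside the radius of convergence (z = 0.2 is an arbitrary choice):

```python
import math

def lambert_w_series(z, terms=40):
    """Partial sum of the Taylor series of the principal branch of W at 0:
    W(z) = sum_{n>=1} (-n)^(n-1) z^n / n!, convergent for |z| < 1/e."""
    return sum((-n) ** (n - 1) * z ** n / math.factorial(n) for n in range(1, terms))

z = 0.2                     # well inside the radius of convergence 1/e
w = lambert_w_series(z)
print(w, w * math.exp(w))   # the second value recovers z, so W(z) e^{W(z)} = z holds
```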
A series that converges for formula_38 (approximately formula_39) can also be derived by series inversion. The function formula_40 satisfies the equation
formula_41
Then formula_42 can be expanded into a power series and inverted. This gives a series for formula_43
formula_44
formula_45 can be computed by substituting formula_46 for z in the above series. For example, substituting −1 for z gives the value of formula_47
Binary trees.
Consider the set formula_48 of unlabelled binary trees. An element of formula_48 is either a leaf of size zero, or a root node with two subtrees. Denote by formula_49 the number of binary trees on formula_50 nodes.
Removing the root splits a binary tree into two trees of smaller size. This yields the functional equation on the generating function formula_51
formula_52
Letting formula_53, one thus has formula_54 Applying the theorem with formula_55 then yields
formula_56
This shows that formula_49 is the nth Catalan number.
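A brief sketch verifying this count against the recursion behind the functional equation (the range of n is an arbitrary choice):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def count_trees(n):
    """Number of unlabelled binary trees with n nodes, via the root-removal recursion."""
    if n == 0:
        return 1
    return sum(count_trees(k) * count_trees(n - 1 - k) for k in range(n))

for n in range(10):
    assert count_trees(n) == comb(2 * n, n) // (n + 1)   # the n-th Catalan number
print([count_trees(n) for n in range(10)])               # 1, 1, 2, 5, 14, 42, ...
```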
Asymptotic approximation of integrals.
In the Laplace–Erdelyi theorem that gives the asymptotic approximation for Laplace-type integrals, the function inversion is taken as a crucial step.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "z = f(w)"
},
{
"math_id": 1,
"text": "f'(a)\\neq 0."
},
{
"math_id": 2,
"text": "w=g(z)"
},
{
"math_id": 3,
"text": " g(z) = a + \\sum_{n=1}^{\\infty} g_n \\frac{(z - f(a))^n}{n!}, "
},
{
"math_id": 4,
"text": " g_n = \\lim_{w \\to a} \\frac{d^{n-1}}{dw^{n-1}} \\left[\\left( \\frac{w-a}{f(w) - f(a)} \\right)^n \\right]. "
},
{
"math_id": 5,
"text": "g(z)"
},
{
"math_id": 6,
"text": "z= f(a)."
},
{
"math_id": 7,
"text": "f'(a)=0,"
},
{
"math_id": 8,
"text": "f(w) = \\sum_{k=0}^\\infty f_k \\frac{w^k}{k!} \\qquad \\text{and} \\qquad g(z) = \\sum_{k=0}^\\infty g_k \\frac{z^k}{k!}"
},
{
"math_id": 9,
"text": " g_n = \\frac{1}{f_1^n} \\sum_{k=1}^{n-1} (-1)^k n^\\overline{k} B_{n-1,k}(\\hat{f}_1,\\hat{f}_2,\\ldots,\\hat{f}_{n-k}), \\quad n \\geq 2, "
},
{
"math_id": 10,
"text": "\\begin{align}\n \\hat{f}_k &= \\frac{f_{k+1}}{(k+1)f_{1}}, \\\\\n g_1 &= \\frac{1}{f_{1}}, \\text{ and} \\\\\n n^{\\overline{k}} &= n(n+1)\\cdots (n+k-1)\n \\end{align}"
},
{
"math_id": 11,
"text": " g_n = \\sum_{F \\text{ face of } K_n} (-1)^{n-\\dim F} f_F , \\quad n \\geq 2, "
},
{
"math_id": 12,
"text": " f_{F} = f_{i_{1}} \\cdots f_{i_{m}} "
},
{
"math_id": 13,
"text": " F = K_{i_1} \\times \\cdots \\times K_{i_m} "
},
{
"math_id": 14,
"text": " K_n ."
},
{
"math_id": 15,
"text": " x^p - x + z= 0"
},
{
"math_id": 16,
"text": " x = \\sum_{k=0}^\\infty \\binom{pk}{k} \\frac{z^{(p-1)k+1} }{(p-1)k+1} . "
},
{
"math_id": 17,
"text": "|z| \\leq (p-1)p^{-p/(p-1)},"
},
{
"math_id": 18,
"text": "f(w)=w/\\phi(w)"
},
{
"math_id": 19,
"text": "\\phi(w)"
},
{
"math_id": 20,
"text": "\\phi(0)\\ne 0."
},
{
"math_id": 21,
"text": "a=0"
},
{
"math_id": 22,
"text": "f(a)=f(0)=0."
},
{
"math_id": 23,
"text": "f(g(z))\\equiv z"
},
{
"math_id": 24,
"text": "\\begin{align}\n g(z) &= \\sum_{n=1}^{\\infty} \\left[ \\lim_{w \\to 0} \\frac {d^{n-1}}{dw^{n-1}} \\left(\\left( \\frac{w}{w/\\phi(w)} \\right)^n \\right)\\right] \\frac{z^n}{n!} \\\\\n {} &= \\sum_{n=1}^{\\infty} \\frac{1}{n} \\left[\\frac{1}{(n-1)!} \\lim_{w \\to 0} \\frac{d^{n-1}}{dw^{n-1}} (\\phi(w)^n) \\right] z^n,\n \\end{align}"
},
{
"math_id": 25,
"text": "[z^n] g(z) = \\frac{1}{n} [w^{n-1}] \\phi(w)^n,"
},
{
"math_id": 26,
"text": "[w^r]"
},
{
"math_id": 27,
"text": "w^r"
},
{
"math_id": 28,
"text": "[z^n] H (g(z)) = \\frac{1}{n} [w^{n-1}] (H' (w) \\phi(w)^n)"
},
{
"math_id": 29,
"text": " [z^n] H (g(z)) = [w^n] H(w) \\phi(w)^{n-1} (\\phi(w) - w \\phi'(w)), "
},
{
"math_id": 30,
"text": "W(z)"
},
{
"math_id": 31,
"text": " W(z) e^{W(z)} = z."
},
{
"math_id": 32,
"text": "z=0."
},
{
"math_id": 33,
"text": "f(w) = we^w"
},
{
"math_id": 34,
"text": "a = 0."
},
{
"math_id": 35,
"text": "\\frac{d^n}{dx^n} e^{\\alpha x} = \\alpha^n e^{\\alpha x},"
},
{
"math_id": 36,
"text": "\\begin{align}\n W(z) &= \\sum_{n=1}^{\\infty} \\left[\\lim_{w \\to 0} \\frac{d^{n-1}}{dw^{n-1}} e^{-nw} \\right] \\frac{z^n}{n!} \\\\\n {} &= \\sum_{n=1}^{\\infty} (-n)^{n-1} \\frac{z^n}{n!} \\\\\n {} &= z-z^2+\\frac{3}{2}z^3-\\frac{8}{3}z^4+O(z^5).\n \\end{align}"
},
{
"math_id": 37,
"text": "e^{-1}"
},
{
"math_id": 38,
"text": "|\\ln(z)-1|<{4+\\pi^2}"
},
{
"math_id": 39,
"text": "2.58\\ldots \\cdot 10^{-6} < z < 2.869\\ldots \\cdot 10^6"
},
{
"math_id": 40,
"text": "f(z) = W(e^z) - 1"
},
{
"math_id": 41,
"text": "1 + f(z) + \\ln (1 + f(z)) = z."
},
{
"math_id": 42,
"text": "z + \\ln (1 + z)"
},
{
"math_id": 43,
"text": "f(z+1) = W(e^{z+1})-1\\text{:}"
},
{
"math_id": 44,
"text": "W(e^{1+z}) = 1 + \\frac{z}{2} + \\frac{z^2}{16} - \\frac{z^3}{192} - \\frac{z^4}{3072} + \\frac{13 z^5}{61440} - O(z^6)."
},
{
"math_id": 45,
"text": "W(x)"
},
{
"math_id": 46,
"text": "\\ln x - 1"
},
{
"math_id": 47,
"text": "W(1) \\approx 0.567143."
},
{
"math_id": 48,
"text": "\\mathcal{B}"
},
{
"math_id": 49,
"text": "B_n"
},
{
"math_id": 50,
"text": "n"
},
{
"math_id": 51,
"text": "\\textstyle B(z) = \\sum_{n=0}^\\infty B_n z^n\\text{:}"
},
{
"math_id": 52,
"text": "B(z) = 1 + z B(z)^2."
},
{
"math_id": 53,
"text": "C(z) = B(z) - 1"
},
{
"math_id": 54,
"text": "C(z) = z (C(z)+1)^2."
},
{
"math_id": 55,
"text": "\\phi(w) = (w+1)^2"
},
{
"math_id": 56,
"text": " B_n = [z^n] C(z) = \\frac{1}{n} [w^{n-1}] (w+1)^{2n} = \\frac{1}{n} \\binom{2n}{n-1} = \\frac{1}{n+1} \\binom{2n}{n}."
}
] | https://en.wikipedia.org/wiki?curid=94158 |
9416036 | Hall's universal group | In algebra, Hall's universal group is
a countable locally finite group, say "U", which is uniquely
characterized by the following properties: every finite group admits a monomorphism into "U", and any two such monomorphisms of the same finite group are conjugate by an inner automorphism of "U".
It was defined by Philip Hall in 1959, and has the universal property that "all countable locally finite groups" embed into it.
Hall's universal group is the Fraïssé limit of the class of all finite groups.
Construction.
Take any group formula_0 of order formula_1.
Denote by formula_2 the group formula_3
of permutations of elements of formula_0, by
formula_4 the group
formula_5
and so on. Since a group acts faithfully on itself by permutations
formula_6
according to Cayley's theorem, this gives a chain of monomorphisms
formula_7
A direct limit (that is, a union) of all formula_8
is Hall's universal group "U".
Indeed, "U" then contains a symmetric group of arbitrarily large order, and any
group admits a monomorphism to a group of permutations, as explained above.
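The first step of this chain can be made concrete with a few lines of code. The sketch below (plain Python, used here only for illustration) implements the Cayley embedding of a finite group into the symmetric group on its own elements, taking the cyclic group Z/m under addition as an arbitrarily chosen example, and checks that the map respects the group operation.

    # One step of the chain: embed a finite group into the symmetric group on
    # its own elements via left multiplication (Cayley's theorem).  The cyclic
    # group Z/m under addition is used only as an illustrative example.

    def cayley_embedding(m):
        elements = list(range(m))
        # g is sent to the permutation x -> (g + x) mod m, recorded as a tuple
        # whose entry at position x is the image of x
        return {g: tuple((g + x) % m for x in elements) for g in elements}

    emb = cayley_embedding(3)
    for g, perm in emb.items():
        print(g, "->", perm)

    # Composing the permutations attached to g and h gives the permutation
    # attached to g + h, so the map is a homomorphism; it is injective since
    # g can be read off as the image of 0.
    g, h = 1, 2
    composed = tuple(emb[g][x] for x in emb[h])   # apply h's permutation, then g's
    print(composed == emb[(g + h) % 3])           # True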
Let "G" be a finite group admitting two embeddings to "U".
Since "U" is a direct limit and "G" is finite, the
images of these two embeddings belong to
formula_9. The group
formula_10 acts on formula_11
by permutations, and conjugates all possible embeddings
formula_12. | [
{
"math_id": 0,
"text": " \\Gamma_0 "
},
{
"math_id": 1,
"text": " \\geq 3 "
},
{
"math_id": 2,
"text": " \\Gamma_1 "
},
{
"math_id": 3,
"text": " S_{\\Gamma_0}"
},
{
"math_id": 4,
"text": "\\Gamma_2 "
},
{
"math_id": 5,
"text": " S_{\\Gamma_1}= S_{S_{\\Gamma_0}} \\, "
},
{
"math_id": 6,
"text": " x\\mapsto gx \\, "
},
{
"math_id": 7,
"text": "\\Gamma_0 \\hookrightarrow \\Gamma_1 \\hookrightarrow \\Gamma_2 \\hookrightarrow \\cdots . \\, "
},
{
"math_id": 8,
"text": " \\Gamma_i"
},
{
"math_id": 9,
"text": "\\Gamma_i \\subset U "
},
{
"math_id": 10,
"text": "\\Gamma_{i+1}= S_{\\Gamma_i}"
},
{
"math_id": 11,
"text": "\\Gamma_i"
},
{
"math_id": 12,
"text": "G \\hookrightarrow \\Gamma_i"
}
] | https://en.wikipedia.org/wiki?curid=9416036 |
9417 | Euclidean geometry | Mathematical model of the physical space
Euclidean geometry is a mathematical system attributed to ancient Greek mathematician Euclid, which he described in his textbook on geometry, "Elements". Euclid's approach consists in assuming a small set of intuitively appealing axioms (postulates) and deducing many other propositions (theorems) from these. Although many of Euclid's results had been stated earlier, Euclid was the first to organize these propositions into a logical system in which each result is "proved" from axioms and previously proved theorems.
The "Elements" begins with plane geometry, still taught in secondary school (high school) as the first axiomatic system and the first examples of mathematical proofs. It goes on to the solid geometry of three dimensions. Much of the "Elements" states results of what are now called algebra and number theory, explained in geometrical language.
For more than two thousand years, the adjective "Euclidean" was unnecessary because
Euclid's axioms seemed so intuitively obvious (with the possible exception of the parallel postulate) that theorems proved from them were deemed absolutely true, and thus no other sorts of geometry were possible. Today, however, many other self-consistent non-Euclidean geometries are known, the first ones having been discovered in the early 19th century. An implication of Albert Einstein's theory of general relativity is that physical space itself is not Euclidean, and Euclidean space is a good approximation for it only over short distances (relative to the strength of the gravitational field).
Euclidean geometry is an example of synthetic geometry, in that it proceeds logically from axioms describing basic properties of geometric objects such as points and lines, to propositions about those objects. This is in contrast to analytic geometry, introduced almost 2,000 years later by René Descartes, which uses coordinates to express geometric properties by means of algebraic formulas.
The "Elements".
The "Elements" is mainly a systematization of earlier knowledge of geometry. Its improvement over earlier treatments was rapidly recognized, with the result that there was little interest in preserving the earlier ones, and they are now nearly all lost.
There are 13 books in the "Elements":
Books I–IV and VI discuss plane geometry. Many results about plane figures are proved, for example, "In any triangle, two angles taken together in any manner are less than two right angles." (Book I proposition 17) and the Pythagorean theorem "In right-angled triangles the square on the side subtending the right angle is equal to the squares on the sides containing the right angle." (Book I, proposition 47)
Books V and VII–X deal with number theory, with numbers treated geometrically as lengths of line segments or areas of surface regions. Notions such as prime numbers and rational and irrational numbers are introduced. It is proved that there are infinitely many prime numbers.
Books XI–XIII concern solid geometry. A typical result is the 1:3 ratio between the volume of a cone and a cylinder with the same height and base. The platonic solids are constructed.
Axioms.
Euclidean geometry is an axiomatic system, in which all theorems ("true statements") are derived from a small number of simple axioms. Until the advent of non-Euclidean geometry, these axioms were considered to be obviously true in the physical world, so that all the theorems would be equally true. However, Euclid's reasoning from assumptions to conclusions remains valid independently from the physical reality.
Near the beginning of the first book of the "Elements", Euclid gives five postulates (axioms) for plane geometry, stated in terms of constructions (as translated by Thomas Heath):
Let the following be postulated:
1. To draw a straight line from any point to any point.
2. To produce (extend) a finite straight line continuously in a straight line.
3. To describe a circle with any centre and distance (radius).
4. That all right angles are equal to one another.
5. [The parallel postulate]: That, if a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which the angles are less than two right angles.
Although Euclid explicitly only asserts the existence of the constructed objects, in his reasoning he also implicitly assumes them to be unique.
The "Elements" also include the following five "common notions":
Modern scholars agree that Euclid's postulates do not provide the complete logical foundation that Euclid required for his presentation. Modern treatments use more extensive and complete sets of axioms.
Parallel postulate.
To the ancients, the parallel postulate seemed less obvious than the others. They aspired to create a system of absolutely certain propositions, and to them, it seemed as if the parallel line postulate required proof from simpler statements. It is now known that such a proof is impossible since one can construct consistent systems of geometry (obeying the other axioms) in which the parallel postulate is true, and others in which it is false. Euclid himself seems to have considered it as being qualitatively different from the others, as evidenced by the organization of the "Elements": his first 28 propositions are those that can be proved without it.
Many alternative axioms can be formulated which are logically equivalent to the parallel postulate (in the context of the other axioms). For example, Playfair's axiom states:
In a plane, through a point not on a given straight line, at most one line can be drawn that never meets the given line.
The "at most" clause is all that is needed since it can be proved from the remaining axioms that at least one parallel line exists.
Methods of proof.
Euclidean Geometry is "constructive". Postulates 1, 2, 3, and 5 assert the existence and uniqueness of certain geometric figures, and these assertions are of a constructive nature: that is, we are not only told that certain things exist, but are also given methods for creating them with no more than a compass and an unmarked straightedge. In this sense, Euclidean geometry is more concrete than many modern axiomatic systems such as set theory, which often assert the existence of objects without saying how to construct them, or even assert the existence of objects that cannot be constructed within the theory. Strictly speaking, the lines on paper are "models" of the objects defined within the formal system, rather than instances of those objects. For example, a Euclidean straight line has no width, but any real drawn line will have. Though nearly all modern mathematicians consider nonconstructive proofs just as sound as constructive ones, they are often considered less elegant, intuitive, or practically useful. Euclid's constructive proofs often supplanted fallacious nonconstructive ones, e.g. some Pythagorean proofs that assumed all numbers are rational, usually requiring a statement such as "Find the greatest common measure of ..."
Euclid often used proof by contradiction.
Notation and terminology.
Naming of points and figures.
Points are customarily named using capital letters of the alphabet. Other figures, such as lines, triangles, or circles, are named by listing a sufficient number of points to pick them out unambiguously from the relevant figure, e.g., triangle ABC would typically be a triangle with vertices at points A, B, and C.
Complementary and supplementary angles.
Angles whose sum is a right angle are called complementary. Complementary angles are formed when a ray shares the same vertex and is pointed in a direction that is in between the two original rays that form the right angle. The number of rays in between the two original rays is infinite.
Angles whose sum is a straight angle are supplementary. Supplementary angles are formed when a ray shares the same vertex and is pointed in a direction that is in between the two original rays that form the straight angle (180 degree angle). The number of rays in between the two original rays is infinite.
Modern versions of Euclid's notation.
In modern terminology, angles would normally be measured in degrees or radians.
Modern school textbooks often define separate figures called lines (infinite), rays (semi-infinite), and line segments (of finite length). Euclid, rather than discussing a ray as an object that extends to infinity in one direction, would normally use locutions such as "if the line is extended to a sufficient length", although he occasionally referred to "infinite lines". A "line" for Euclid could be either straight or curved, and he used the more specific term "straight line" when necessary.
Some important or well known results.
Pons asinorum.
The pons asinorum ("bridge of asses") states that "in isosceles triangles the angles at the base equal one another, and, if the equal straight lines are produced further, then the angles under the base equal one another". Its name may be attributed to its frequent role as the first real test in the "Elements" of the intelligence of the reader and as a bridge to the harder propositions that followed. It might also be so named because of the geometrical figure's resemblance to a steep bridge that only a sure-footed donkey could cross.
Congruence of triangles.
Triangles are congruent if they have all three sides equal (SSS), two sides and the angle between them equal (SAS), or two angles and a side equal (ASA) (Book I, propositions 4, 8, and 26). Triangles with three equal angles (AAA) are similar, but not necessarily congruent. Also, triangles with two equal sides and an adjacent angle are not necessarily equal or congruent.
Triangle angle sum.
The sum of the angles of a triangle is equal to a straight angle (180 degrees). This causes an equilateral triangle to have three interior angles of 60 degrees. Also, it causes every triangle to have at least two acute angles and up to one obtuse or right angle.
Pythagorean theorem.
The celebrated Pythagorean theorem (book I, proposition 47) states that in any right triangle, the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares whose sides are the two legs (the two sides that meet at a right angle).
Thales' theorem.
Thales' theorem, named after Thales of Miletus states that if A, B, and C are points on a circle where the line AC is a diameter of the circle, then the angle ABC is a right angle. Cantor supposed that Thales proved his theorem by means of Euclid Book I, Prop. 32 after the manner of Euclid Book III, Prop. 31.
Scaling of area and volume.
In modern terminology, the area of a plane figure is proportional to the square of any of its linear dimensions, formula_0, and the volume of a solid to the cube, formula_1. Euclid proved these results in various special cases such as the area of a circle and the volume of a parallelepipedal solid. Euclid determined some, but not all, of the relevant constants of proportionality. For instance, it was his successor Archimedes who proved that a sphere has 2/3 the volume of the circumscribing cylinder.
System of measurement and arithmetic.
Euclidean geometry has two fundamental types of measurements: angle and distance. The angle scale is absolute, and Euclid uses the right angle as his basic unit, so that, for example, a 45-degree angle would be referred to as half of a right angle. The distance scale is relative; one arbitrarily picks a line segment with a certain nonzero length as the unit, and other distances are expressed in relation to it. Addition of distances is represented by a construction in which one line segment is copied onto the end of another line segment to extend its length, and similarly for subtraction.
Measurements of area and volume are derived from distances. For example, a rectangle with a width of 3 and a length of 4 has an area that represents the product, 12. Because this geometrical interpretation of multiplication was limited to three dimensions, there was no direct way of interpreting the product of four or more numbers, and Euclid avoided such products, although they are implied, for example in the proof of book IX, proposition 20.
Euclid refers to a pair of lines, or a pair of planar or solid figures, as "equal" (ἴσος) if their lengths, areas, or volumes are equal respectively, and similarly for angles. The stronger term "congruent" refers to the idea that an entire figure is the same size and shape as another figure. Alternatively, two figures are congruent if one can be moved on top of the other so that it matches up with it exactly. (Flipping it over is allowed.) Thus, for example, a 2x6 rectangle and a 3x4 rectangle are equal but not congruent, and the letter R is congruent to its mirror image. Figures that would be congruent except for their differing sizes are referred to as similar. Corresponding angles in a pair of similar shapes are equal and corresponding sides are in proportion to each other.
Other general applications.
Because of Euclidean geometry's fundamental status in mathematics, it is impractical to give more than a representative sampling of applications here.
As suggested by the etymology of the word, one of the earliest reasons for interest in and also one of the most common current uses of geometry is surveying. In addition it has been used in classical mechanics and the cognitive and computational approaches to visual perception of objects. Certain practical results from Euclidean geometry (such as the right-angle property of the 3-4-5 triangle) were used long before they were proved formally. The fundamental types of measurements in Euclidean geometry are distances and angles, both of which can be measured directly by a surveyor. Historically, distances were often measured by chains, such as Gunter's chain, and angles using graduated circles and, later, the theodolite.
An application of Euclidean solid geometry is the determination of packing arrangements, such as the problem of finding the most efficient packing of spheres in n dimensions. This problem has applications in error detection and correction.
Geometry is used extensively in architecture.
Geometry can be used to design origami. Some classical construction problems of geometry are impossible using compass and straightedge, but can be solved using origami.
Later history.
Archimedes and Apollonius.
Archimedes (c. 287 BCE – c. 212 BCE), a colorful figure about whom many historical anecdotes are recorded, is remembered along with Euclid as one of the greatest of ancient mathematicians. Although the foundations of his work were put in place by Euclid, his work, unlike Euclid's, is believed to have been entirely original. He proved equations for the volumes and areas of various figures in two and three dimensions, and enunciated the Archimedean property of finite numbers.
Apollonius of Perga (c. 240 BCE – c. 190 BCE) is mainly known for his investigation of conic sections.
17th century: Descartes.
René Descartes (1596–1650) developed analytic geometry, an alternative method for formalizing geometry which focused on turning geometry into algebra.
In this approach, a point on a plane is represented by its Cartesian ("x", "y") coordinates, a line is represented by its equation, and so on.
In Euclid's original approach, the Pythagorean theorem follows from Euclid's axioms. In the Cartesian approach, the axioms are the axioms of algebra, and the equation expressing the Pythagorean theorem is then a definition of one of the terms in Euclid's axioms, which are now considered theorems.
The equation
formula_2
defining the distance between two points "P" = ("p_x", "p_y") and "Q" = ("q_x", "q_y") is then known as the "Euclidean metric", and other metrics define non-Euclidean geometries.
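A short computation with this metric (sketched in Python, an illustrative choice only) recovers, for example, the length of the hypotenuse of the 3-4-5 right triangle mentioned earlier:

    import math

    def euclidean_distance(p, q):
        # |PQ| = sqrt((p_x - q_x)^2 + (p_y - q_y)^2)
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # the hypotenuse of the 3-4-5 right triangle, from (0, 0) to (3, 4)
    print(euclidean_distance((0, 0), (3, 4)))   # 5.0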
In terms of analytic geometry, the restriction of classical geometry to compass and straightedge constructions means a restriction to first- and second-order equations, e.g., "y" = 2"x" + 1 (a line), or "x"^2 + "y"^2 = 7 (a circle).
Also in the 17th century, Girard Desargues, motivated by the theory of perspective, introduced the concept of idealized points, lines, and planes at infinity. The result can be considered as a type of generalized geometry, projective geometry, but it can also be used to produce proofs in ordinary Euclidean geometry in which the number of special cases is reduced.
18th century.
Geometers of the 18th century struggled to define the boundaries of the Euclidean system. Many tried in vain to prove the fifth postulate from the first four. By 1763, at least 28 different proofs had been published, but all were found incorrect.
Leading up to this period, geometers also tried to determine what constructions could be accomplished in Euclidean geometry. For example, the problem of trisecting an angle with a compass and straightedge is one that naturally occurs within the theory, since the axioms refer to constructive operations that can be carried out with those tools. However, centuries of efforts failed to find a solution to this problem, until Pierre Wantzel published a proof in 1837 that such a construction was impossible. Other constructions that were proved impossible include doubling the cube and squaring the circle. In the case of doubling the cube, the impossibility of the construction originates from the fact that the compass and straightedge method involves equations whose order is an integral power of two, while doubling a cube requires the solution of a third-order equation.
Euler discussed a generalization of Euclidean geometry called affine geometry, which retains the fifth postulate unmodified while weakening postulates three and four in a way that eliminates the notions of angle (whence right triangles become meaningless) and of equality of length of line segments in general (whence circles become meaningless) while retaining the notions of parallelism as an equivalence relation between lines, and equality of length of parallel line segments (so line segments continue to have a midpoint).
19th century.
In the early 19th century, Carnot and Möbius systematically developed the use of signed angles and line segments as a way of simplifying and unifying results.
Higher dimensions.
In the 1840s William Rowan Hamilton developed the quaternions, and John T. Graves and Arthur Cayley the octonions. These are normed algebras which extend the complex numbers. Later it was understood that the quaternions are also a Euclidean geometric system with four real Cartesian coordinates. Cayley used quaternions to study rotations in 4-dimensional Euclidean space.
At mid-century Ludwig Schläfli developed the general concept of Euclidean space, extending Euclidean geometry to higher dimensions. He defined "polyschemes", later called polytopes, which are the higher-dimensional analogues of polygons and polyhedra. He developed their theory and discovered all the regular polytopes, i.e. the formula_3-dimensional analogues of regular polygons and Platonic solids. He found there are six regular convex polytopes in dimension four, and three in all higher dimensions.
!style="vertical-align:top;text-align:right;"|Short radius
!style="vertical-align:top;text-align:right;"|Area
!style="vertical-align:top;text-align:right;"|Volume
!style="vertical-align:top;text-align:right;"|4-Content
Schläfli performed this work in relative obscurity and it was published in full only posthumously in 1901. It had little influence until it was rediscovered and fully documented in 1948 by H.S.M. Coxeter.
In 1878 William Kingdon Clifford introduced what is now termed geometric algebra, unifying Hamilton's quaternions with Hermann Grassmann's algebra and revealing the geometric nature of these systems, especially in four dimensions. The operations of geometric algebra have the effect of mirroring, rotating, translating, and mapping the geometric objects that are being modeled to new positions. The Clifford torus on the surface of the 3-sphere is the simplest and most symmetric flat embedding of the Cartesian product of two circles (in the same sense that the surface of a cylinder is "flat").
Non-Euclidean geometry.
The century's most influential development in geometry occurred when, around 1830, János Bolyai and Nikolai Ivanovich Lobachevsky separately published work on non-Euclidean geometry, in which the parallel postulate is not valid. Since non-Euclidean geometry is provably relatively consistent with Euclidean geometry, the parallel postulate cannot be proved from the other postulates.
In the 19th century, it was also realized that Euclid's ten axioms and common notions do not suffice to prove all of the theorems stated in the "Elements". For example, Euclid assumed implicitly that any line contains at least two points, but this assumption cannot be proved from the other axioms, and therefore must be an axiom itself. The very first geometric proof in the "Elements" is that any line segment is part of a triangle; Euclid constructs this in the usual way, by drawing circles around both endpoints and taking their intersection as the third vertex. His axioms, however, do not guarantee that the circles actually intersect, because they do not assert the geometrical property of continuity, which in Cartesian terms is equivalent to the completeness property of the real numbers. Starting with Moritz Pasch in 1882, many improved axiomatic systems for geometry have been proposed, the best known being those of Hilbert, George Birkhoff, and Tarski.
20th century and relativity.
Einstein's theory of special relativity involves a four-dimensional space-time, the Minkowski space, which is non-Euclidean. This shows that non-Euclidean geometries, which had been introduced a few years earlier for showing that the parallel postulate cannot be proved, are also useful for describing the physical world.
However, the three-dimensional "space part" of the Minkowski space remains the space of Euclidean geometry. This is not the case with general relativity, for which the geometry of the space part of space-time is not Euclidean geometry. For example, if a triangle is constructed out of three rays of light, then in general the interior angles do not add up to 180 degrees due to gravity. A relatively weak gravitational field, such as the Earth's or the Sun's, is represented by a metric that is approximately, but not exactly, Euclidean. Until the 20th century, there was no technology capable of detecting these deviations in rays of light from Euclidean geometry, but Einstein predicted that such deviations would exist. They were later verified by observations such as the slight bending of starlight by the Sun during a solar eclipse in 1919, and such considerations are now an integral part of the software that runs the GPS system.
As a description of the structure of space.
Euclid believed that his axioms were self-evident statements about physical reality. Euclid's proofs depend upon assumptions perhaps not obvious in Euclid's fundamental axioms, in particular that certain movements of figures do not change their geometrical properties such as the lengths of sides and interior angles, the so-called "Euclidean motions", which include translations, reflections and rotations of figures. Taken as a physical description of space, postulate 2 (extending a line) asserts that space does not have holes or boundaries; postulate 4 (equality of right angles) says that space is isotropic and figures may be moved to any location while maintaining congruence; and postulate 5 (the parallel postulate) that space is flat (has no intrinsic curvature).
As discussed above, Albert Einstein's theory of relativity significantly modifies this view.
The ambiguous character of the axioms as originally formulated by Euclid makes it possible for different commentators to disagree about some of their other implications for the structure of space, such as whether or not it is infinite (see below) and what its topology is. Modern, more rigorous reformulations of the system typically aim for a cleaner separation of these issues. Interpreting Euclid's axioms in the spirit of this more modern approach, axioms 1–4 are consistent with either infinite or finite space (as in elliptic geometry), and all five axioms are consistent with a variety of topologies (e.g., a plane, a cylinder, or a torus for two-dimensional Euclidean geometry).
Treatment of infinity.
Infinite objects.
Euclid sometimes distinguished explicitly between "finite lines" (e.g., Postulate 2) and "infinite lines" (book I, proposition 12). However, he typically did not make such distinctions unless they were necessary. The postulates do not explicitly refer to infinite lines, although for example some commentators interpret postulate 3, existence of a circle with any radius, as implying that space is infinite.
The notion of infinitesimal quantities had previously been discussed extensively by the Eleatic School, but nobody had been able to put them on a firm logical basis, with paradoxes such as Zeno's paradox occurring that had not been resolved to universal satisfaction. Euclid used the method of exhaustion rather than infinitesimals.
Later ancient commentators, such as Proclus (410–485 CE), treated many questions about infinity as issues demanding proof and, e.g., Proclus claimed to prove the infinite divisibility of a line, based on a proof by contradiction in which he considered the cases of even and odd numbers of points constituting it.
At the turn of the 20th century, Otto Stolz, Paul du Bois-Reymond, Giuseppe Veronese, and others produced controversial work on non-Archimedean models of Euclidean geometry, in which the distance between two points may be infinite or infinitesimal, in the Newton–Leibniz sense. Fifty years later, Abraham Robinson provided a rigorous logical foundation for Veronese's work.
Infinite processes.
Ancient geometers may have considered the parallel postulate – that two parallel lines do not ever intersect – less certain than the others because it makes a statement about infinitely remote regions of space, and so cannot be physically verified.
The modern formulation of proof by induction was not developed until the 17th century, but some later commentators consider it implicit in some of Euclid's proofs, e.g., the proof of the infinitude of primes.
Supposed paradoxes involving infinite series, such as Zeno's paradox, predated Euclid. Euclid avoided such discussions, giving, for example, the expression for the partial sums of the geometric series in IX.35 without commenting on the possibility of letting the number of terms become infinite.
Logical basis.
Classical logic.
Euclid frequently used the method of proof by contradiction, and therefore the traditional presentation of Euclidean geometry assumes classical logic, in which every proposition is either true or false, i.e., for any proposition P, the proposition "P or not P" is automatically true.
Modern standards of rigor.
Placing Euclidean geometry on a solid axiomatic basis was a preoccupation of mathematicians for centuries. The role of primitive notions, or undefined concepts, was clearly put forward by Alessandro Padoa of the Peano delegation at the 1900 Paris conference: ...when we begin to formulate the theory, we can imagine that the undefined symbols are "completely devoid of meaning" and that the unproved propositions are simply "conditions" imposed upon the undefined symbols.
Then, the "system of ideas" that we have initially chosen is simply "one interpretation" of the undefined symbols; but..this interpretation can be ignored by the reader, who is free to replace it in his mind by "another interpretation".. that satisfies the conditions...
"Logical" questions thus become completely independent of "empirical" or "psychological" questions...
The system of undefined symbols can then be regarded as the "abstraction" obtained from the "specialized theories" that result when...the system of undefined symbols is successively replaced by each of the interpretations...
That is, mathematics is context-independent knowledge within a hierarchical framework. As said by Bertrand Russell:
If our hypothesis is about "anything", and not about some one or more particular things, then our deductions constitute mathematics. Thus, mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.
Such foundational approaches range between foundationalism and formalism.
Axiomatic formulations.
Geometry is the science of correct reasoning on incorrect figures.
{
"math_id": 0,
"text": "A \\propto L^2"
},
{
"math_id": 1,
"text": "V \\propto L^3"
},
{
"math_id": 2,
"text": "|PQ|=\\sqrt{(p_x-q_x)^2+(p_y-q_y)^2} \\, "
},
{
"math_id": 3,
"text": "n"
}
] | https://en.wikipedia.org/wiki?curid=9417 |
9419642 | Implication graph | Directed graph representing a Boolean expression
In mathematical logic and graph theory, an implication graph is a skew-symmetric, directed graph "G" = ("V", "E") composed of vertex set V and directed edge set E. Each vertex in V represents the truth status of a Boolean literal, and each directed edge from vertex u to vertex v represents the material implication "If the literal u is true then the literal v is also true". Implication graphs were originally used for analyzing complex Boolean expressions.
Applications.
A 2-satisfiability instance in conjunctive normal form can be transformed into an implication graph by replacing each of its disjunctions by a pair of implications. For example, the statement formula_0 can be rewritten as the pair formula_1. An instance is satisfiable if and only if no literal and its negation belong to the same strongly connected component of its implication graph; this characterization can be used to solve 2-satisfiability instances in linear time.
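To make this concrete, the sketch below (plain Python; the literal encoding and the use of Kosaraju's algorithm for the strongly connected components are choices of this illustration, not part of any particular published solver) builds the implication graph of a 2-satisfiability instance and reports satisfiability by checking that no variable shares a component with its negation.

    # Sketch: decide 2-SAT by building the implication graph and checking that
    # no variable shares a strongly connected component with its negation.
    # Variable i is encoded as literal 2*i (positive) and 2*i + 1 (negated).

    def two_sat(num_vars, clauses):
        n = 2 * num_vars
        graph = [[] for _ in range(n)]
        rev = [[] for _ in range(n)]

        def lit(x):
            # x > 0 means variable x-1, x < 0 means its negation
            v = abs(x) - 1
            return 2 * v if x > 0 else 2 * v + 1

        for a, b in clauses:                 # clause (a or b)
            u, v = lit(a), lit(b)
            graph[u ^ 1].append(v)           # not a -> b
            graph[v ^ 1].append(u)           # not b -> a
            rev[v].append(u ^ 1)
            rev[u].append(v ^ 1)

        # Kosaraju's algorithm, pass 1: record vertices in order of finish time
        order, seen = [], [False] * n
        def dfs1(u):
            stack = [(u, 0)]
            seen[u] = True
            while stack:
                node, i = stack.pop()
                if i < len(graph[node]):
                    stack.append((node, i + 1))
                    w = graph[node][i]
                    if not seen[w]:
                        seen[w] = True
                        stack.append((w, 0))
                else:
                    order.append(node)

        for u in range(n):
            if not seen[u]:
                dfs1(u)

        # Pass 2: label components on the reverse graph in decreasing finish time
        comp = [-1] * n
        c = 0
        for u in reversed(order):
            if comp[u] == -1:
                stack = [u]
                comp[u] = c
                while stack:
                    node = stack.pop()
                    for w in rev[node]:
                        if comp[w] == -1:
                            comp[w] = c
                            stack.append(w)
                c += 1

        # satisfiable iff no literal is in the same SCC as its negation
        return all(comp[2 * v] != comp[2 * v + 1] for v in range(num_vars))

    # (x0 or x1) and (not x0 or x1) and (not x1 or x0): satisfiable (x0 = x1 = True)
    print(two_sat(2, [(1, 2), (-1, 2), (-2, 1)]))      # True
    # (x0 or x0) and (not x0 or not x0): unsatisfiable
    print(two_sat(1, [(1, 1), (-1, -1)]))              # False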
In CDCL SAT-solvers, unit propagation can be naturally associated with an implication graph that captures all possible ways of deriving all implied literals from decision literals, which is then used for clause learning. | [
{
"math_id": 0,
"text": "(x_0\\lor x_1)"
},
{
"math_id": 1,
"text": "(\\neg x_0 \\rightarrow x_1), (\\neg x_1 \\rightarrow x_0)"
}
] | https://en.wikipedia.org/wiki?curid=9419642 |
9421904 | Stream thrust averaging | Process to convert 3D flow into 1D
In fluid dynamics, stream thrust averaging is a process used to convert three-dimensional flow through a duct into one-dimensional uniform flow. It makes the assumption that the flow is mixed adiabatically and without friction. However, due to the mixing process, there is a net increase in the entropy of the system. Although there is an increase in entropy, the stream thrust averaged values are more representative of the flow than a simple average, since a simple average would violate the second law of thermodynamics.
Equations for a perfect gas.
Stream thrust:
formula_0
Mass flow:
formula_1
Stagnation enthalpy:
formula_2
formula_3
Solutions.
Solving for formula_4 yields two solutions. They must both be analyzed to determine which is the physical solution. One will usually be a subsonic root and the other a supersonic root. If it is not clear which value of velocity is correct, the second law of thermodynamics may be applied.
formula_5
formula_6
formula_7
Second law of thermodynamics:
formula_8
The reference values formula_9 and formula_10 are unknown and may be dropped from the formulation; the exact value of the entropy change is not needed, only that it is positive.
formula_11
A non-physical solution for the stream thrust averaged velocity yields a negative entropy change. Another method of determining the proper solution is to take a simple average of the velocity and determine which of the two roots is closer to it.
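A minimal numerical sketch of this procedure is given below (in Python; the function name, the air-like values of R and Cp, and the illustrative inputs are assumptions of this example, not data from any particular flow). It solves the quadratic for the two candidate velocities and evaluates the corresponding averaged density, pressure, temperature and Mach number for each root.

    import math

    def stream_thrust_average(F, mdot, H, A, R=287.0, Cp=1004.5):
        # Quadratic from above:  U^2 (1 - R/(2 Cp)) - U (F/mdot) + H R / Cp = 0
        a = 1.0 - R / (2.0 * Cp)
        b = -F / mdot
        c = H * R / Cp
        disc = math.sqrt(b * b - 4.0 * a * c)
        states = []
        for U in ((-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)):
            rho = mdot / (U * A)            # averaged density
            p = F / A - rho * U * U         # averaged static pressure
            T = p / (rho * R)               # perfect-gas temperature
            gamma = Cp / (Cp - R)
            mach = U / math.sqrt(gamma * R * T) if T > 0 else float("nan")
            states.append({"U": U, "rho": rho, "p": p, "T": T, "M": mach})
        return states

    # Illustrative inputs only (SI units): one root comes out subsonic, the
    # other supersonic; the physical root is chosen as described above.
    for state in stream_thrust_average(F=1.653e5, mdot=100.0, H=1.93e6, A=0.2):
        print({k: round(v, 3) for k, v in state.items()})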
{
"math_id": 0,
"text": " F = \\int \\left(\\rho \\mathbf{V} \\cdot d \\mathbf{A} \\right) \\mathbf{V} \\cdot \\mathbf{f} +\\int pd \\mathbf{A} \\cdot \\mathbf{f}."
},
{
"math_id": 1,
"text": " \\dot m = \\int \\rho \\mathbf{V} \\cdot d \\mathbf{A}."
},
{
"math_id": 2,
"text": " H = {1 \\over \\dot m} \\int \\left({\\rho \\mathbf{V} \\cdot d \\mathbf{A}} \\right) \\left( h+ {|\\mathbf{V}|^2 \\over 2} \\right),"
},
{
"math_id": 3,
"text": " \\overline{U}^2 \\left({1- {R \\over 2C_p}}\\right) -\\overline{U}{F\\over \\dot m} +{HR \\over C_p}=0."
},
{
"math_id": 4,
"text": " \\overline{U}"
},
{
"math_id": 5,
"text": " \\overline{\\rho} = {\\dot m \\over \\overline{U}A},"
},
{
"math_id": 6,
"text": " \\overline{p} = {F \\over A} -{\\overline{\\rho} \\overline{U}^2},"
},
{
"math_id": 7,
"text": " \\overline{h} = {\\overline{p} C_p \\over \\overline{\\rho} R}."
},
{
"math_id": 8,
"text": " \\nabla s = C_p \\ln({\\overline{T}\\over T_1}) +R \\ln({\\overline{p} \\over p_1})."
},
{
"math_id": 9,
"text": " T_1"
},
{
"math_id": 10,
"text": " p_1"
},
{
"math_id": 11,
"text": " \\nabla s = C_p \\ln(\\overline{T}) +R \\ln(\\overline{p})."
}
] | https://en.wikipedia.org/wiki?curid=9421904 |
9426 | Electromagnetic radiation | Physical model of propagating energy
In physics, electromagnetic radiation (EMR) consists of waves of the electromagnetic (EM) field, which propagate through space and carry momentum and electromagnetic radiant energy.
Classically, electromagnetic radiation consists of electromagnetic waves, which are synchronized oscillations of electric and magnetic fields. In a vacuum, electromagnetic waves travel at the speed of light, commonly denoted "c". There, depending on the frequency of oscillation, different wavelengths of electromagnetic spectrum are produced. In homogeneous, isotropic media, the oscillations of the two fields are on average perpendicular to each other and perpendicular to the direction of energy and wave propagation, forming a transverse wave.
The position of an electromagnetic wave within the electromagnetic spectrum can be characterized by either its frequency of oscillation or its wavelength. Electromagnetic waves of different frequency are called by different names since they have different sources and effects on matter. In order of increasing frequency and decreasing wavelength, the electromagnetic spectrum includes: radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays.
Electromagnetic waves are emitted by electrically charged particles undergoing acceleration, and these waves can subsequently interact with other charged particles, exerting force on them. EM waves carry energy, momentum, and angular momentum away from their source particle and can impart those quantities to matter with which they interact. Electromagnetic radiation is associated with those EM waves that are free to propagate themselves ("radiate") without the continuing influence of the moving charges that produced them, because they have achieved sufficient distance from those charges. Thus, EMR is sometimes referred to as the far field, while the near field refers to EM fields near the charges and current that directly produced them, specifically electromagnetic induction and electrostatic induction phenomena.
In quantum mechanics, an alternate way of viewing EMR is that it consists of photons, uncharged elementary particles with zero rest mass which are the quanta of the electromagnetic field, responsible for all electromagnetic interactions. Quantum electrodynamics is the theory of how EMR interacts with matter on an atomic level. Quantum effects provide additional sources of EMR, such as the transition of electrons to lower energy levels in an atom and black-body radiation. The energy of an individual photon is quantized and proportional to frequency according to Planck's equation "E" = "hf", where "E" is the energy per photon, "f" is the frequency of the photon, and "h" is the Planck constant. Thus, higher frequency photons have more energy. For example, a gamma ray photon carries vastly more energy than an extremely low frequency radio wave photon.
The effects of EMR upon chemical compounds and biological organisms depend both upon the radiation's power and its frequency. EMR of lower energy ultraviolet or lower frequencies (i.e., near ultraviolet, visible light, infrared, microwaves, and radio waves) is "non-ionizing" because its photons do not individually have enough energy to ionize atoms or molecules or to break chemical bonds. The effect of non-ionizing radiation on chemical systems and living tissue is primarily simply heating, through the combined energy transfer of many photons. In contrast, high frequency ultraviolet, X-rays and gamma rays are "ionizing" – individual photons of such high frequency have enough energy to ionize molecules or break chemical bonds. Ionizing radiation can cause chemical reactions and damage living cells beyond simply heating, and can be a health hazard and dangerous.
Physics.
Theory.
Maxwell's equations.
James Clerk Maxwell derived a wave form of the electric and magnetic equations, thus uncovering the wave-like nature of electric and magnetic fields and their symmetry. Because the speed of EM waves predicted by the wave equation coincided with the measured speed of light, Maxwell concluded that light itself is an EM wave. Maxwell's equations were confirmed by Heinrich Hertz through experiments with radio waves.
Near and far fields.
Maxwell's equations established that some charges and currents ("sources") produce local electromagnetic fields near them that do not radiate. Currents directly produce magnetic fields, but such fields are of a magnetic-dipole type that dies out with distance from the current. In a similar manner, moving charges pushed apart in a conductor by a changing electrical potential (such as in an antenna) produce an electric-dipole–type electrical field, but this also declines with distance. These fields make up the "near" field. Neither of these behaviours is responsible for EM radiation. Instead, they only efficiently transfer energy to a receiver very close to the source, such as inside a transformer. The near field has strong effects on its source, with any energy withdrawn by a receiver causing increased "load" (decreased electrical reactance) on the source. The near field does not propagate freely into space, carrying energy away without a distance limit, but rather oscillates, returning its energy to the transmitter if it is not absorbed by a receiver.
By contrast, the "far" field is composed of "radiation" that is free of the transmitter, in the sense that the transmitter requires the same power to send changes in the field out regardless of whether anything absorbs the signal, e.g. a radio station does not need to increase its power when more receivers use the signal. This far part of the electromagnetic field "is" electromagnetic radiation. The far fields propagate (radiate) without allowing the transmitter to affect them. This causes them to be independent in the sense that their existence and their energy, after they have left the transmitter, is completely independent of both transmitter and receiver. Due to conservation of energy, the amount of power passing through any spherical surface drawn around the source is the same. Because such a surface has an area proportional to the square of its distance from the source, the power density of EM radiation from an isotropic source decreases with the inverse square of the distance from the source; this is called the inverse-square law. This is in contrast to dipole parts of the EM field, the near field, which varies in intensity according to an inverse cube power law, and thus does "not" transport a conserved amount of energy over distances but instead fades with distance, with its energy (as noted) rapidly returning to the transmitter or absorbed by a nearby receiver (such as a transformer secondary coil).
In the Liénard–Wiechert potential formulation of the electric and magnetic fields due to motion of a single particle (according to Maxwell's equations), the terms associated with acceleration of the particle are those that are responsible for the part of the field that is regarded as electromagnetic radiation. By contrast, the term associated with the changing static electric field of the particle and the magnetic term that results from the particle's uniform velocity are both associated with the near field, and do not comprise electromagnetic radiation.
Properties.
Electric and magnetic fields obey the properties of superposition. Thus, a field due to any particular particle or time-varying electric or magnetic field contributes to the fields present in the same space due to other causes. Further, as they are vector fields, all magnetic and electric field vectors add together according to vector addition. For example, in optics two or more coherent light waves may interact and by constructive or destructive interference yield a resultant irradiance deviating from the sum of the component irradiances of the individual light waves.
The electromagnetic fields of light are not affected by traveling through static electric or magnetic fields in a linear medium such as a vacuum. However, in nonlinear media, such as some crystals, interactions can occur between light and static electric and magnetic fields—these interactions include the Faraday effect and the Kerr effect.
In refraction, a wave crossing from one medium to another of different density alters its speed and direction upon entering the new medium. The ratio of the refractive indices of the media determines the degree of refraction, and is summarized by Snell's law. Light of composite wavelengths (natural sunlight) disperses into a visible spectrum passing through a prism, because of the wavelength-dependent refractive index of the prism material (dispersion); that is, each component wave within the composite light is bent a different amount.
EM radiation exhibits both wave properties and particle properties at the same time (see wave-particle duality). Both wave and particle characteristics have been confirmed in many experiments. Wave characteristics are more apparent when EM radiation is measured over relatively large timescales and over large distances while particle characteristics are more evident when measuring small timescales and distances. For example, when electromagnetic radiation is absorbed by matter, particle-like properties will be more obvious when the average number of photons in the cube of the relevant wavelength is much smaller than 1. It is not so difficult to experimentally observe non-uniform deposition of energy when light is absorbed, however this alone is not evidence of "particulate" behavior. Rather, it reflects the quantum nature of "matter". Demonstrating that the light itself is quantized, not merely its interaction with matter, is a more subtle affair.
Some experiments display both the wave and particle natures of electromagnetic waves, such as the self-interference of a single photon. When a single photon is sent through an interferometer, it passes through both paths, interfering with itself, as waves do, yet is detected by a photomultiplier or other sensitive detector only once.
A quantum theory of the interaction between electromagnetic radiation and matter such as electrons is described by the theory of quantum electrodynamics.
Electromagnetic waves can be polarized, reflected, refracted, or diffracted, and can interfere with each other.
Wave model.
In homogeneous, isotropic media, electromagnetic radiation is a transverse wave, meaning that its oscillations are perpendicular to the direction of energy transfer and travel. It comes from the following equations: formula_0 These equations imply that any electromagnetic wave must be a transverse wave, where the electric field E and the magnetic field B are both perpendicular to the direction of wave propagation.
The electric and magnetic parts of the field in an electromagnetic wave stand in a fixed ratio of strengths to satisfy the two Maxwell equations that specify how one is produced from the other. In dissipation-less (lossless) media, these E and B fields are also in phase, with both reaching maxima and minima at the same points in space (see illustrations). In the far-field EM radiation which is described by the two source-free Maxwell curl operator equations, a time-change in one type of field is proportional to the curl of the other. These derivatives require that the E and B fields in EMR are in-phase (see mathematics section below).
An important aspect of light's nature is its frequency. The frequency of a wave is its rate of oscillation and is measured in hertz, the SI unit of frequency, where one hertz is equal to one oscillation per second. Light usually has multiple frequencies that sum to form the resultant wave. Different frequencies undergo different angles of refraction, a phenomenon known as dispersion.
A monochromatic wave (a wave of a single frequency) consists of successive troughs and crests, and the distance between two adjacent crests or troughs is called the wavelength. Waves of the electromagnetic spectrum vary in size, from very long radio waves longer than a continent to very short gamma rays smaller than atom nuclei. Frequency is inversely proportional to wavelength, according to the equation:
formula_1
where "v" is the speed of the wave ("c" in a vacuum or less in other media), "f" is the frequency and "λ" is the wavelength. As waves cross boundaries between different media, their speeds change but their frequencies remain constant.
Electromagnetic waves in free space must be solutions of Maxwell's electromagnetic wave equation. Two main classes of solutions are known, namely plane waves and spherical waves. The plane waves may be viewed as the limiting case of spherical waves at a very large (ideally infinite) distance from the source. Both types of waves can have a waveform which is an arbitrary time function (so long as it is sufficiently differentiable to conform to the wave equation). As with any time function, this can be decomposed by means of Fourier analysis into its frequency spectrum, or individual sinusoidal components, each of which contains a single frequency, amplitude and phase. Such a component wave is said to be "monochromatic". A monochromatic electromagnetic wave can be characterized by its frequency or wavelength, its peak amplitude, its phase relative to some reference phase, its direction of propagation, and its polarization.
Interference is the superposition of two or more waves resulting in a new wave pattern. If the fields have components in the same direction, they constructively interfere, while opposite directions cause destructive interference. Additionally, multiple polarization signals can be combined (i.e. interfered) to form new states of polarization, which is known as parallel polarization state generation.
The energy in electromagnetic waves is sometimes called radiant energy.
Particle model and quantum theory.
An anomaly arose in the late 19th century involving a contradiction between the wave theory of light and measurements of the electromagnetic spectra that were being emitted by thermal radiators known as black bodies. Physicists struggled with this problem unsuccessfully for many years, and it later became known as the ultraviolet catastrophe. In 1900, Max Planck developed a new theory of black-body radiation that explained the observed spectrum. Planck's theory was based on the idea that black bodies emit light (and other electromagnetic radiation) only as discrete bundles or packets of energy. These packets were called quanta. In 1905, Albert Einstein proposed that light quanta be regarded as real particles. Later the particle of light was given the name photon, to correspond with other particles being described around this time, such as the electron and proton. A photon has an energy, "E", proportional to its frequency, "f", by
formula_2
where "h" is the Planck constant, formula_3 is the wavelength and "c" is the speed of light. This is sometimes known as the Planck–Einstein equation. In quantum theory (see first quantization) the energy of the photons is thus directly proportional to the frequency of the EMR wave.
Likewise, the momentum "p" of a photon is also proportional to its frequency and inversely proportional to its wavelength:
formula_4
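A short numerical sketch of the Planck–Einstein relation and the photon momentum formula is given below (in Python; the chosen wavelength of 550 nm is arbitrary and the values in the comments are rounded):

    h = 6.62607015e-34               # Planck constant, J*s
    c = 299_792_458.0                # speed of light in vacuum, m/s

    def photon_energy(wavelength_m):
        # E = h f = h c / lambda
        return h * c / wavelength_m

    def photon_momentum(wavelength_m):
        # p = h / lambda = E / c
        return h / wavelength_m

    green = 550e-9                   # a green-light wavelength, 550 nm
    print(photon_energy(green))      # about 3.6e-19 J (roughly 2.3 eV)
    print(photon_momentum(green))    # about 1.2e-27 kg*m/s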
The source of Einstein's proposal that light was composed of particles (or could act as particles in some circumstances) was an experimental anomaly not explained by the wave theory: the photoelectric effect, in which light striking a metal surface ejected electrons from the surface, causing an electric current to flow across an applied voltage. Experimental measurements demonstrated that the energy of individual ejected electrons was proportional to the "frequency", rather than the "intensity", of the light. Furthermore, below a certain minimum frequency, which depended on the particular metal, no current would flow regardless of the intensity. These observations appeared to contradict the wave theory, and for years physicists tried in vain to find an explanation. In 1905, Einstein explained this puzzle by resurrecting the particle theory of light to explain the observed effect. Because of the preponderance of evidence in favor of the wave theory, however, Einstein's ideas were met initially with great skepticism among established physicists. Eventually Einstein's explanation was accepted as new particle-like behavior of light was observed, such as the Compton effect.
As a photon is absorbed by an atom, it excites the atom, elevating an electron to a higher energy level (one that is on average farther from the nucleus). When an electron in an excited molecule or atom descends to a lower energy level, it emits a photon of light at a frequency corresponding to the energy difference. Since the energy levels of electrons in atoms are discrete, each element and each molecule emits and absorbs its own characteristic frequencies. Immediate photon emission is called fluorescence, a type of photoluminescence. An example is visible light emitted from fluorescent paints, in response to ultraviolet (blacklight). Many other fluorescent emissions are known in spectral bands other than visible light. Delayed emission is called phosphorescence.
Wave–particle duality.
The modern theory that explains the nature of light includes the notion of wave–particle duality.
Wave and particle effects of electromagnetic radiation.
Together, wave and particle effects fully explain the emission and absorption spectra of EM radiation. The matter-composition of the medium through which the light travels determines the nature of the absorption and emission spectrum. These bands correspond to the allowed energy levels in the atoms. Dark bands in the absorption spectrum are due to the atoms in an intervening medium between source and observer. The atoms absorb certain frequencies of the light between emitter and detector/eye, then emit them in all directions. A dark band appears to the detector, due to the radiation scattered out of the light beam. For instance, dark bands in the light emitted by a distant star are due to the atoms in the star's atmosphere. A similar phenomenon occurs for emission, which is seen when an emitting gas glows due to excitation of the atoms from any mechanism, including heat. As electrons descend to lower energy levels, a spectrum is emitted that represents the jumps between the energy levels of the electrons, but lines are seen because again emission happens only at particular energies after excitation. An example is the emission spectrum of nebulae. Rapidly moving electrons are most sharply accelerated when they encounter a region of force, so they are responsible for producing much of the highest frequency electromagnetic radiation observed in nature.
These phenomena can aid various chemical determinations for the composition of gases lit from behind (absorption spectra) and for glowing gases (emission spectra). Spectroscopy (for example) determines what chemical elements comprise a particular star. Spectroscopy is also used in the determination of the distance of a star, using the red shift.
Propagation speed.
When any wire (or other conducting object such as an antenna) conducts alternating current, electromagnetic radiation is propagated at the same frequency as the current.
As a wave, light is characterized by a velocity (the speed of light), wavelength, and frequency. As particles, light is a stream of photons. Each has an energy related to the frequency of the wave given by Planck's relation "E = hf", where "E" is the energy of the photon, "h" is the Planck constant, 6.626 × 10−34 J·s, and "f" is the frequency of the wave.
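For a sense of scale, the Planck relation can be evaluated for a typical visible-light frequency. The short Python sketch below assumes a frequency of about 540 THz (green light), chosen purely for illustration, and reports the corresponding wavelength and photon energy.
```python
# Photon energy from the Planck relation E = h*f.
# The ~540 THz frequency (green light) is chosen here purely for illustration.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light in vacuum, m/s
eV = 1.602e-19  # joules per electronvolt

f = 5.4e14                      # frequency, Hz
E = h * f                       # photon energy, J
print(f"wavelength = {c / f * 1e9:.0f} nm")
print(f"E = {E:.3e} J = {E / eV:.2f} eV")
# prints roughly: wavelength = 555 nm, E = 3.578e-19 J = 2.23 eV
```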
In a medium (other than vacuum), velocity factor or refractive index are considered, depending on frequency and application. Both of these are ratios of the speed in a medium to speed in a vacuum.
History of discovery.
Electromagnetic radiation of wavelengths other than those of visible light was discovered in the early 19th century. The discovery of infrared radiation is ascribed to astronomer William Herschel, who published his results in 1800 before the Royal Society of London. Herschel used a glass prism to refract light from the Sun and detected invisible rays that caused heating beyond the red part of the spectrum, through an increase in the temperature recorded with a thermometer. These "calorific rays" were later termed infrared.
In 1801, German physicist Johann Wilhelm Ritter discovered ultraviolet in an experiment similar to Herschel's, using sunlight and a glass prism. Ritter noted that invisible rays near the violet edge of a solar spectrum dispersed by a triangular prism darkened silver chloride preparations more quickly than did the nearby violet light. Ritter's experiments were an early precursor to what would become photography. Ritter noted that the ultraviolet rays (which at first were called "chemical rays") were capable of causing chemical reactions.
In 1862–64 James Clerk Maxwell developed equations for the electromagnetic field which suggested that waves in the field would travel with a speed that was very close to the known speed of light. Maxwell therefore suggested that visible light (as well as invisible infrared and ultraviolet rays by inference) all consisted of propagating disturbances (or radiation) in the electromagnetic field. Radio waves were first produced deliberately by Heinrich Hertz in 1887, using electrical circuits calculated to produce oscillations at a much lower frequency than that of visible light, following recipes for producing oscillating charges and currents suggested by Maxwell's equations. Hertz also developed ways to detect these waves, and produced and characterized what were later termed radio waves and microwaves.
Wilhelm Röntgen discovered and named X-rays. After experimenting with high voltages applied to an evacuated tube on 8 November 1895, he noticed a fluorescence on a nearby plate of coated glass. In one month, he discovered X-rays' main properties.
The last portion of the EM spectrum to be discovered was associated with radioactivity. Henri Becquerel found that uranium salts caused fogging of an unexposed photographic plate through a covering paper in a manner similar to X-rays, and Marie Curie discovered that only certain elements gave off these rays of energy, soon discovering the intense radiation of radium. The radiation from pitchblende was differentiated into alpha rays (alpha particles) and beta rays (beta particles) by Ernest Rutherford through simple experimentation in 1899, but these proved to be charged particulate types of radiation. However, in 1900 the French scientist Paul Villard discovered a third neutrally charged and especially penetrating type of radiation from radium, and after he described it, Rutherford realized it must be yet a third type of radiation, which in 1903 Rutherford named gamma rays. In 1910 British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914 Rutherford and Edward Andrade measured their wavelengths, finding that they were similar to X-rays but with shorter wavelengths and higher frequency, although a 'cross-over' between X and gamma rays makes it possible to have X-rays with a higher energy (and hence shorter wavelength) than gamma rays and vice versa. The origin of the ray differentiates them: gamma rays tend to be natural phenomena originating from the unstable nucleus of an atom, while X-rays are electrically generated (and hence man-made) unless they are the result of bremsstrahlung X-radiation caused by the interaction of fast-moving particles (such as beta particles) colliding with certain materials, usually of higher atomic numbers.
Electromagnetic spectrum.
EM radiation (the designation 'radiation' excludes static electric fields, static magnetic fields, and near fields) is classified by wavelength into radio, microwave, infrared, visible, ultraviolet, X-rays and gamma rays. Arbitrary electromagnetic waves can be expressed by Fourier analysis in terms of sinusoidal waves (monochromatic radiation), which in turn can each be classified into these regions of the EMR spectrum.
For certain classes of EM waves, the waveform is most usefully treated as "random", and then spectral analysis must be done by slightly different mathematical techniques appropriate to random or stochastic processes. In such cases, the individual frequency components are represented in terms of their "power" content, and the phase information is not preserved. Such a representation is called the power spectral density of the random process. Random electromagnetic radiation requiring this kind of analysis is, for example, encountered in the interior of stars, and in certain other very wideband forms of radiation such as the Zero point wave field of the electromagnetic vacuum.
The behavior of EM radiation and its interaction with matter depends on its frequency, and changes qualitatively as the frequency changes. Lower frequencies have longer wavelengths, and higher frequencies have shorter wavelengths, and are associated with photons of higher energy. There is no fundamental limit known to these wavelengths or energies, at either end of the spectrum, although photons with energies near the Planck energy or exceeding it (far too high to have ever been observed) will require new physical theories to describe.
Radio and microwave.
When radio waves impinge upon a conductor, they couple to the conductor, travel along it and induce an electric current on the conductor surface by moving the electrons of the conducting material in correlated bunches of charge.
Electromagnetic radiation phenomena with wavelengths ranging from as long as one meter to as short as one millimeter are called microwaves; with frequencies between 300 MHz (0.3 GHz) and 300 GHz.
At radio and microwave frequencies, EMR interacts with matter largely as a bulk collection of charges which are spread out over large numbers of affected atoms. In electrical conductors, such induced bulk movement of charges (electric currents) results in absorption of the EMR, or else separations of charges that cause generation of new EMR (effective reflection of the EMR). An example is absorption or emission of radio waves by antennas, or absorption of microwaves by water or other molecules with an electric dipole moment, as for example inside a microwave oven. These interactions produce either electric currents or heat, or both.
Infrared.
Like radio and microwave, infrared (IR) also is reflected by metals (and also most EMR, well into the ultraviolet range). However, unlike lower-frequency radio and microwave radiation, Infrared EMR commonly interacts with dipoles present in single molecules, which change as atoms vibrate at the ends of a single chemical bond. It is consequently absorbed by a wide range of substances, causing them to increase in temperature as the vibrations dissipate as heat. The same process, run in reverse, causes bulk substances to radiate in the infrared spontaneously (see thermal radiation section below).
Infrared radiation is divided into spectral subregions. While different subdivision schemes exist, the spectrum is commonly divided as near-infrared (0.75–1.4 μm), short-wavelength infrared (1.4–3 μm), mid-wavelength infrared (3–8 μm), long-wavelength infrared (8–15 μm) and far infrared (15–1000 μm).
Visible light.
Natural sources produce EM radiation across the spectrum. EM radiation with a wavelength between approximately 400 nm and 700 nm is directly detected by the human eye and perceived as visible light. Other wavelengths, especially nearby infrared (longer than 700 nm) and ultraviolet (shorter than 400 nm) are also sometimes referred to as light.
As frequency increases into the visible range, photons have enough energy to change the bond structure of some individual molecules. It is not a coincidence that this happens in the visible range, as the mechanism of vision involves the change in bonding of a single molecule, retinal, which absorbs a single photon. The change in retinal causes a change in the shape of the rhodopsin protein it is contained in, which starts the biochemical process that causes the retina of the human eye to sense the light.
Photosynthesis becomes possible in this range as well, for the same reason. A single molecule of chlorophyll is excited by a single photon. In plant tissues that conduct photosynthesis, carotenoids act to quench electronically excited chlorophyll produced by visible light in a process called non-photochemical quenching, to prevent reactions that would otherwise interfere with photosynthesis at high light levels.
Animals that detect infrared make use of small packets of water that change temperature, in an essentially thermal process that involves many photons.
Infrared, microwaves and radio waves are known to damage molecules and biological tissue only by bulk heating, not excitation from single photons of the radiation.
Visible light is able to affect only a tiny percentage of all molecules, and usually not in a permanent or damaging way; rather, the photon excites an electron, which then emits another photon when returning to its original position. This is the source of color produced by most dyes. Retinal is an exception: when a photon is absorbed, the retinal permanently changes structure from cis to trans and requires a protein to convert it back, i.e. to reset it so that it can function as a light detector again.
Limited evidence indicates that some reactive oxygen species are created by visible light in skin, and that these may have some role in photoaging, in the same manner as ultraviolet A.
Ultraviolet.
As frequency increases into the ultraviolet, photons now carry enough energy (about three electron volts or more) to excite certain doubly bonded molecules into permanent chemical rearrangement. In DNA, this causes lasting damage. DNA is also indirectly damaged by reactive oxygen species produced by ultraviolet A (UVA), which has energy too low to damage DNA directly. This is why ultraviolet at all wavelengths can damage DNA, and is capable of causing cancer, and (for UVB) skin burns (sunburn) that are far worse than would be produced by simple heating (temperature increase) effects.
At the higher end of the ultraviolet range, the energy of photons becomes large enough to impart enough energy to electrons to cause them to be liberated from the atom, in a process called photoionisation. The energy required for this is always larger than about 10 electron volt (eV) corresponding with wavelengths smaller than 124 nm (some sources suggest a more realistic cutoff of 33 eV, which is the energy required to ionize water). This high end of the ultraviolet spectrum with energies in the approximate ionization range, is sometimes called "extreme UV." Ionizing UV is strongly filtered by the Earth's atmosphere.
X-rays and gamma rays.
Electromagnetic radiation composed of photons that carry minimum-ionization energy, or more, (which includes the entire spectrum with shorter wavelengths), is therefore termed ionizing radiation. (Many other kinds of ionizing radiation are made of non-EM particles). Electromagnetic-type ionizing radiation extends from the extreme ultraviolet to all higher frequencies and shorter wavelengths, which means that all X-rays and gamma rays qualify. These are capable of the most severe types of molecular damage, which can happen in biology to any type of biomolecule, including mutation and cancer, and often at great depths below the skin, since the higher end of the X-ray spectrum, and all of the gamma ray spectrum, penetrate matter.
Atmosphere and magnetosphere.
Most UV and X-rays are blocked by absorption first from molecular nitrogen, and then (for wavelengths in the upper UV) from the electronic excitation of dioxygen and finally ozone at the mid-range of UV. Only about 30% of the Sun's ultraviolet light reaches the ground, and almost all of it is in the longer-wavelength, less energetic part of the UV band.
Visible light is well transmitted in air, a property known as an atmospheric window, as it is not energetic enough to excite nitrogen, oxygen, or ozone, but too energetic to excite molecular vibrational frequencies of water vapor and CO2.
Absorption bands in the infrared are due to modes of vibrational excitation in water vapor. However, at energies too low to excite water vapor, the atmosphere becomes transparent again, allowing free transmission of most microwave and radio waves.
Finally, at radio wavelengths longer than 10 m or so (about 30 MHz), the air in the lower atmosphere remains transparent to radio, but plasma in certain layers of the ionosphere begins to interact with radio waves (see skywave). This property allows some longer wavelengths (100 m or 3 MHz) to be reflected and results in shortwave radio beyond line-of-sight. However, certain ionospheric effects begin to block incoming radiowaves from space, when their frequency is less than about 10 MHz (wavelength longer than about 30 m).
Thermal and electromagnetic radiation as a form of heat.
The basic structure of matter involves charged particles bound together. When electromagnetic radiation impinges on matter, it causes the charged particles to oscillate and gain energy. The ultimate fate of this energy depends on the context. It could be immediately re-radiated and appear as scattered, reflected, or transmitted radiation. It may get dissipated into other microscopic motions within the matter, coming to thermal equilibrium and manifesting itself as thermal energy, or even kinetic energy, in the material. With a few exceptions related to high-energy photons (such as fluorescence, harmonic generation, photochemical reactions, the photovoltaic effect for ionizing radiations at far ultraviolet, X-ray and gamma radiation), absorbed electromagnetic radiation simply deposits its energy by heating the material. This happens for infrared, microwave and radio wave radiation. Intense radio waves can thermally burn living tissue and can cook food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can easily set paper afire.
Ionizing radiation creates high-speed electrons in a material and breaks chemical bonds, but after these electrons collide many times with other atoms eventually most of the energy becomes thermal energy all in a tiny fraction of a second. This process makes ionizing radiation far more dangerous per unit of energy than non-ionizing radiation. This caveat also applies to UV, even though almost all of it is not ionizing, because UV can damage molecules due to electronic excitation, which is far greater per unit energy than heating effects.
Infrared radiation in the spectral distribution of a black body is usually considered a form of heat, since it has an equivalent temperature and is associated with an entropy change per unit of thermal energy. However, "heat" is a technical term in physics and thermodynamics and is often confused with thermal energy. Any type of electromagnetic energy can be transformed into thermal energy in interaction with matter. Thus, "any" electromagnetic radiation can "heat" a material (in the sense of increasing its thermal energy, and hence its temperature) when it is absorbed.
The inverse or time-reversed process of absorption is thermal radiation. Much of the thermal energy in matter consists of random motion of charged particles, and this energy can be radiated away from the matter. The resulting radiation may subsequently be absorbed by another piece of matter, with the deposited energy heating the material.
The electromagnetic radiation in an opaque cavity at thermal equilibrium is effectively a form of thermal energy, having maximum radiation entropy.
Biological effects.
Bioelectromagnetics is the study of the interactions and effects of EM radiation on living organisms. The effects of electromagnetic radiation upon living cells, including those in humans, depend upon the radiation's power and frequency. For low-frequency radiation (radio waves to near ultraviolet) the best-understood effects are those due to radiation power alone, acting through heating when radiation is absorbed. For these thermal effects, frequency is important as it affects the intensity of the radiation and penetration into the organism (for example, microwaves penetrate better than infrared). It is widely accepted that low-frequency fields that are too weak to cause significant heating could not possibly have any biological effect.
Some research suggests that weaker "non-thermal" electromagnetic fields (including weak ELF magnetic fields, although the latter does not strictly qualify as EM radiation) and modulated RF and microwave fields can have biological effects, though the significance of this is unclear.
The World Health Organization has classified radio frequency electromagnetic radiation as Group 2B – possibly carcinogenic. This group contains possible carcinogens such as lead, DDT, and styrene.
At higher frequencies (some of visible and beyond), the effects of individual photons begin to become important, as these now have enough energy individually to directly or indirectly damage biological molecules. All UV frequencies have been classed as Group 1 carcinogens by the World Health Organization. Ultraviolet radiation from sun exposure is the primary cause of skin cancer.
Thus, at UV frequencies and higher, electromagnetic radiation does more damage to biological systems than simple heating predicts. This is most obvious in the "far" (or "extreme") ultraviolet. UV, with X-ray and gamma radiation, are referred to as ionizing radiation due to the ability of photons of this radiation to produce ions and free radicals in materials (including living tissue). Since such radiation can severely damage life at energy levels that produce little heating, it is considered far more dangerous (in terms of damage-produced per unit of energy, or power) than the rest of the electromagnetic spectrum.
Use as a weapon.
The heat ray is an application of EMR that makes use of microwave frequencies to create an unpleasant heating effect in the upper layer of the skin. A publicly known heat ray weapon called the Active Denial System was developed by the US military as an experimental weapon to deny the enemy access to an area. A death ray is a theoretical weapon that delivers a heat ray based on electromagnetic energy at levels capable of injuring human tissue. An inventor of a death ray, Harry Grindell Matthews, claimed to have lost sight in his left eye while working in the 1920s on his death ray weapon based on a microwave magnetron (a normal microwave oven creates a tissue-damaging cooking effect inside the oven at around 2 kV/m).
Derivation from electromagnetic theory.
Electromagnetic waves are predicted by the classical laws of electricity and magnetism, known as Maxwell's equations. There are nontrivial solutions of the homogeneous Maxwell's equations (without charges or currents), describing "waves" of changing electric and magnetic fields. Beginning with Maxwell's equations in free space:
(1)   ∇ ⋅ E = 0
(2)   ∇ × E = -∂B/∂t
(3)   ∇ ⋅ B = 0
(4)   ∇ × B = μ0ε0 ∂E/∂t
where formula_5 is the electric field, formula_6 is the magnetic field, formula_7 and formula_8 denote the divergence and the curl of a vector field formula_9, formula_10 and formula_11 are the partial derivatives of the fields with respect to time, formula_12 is the vacuum permeability, and formula_13 is the vacuum permittivity.
Besides the trivial solution
formula_14
useful solutions can be derived with the following vector identity, valid for all vectors formula_15 in some vector field:
formula_16
Taking the curl of the second Maxwell equation (2) yields:
(5)   ∇ × (∇ × E) = ∇ × (-∂B/∂t)
Evaluating the left hand side of (5) with the above identity and simplifying using (1), yields:
(6)   ∇ × (∇ × E) = ∇(∇ ⋅ E) - ∇²E = -∇²E
Evaluating the right hand side of (5) by exchanging the sequence of derivatives and inserting the fourth Maxwell equation (4), yields:
(7)   ∇ × (-∂B/∂t) = -∂/∂t (∇ × B) = -μ0ε0 ∂²E/∂t²
Combining (6) and (7) again, gives a vector-valued differential equation for the electric field, solving the homogeneous Maxwell equations:
formula_17
Taking the curl of the fourth Maxwell equation (4) results in a similar differential equation for a magnetic field solving the homogeneous Maxwell equations:
formula_18
Both differential equations have the form of the general wave equation for waves propagating with speed formula_19 where formula_20 is a function of time and location, which gives the amplitude of the wave at some time at a certain location:
formula_21
This is also written as: formula_22
where formula_23 denotes the so-called d'Alembert operator, which in Cartesian coordinates is given as:
formula_24
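As a quick check of this form, any twice-differentiable profile of the shape f(x - c0t) satisfies the one-dimensional wave equation. The short sketch below verifies this symbolically; the use of the SymPy library and the restriction to one spatial dimension are assumptions of the illustration, not part of the derivation.
```python
import sympy as sp

x, t, c0 = sp.symbols('x t c0', positive=True)
f = sp.Function('f')          # arbitrary twice-differentiable profile
u = f(x - c0 * t)             # disturbance moving in the +x direction at speed c0

# One-dimensional d'Alembertian applied to u: u_xx - (1/c0^2) u_tt
box_u = sp.diff(u, x, 2) - sp.diff(u, t, 2) / c0**2
print(sp.simplify(box_u))     # 0, so u solves the wave equation
```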
Comparing the terms for the speed of propagation, yields in the case of the electric and magnetic fields:
formula_25
This is the speed of light in vacuum. Thus Maxwell's equations connect the vacuum permittivity formula_13, the vacuum permeability formula_12, and the speed of light, "c"0, via the above equation. This relationship had been discovered by Wilhelm Eduard Weber and Rudolf Kohlrausch prior to the development of Maxwell's electrodynamics; however, Maxwell was the first to produce a field theory consistent with waves traveling at the speed of light.
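A minimal numerical check of this relation, using standard SI values for the two vacuum constants (assumed here rather than quoted in the text), recovers the measured speed of light:
```python
import math

mu_0 = 4 * math.pi * 1e-7        # vacuum permeability, H/m
epsilon_0 = 8.8541878128e-12     # vacuum permittivity, F/m

c_0 = 1 / math.sqrt(mu_0 * epsilon_0)
print(f"c_0 = {c_0:.6e} m/s")    # about 2.998e8 m/s
```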
These are only two equations versus the original four, so more information about these waves remains hidden within Maxwell's equations. A generic vector wave for the electric field has the form
formula_26
Here, formula_27 is a constant vector, formula_20 is any twice-differentiable function, formula_28 is a unit vector in the direction of propagation, and formula_29 is a position vector. formula_30 is a generic solution to the wave equation. In other words,
formula_31
for a generic wave traveling in the formula_32 direction.
From the first of Maxwell's equations, we get
formula_33
Thus,
formula_34
which implies that the electric field is orthogonal to the direction the wave propagates. The second of Maxwell's equations yields the magnetic field, namely,
formula_35
Thus,
formula_36
The remaining equations will be satisfied by this choice of formula_37.
The electric and magnetic field waves in the far-field travel at the speed of light. They have a special restricted orientation and proportional magnitudes, formula_38, which can be seen immediately from the Poynting vector. The electric field, magnetic field, and direction of wave propagation are all orthogonal, and the wave propagates in the same direction as formula_39. Also, E and B far-fields in free space, which as wave solutions depend primarily on these two Maxwell equations, are in-phase with each other. This is guaranteed since the generic wave solution is first order in both space and time, and the curl operator on one side of these equations results in first-order spatial derivatives of the wave solution, while the time-derivative on the other side of the equations, which gives the other field, is first-order in time, resulting in the same phase shift for both fields in each mathematical operation.
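These statements can be spot-checked on a concrete plane wave. The sketch below uses the SymPy library (an assumption of this illustration), with the particular wave E = E0 ŷ cos(kx - ωt) and B = (E0/c) ẑ cos(kx - ωt) chosen as an example; it confirms that both divergence conditions and both curl equations are satisfied, with E and B orthogonal, in phase, and related by a factor of c.
```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
E0, k, c = sp.symbols('E0 k c', positive=True)
w = c * k                                    # dispersion relation: omega = c*k

phase = sp.cos(k * x - w * t)
E = sp.Matrix([0, E0 * phase, 0])            # E along y, propagating along x
B = sp.Matrix([0, 0, (E0 / c) * phase])      # B along z, in phase with E, |B| = |E|/c

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

print(div(E), div(B))                                       # 0 0
print((curl(E) + B.diff(t)).applyfunc(sp.simplify))         # zero vector: Faraday's law
print((curl(B) - E.diff(t) / c**2).applyfunc(sp.simplify))  # zero vector: Ampere's law (mu0*eps0 = 1/c^2)
```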
From the viewpoint of an electromagnetic wave traveling forward, the electric field might be oscillating up and down, while the magnetic field oscillates right and left. This picture can be rotated with the electric field oscillating right and left and the magnetic field oscillating down and up. This is a different solution that is traveling in the same direction. This arbitrariness in the orientation with respect to propagation direction is known as polarization. On a quantum level, it is described as photon polarization. The direction of the polarization is defined as the direction of the electric field.
More general forms of the second-order wave equations given above are available, allowing for both non-vacuum propagation media and sources. Many competing derivations exist, all with varying levels of approximation and intended applications. One very general example is a form of the electric field equation, which was factorized into a pair of explicitly directional wave equations, and then efficiently reduced into a single uni-directional wave equation by means of a simple slow-evolution approximation.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n\\nabla \\cdot \\mathbf{E} &= 0\\\\\n\\nabla \\cdot \\mathbf{B} &= 0\n\\end{align}"
},
{
"math_id": 1,
"text": "\\displaystyle v=f\\lambda"
},
{
"math_id": 2,
"text": "E = hf = \\frac{hc}{\\lambda} \\,\\! "
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "p = { E \\over c } = { hf \\over c } = { h \\over \\lambda }. "
},
{
"math_id": 5,
"text": "\\mathbf{E}"
},
{
"math_id": 6,
"text": "\\mathbf{B}"
},
{
"math_id": 7,
"text": "\\nabla \\cdot \\mathbf X "
},
{
"math_id": 8,
"text": "\\nabla \\times \\mathbf X "
},
{
"math_id": 9,
"text": "\\mathbf X;"
},
{
"math_id": 10,
"text": "\\frac{\\partial \\mathbf{B}}{\\partial t}"
},
{
"math_id": 11,
"text": "\\frac{\\partial \\mathbf{E}}{\\partial t}"
},
{
"math_id": 12,
"text": "\\mu_0"
},
{
"math_id": 13,
"text": "\\varepsilon_0"
},
{
"math_id": 14,
"text": "\\mathbf{E} = \\mathbf{B} = \\mathbf{0},"
},
{
"math_id": 15,
"text": "\\mathbf{A}"
},
{
"math_id": 16,
"text": "\\nabla \\times \\left( \\nabla \\times \\mathbf{A} \\right) = \\nabla \\left( \\nabla \\cdot \\mathbf{A} \\right) - \\nabla^2 \\mathbf{A}."
},
{
"math_id": 17,
"text": "\\nabla^2 \\mathbf{E} = \\mu_0 \\varepsilon_0 \\frac{\\partial^2 \\mathbf{E}}{\\partial t^2}"
},
{
"math_id": 18,
"text": "\\nabla^2 \\mathbf{B} = \\mu_0 \\varepsilon_0 \\frac{\\partial^2 \\mathbf{B}}{\\partial t^2}."
},
{
"math_id": 19,
"text": "c_0,"
},
{
"math_id": 20,
"text": "f"
},
{
"math_id": 21,
"text": "\\nabla^2 f = \\frac{1}{{c_0}^2} \\frac{\\partial^2 f}{\\partial t^2}"
},
{
"math_id": 22,
"text": "\\Box f = 0"
},
{
"math_id": 23,
"text": "\\Box"
},
{
"math_id": 24,
"text": "\\Box = \\nabla^2 - \\frac{1}{{c_0}^2} \\frac{\\partial^2}{\\partial t^2} = \\frac{\\partial^2}{\\partial x^2} + \\frac{\\partial^2}{\\partial y^2} + \\frac{\\partial^2}{\\partial z^2} - \\frac{1}{{c_0}^2} \\frac{\\partial^2}{\\partial t^2} \\ "
},
{
"math_id": 25,
"text": "c_0 = \\frac{1}{\\sqrt{\\mu_0 \\varepsilon_0}}."
},
{
"math_id": 26,
"text": "\\mathbf{E} = \\mathbf{E}_0 f{\\left( \\hat{\\mathbf{k}} \\cdot \\mathbf{x} - c_0 t \\right)}"
},
{
"math_id": 27,
"text": "\\mathbf{E}_0"
},
{
"math_id": 28,
"text": " \\hat{\\mathbf{k}}"
},
{
"math_id": 29,
"text": " {\\mathbf{x}} "
},
{
"math_id": 30,
"text": "f{\\left( \\hat{\\mathbf{k}} \\cdot \\mathbf{x} - c_0 t \\right)}"
},
{
"math_id": 31,
"text": "\\nabla^2 f{\\left( \\hat{\\mathbf{k}} \\cdot \\mathbf{x} - c_0 t \\right)} = \\frac{1}{{c_0}^2} \\frac{\\partial^2}{\\partial t^2} f{\\left( \\hat{\\mathbf{k}} \\cdot \\mathbf{x} - c_0 t \\right)},"
},
{
"math_id": 32,
"text": "\\hat{\\mathbf{k}}"
},
{
"math_id": 33,
"text": "\\nabla \\cdot \\mathbf{E} = \\hat{\\mathbf{k}} \\cdot \\mathbf{E}_0 f'{\\left( \\hat{\\mathbf{k}} \\cdot \\mathbf{x} - c_0 t \\right)} = 0"
},
{
"math_id": 34,
"text": "\\mathbf{E} \\cdot \\hat{\\mathbf{k}} = 0"
},
{
"math_id": 35,
"text": "\\nabla \\times \\mathbf{E} = \\hat{\\mathbf{k}} \\times \\mathbf{E}_0 f'{\\left( \\hat{\\mathbf{k}} \\cdot \\mathbf{x} - c_0 t \\right)} = -\\frac{\\partial \\mathbf{B}}{\\partial t}"
},
{
"math_id": 36,
"text": "\\mathbf{B} = \\frac{1}{c_0} \\hat{\\mathbf{k}} \\times \\mathbf{E}"
},
{
"math_id": 37,
"text": "\\mathbf{E},\\mathbf{B}"
},
{
"math_id": 38,
"text": "E_0 = c_0 B_0"
},
{
"math_id": 39,
"text": "\\mathbf{E} \\times \\mathbf{B}"
}
] | https://en.wikipedia.org/wiki?curid=9426 |
9427669 | Dilution assay | The term dilution assay is generally used to designate a special type of bioassay in which one or more preparations (e.g. a drug) are administered to experimental units at different dose levels inducing a measurable biological response. The dose levels are prepared by dilution in a diluent that is inert in respect of the response. The experimental units can for example be cell-cultures, tissues, organs or living animals. The biological response may be quantal (e.g. positive/negative) or quantitative (e.g. growth). The goal is to relate the response to the dose, usually by interpolation techniques, and in many cases to express the potency/activity of the test preparation(s) relative to a standard of known potency/activity.
Dilution assays can be direct or indirect. In a direct dilution assay the amount of dose needed to produce a specific (fixed) response is measured, so that the dose is a stochastic variable defining the tolerance distribution. Conversely, in an indirect dilution assay the doses are administered at fixed levels, so that the response is a stochastic variable.
In some assays, there may be strong reasons for believing that all the constituents of the test preparation except one are without any effect on the studied response of the subjects. An assay of the preparation against a standard preparation of the effective constituent is then equivalent to an analysis for determining the content of the constituent. This may be described as an analytical dilution assay.
Statistical models.
For a mathematical definition of a dilution assay an observation space formula_0 is defined and a function formula_1 so that the responses formula_2 are mapped to the set of real numbers. It is now assumed that a function formula_3 exists which relates the dose formula_4 to the response
formula_5
in which formula_6 is an error term with expectation 0. formula_3 is usually assumed to be continuous and monotone. In situations where a standard preparation is included it is furthermore assumed that the test preparation formula_7 behaves like a dilution (or concentration) of the standard formula_8
formula_9, for all formula_10
where formula_11 is the relative potency of formula_7. This is the fundamental assumption of similarity of dose-response curves which is necessary for a meaningful and unambiguous definition of the relative potency. In many cases it is convenient to apply a power transformation formula_12 with formula_13 or a logarithmic transformation formula_14. The latter can be shown to be a limit case of formula_15 so if formula_16 is written for the log transformation the above equation can be redefined as
formula_17, for all formula_18.
Estimates formula_19 of formula_3 are usually restricted to be members of a well-defined parametric family of functions, for example the family of linear functions characterized by an intercept and a slope. Statistical techniques such as optimization by Maximum Likelihood can be used to calculate estimates of the parameters. Of notable importance in this respect is the theory of Generalized Linear Models, with which a wide range of dilution assays can be modelled. Estimates of formula_3 may describe formula_3 satisfactorily over the range of doses tested, but they do not necessarily have to describe formula_3 beyond that range. However, this does not mean that dissimilar curves can be restricted to an interval where they happen to be similar.
In practice, formula_3 itself is rarely of interest. More of interest is an estimate of formula_20 or an estimate of the dose that induces a specific response. These estimates involve taking ratios of statistically dependent parameter estimates. Fieller's theorem can be used to compute confidence intervals of these ratios.
Some special cases deserve particular mention because of their widespread use: If formula_3 is linear and formula_13 this is known as a slope-ratio model. If formula_3 is linear and formula_16 this is known as a parallel line model. Another commonly applied model is the probit model where formula_3 is the cumulative normal distribution function, formula_16 and formula_6 follows a binomial distribution.
Example: Microbiological assay of antibiotics.
An antibiotic standard (shown in red) and test preparation (shown in blue) are applied at three dose levels to sensitive microorganisms on a layer of agar in petri dishes. The stronger the dose the larger the zone of inhibition of growth of the microorganisms. The biological response formula_21 is in this case the zone of inhibition and the diameter of this zone formula_22 can be used as the measurable response. The doses formula_10 are transformed to logarithms formula_14 and the method of least squares is used to fit two parallel lines to the data. The horizontal distance formula_23 between the two lines (shown in green) serves as an estimate of the potency formula_20 of the test preparation relative to the standard.
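An analysis of this kind is straightforward to script. The sketch below uses NumPy, and the dose levels and zone diameters are made-up illustrative numbers rather than measurements from the figure; it fits the parallel line model (a common slope on log dose with separate intercepts for standard and test) and estimates the log relative potency as the difference in intercepts divided by the common slope.
```python
import numpy as np

# Hypothetical zone diameters (mm) at three doses for the standard (S) and test (T)
# preparations; these numbers are made up for illustration.
doses  = np.array([1.0, 2.0, 4.0])
resp_S = np.array([15.1, 17.9, 21.2])
resp_T = np.array([14.0, 16.8, 20.1])

x = np.log(np.concatenate([doses, doses]))           # log dose
y = np.concatenate([resp_S, resp_T])
is_test = np.concatenate([np.zeros(3), np.ones(3)])  # 0 = standard, 1 = test

# Parallel line model: y = a_S + (a_T - a_S) * is_test + b * log(dose)
X = np.column_stack([np.ones_like(x), is_test, x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a_S, delta, b = coef

log_rho = delta / b            # horizontal distance between the two parallel lines
print(f"common slope: {b:.3f}")
print(f"log relative potency: {log_rho:.3f}  (relative potency {np.exp(log_rho):.3f})")
```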
Software.
The major statistical software packages do not cover dilution assays, although a statistician should not have difficulty writing suitable scripts or macros to that end. Several special-purpose software packages for dilution assays exist.
External links.
Software for dilution assays: | [
{
"math_id": 0,
"text": "U"
},
{
"math_id": 1,
"text": "f:U\\rightarrow\\mathbb{R}"
},
{
"math_id": 2,
"text": "u\\in U"
},
{
"math_id": 3,
"text": "F"
},
{
"math_id": 4,
"text": "z\\in[0,\\infty)"
},
{
"math_id": 5,
"text": "f(u)=F(z)+e"
},
{
"math_id": 6,
"text": "e"
},
{
"math_id": 7,
"text": "T"
},
{
"math_id": 8,
"text": "S"
},
{
"math_id": 9,
"text": "F_{T}(z)=F_{S}(\\rho z)"
},
{
"math_id": 10,
"text": "z"
},
{
"math_id": 11,
"text": "\\rho>0"
},
{
"math_id": 12,
"text": "x=z^{\\lambda}"
},
{
"math_id": 13,
"text": "\\lambda>0"
},
{
"math_id": 14,
"text": "x=\\log (z)"
},
{
"math_id": 15,
"text": "\\lambda\\downarrow0"
},
{
"math_id": 16,
"text": "\\lambda=0"
},
{
"math_id": 17,
"text": "F_{T}(x)=F_{S}(\\rho^{\\lambda}x)"
},
{
"math_id": 18,
"text": "x"
},
{
"math_id": 19,
"text": "\\hat F"
},
{
"math_id": 20,
"text": "\\rho"
},
{
"math_id": 21,
"text": "u"
},
{
"math_id": 22,
"text": "f(u)"
},
{
"math_id": 23,
"text": "\\log (\\hat\\rho)"
}
] | https://en.wikipedia.org/wiki?curid=9427669 |
9428917 | Snub (geometry) | Geometric operation applied to a polyhedron
In geometry, a snub is an operation applied to a polyhedron. The term originates from Kepler's names of two Archimedean solids, the snub cube and the snub dodecahedron.
In general, snubs have chiral symmetry with two forms: with clockwise or counterclockwise orientation. By Kepler's names, a snub can be seen as an expansion of a regular polyhedron: moving the faces apart, twisting them about their centers, adding new polygons centered on the original vertices, and adding pairs of triangles fitting between the original edges.
The terminology was generalized by Coxeter, with a slightly different definition, for a wider set of uniform polytopes.
Conway snubs.
John Conway explored generalized polyhedron operators, defining what is now called Conway polyhedron notation, which can be applied to polyhedra and tilings. Conway calls Coxeter's operation a "semi-snub".
In this notation, snub is defined by the dual and gyro operators, as "s" = "dg", and it is equivalent to an alternation of a truncation of an ambo operator. Conway's notation itself avoids Coxeter's alternation (half) operation, since that operation applies only to polyhedra whose faces are all even-sided.
In 4 dimensions, Conway suggests that the snub 24-cell should be called a "semi-snub 24-cell" because, unlike the 3-dimensional snub polyhedra, which are alternated omnitruncated forms, it is not an alternated omnitruncated 24-cell. It is instead actually an alternated truncated 24-cell.
Coxeter's snubs, regular and quasiregular.
Coxeter's snub terminology is slightly different, meaning an alternated truncation, deriving the snub cube as a "snub cuboctahedron", and the snub dodecahedron as a "snub icosidodecahedron". This definition is used in the naming of two Johnson solids: the snub disphenoid and the snub square antiprism, and of higher dimensional polytopes, such as the 4-dimensional snub 24-cell, with extended Schläfli symbol s{3,4,3}, and Coxeter diagram .
A regular polyhedron (or tiling), with Schläfli symbol formula_1, and Coxeter diagram , has truncation defined as formula_2, and , and has snub defined as an alternated truncation formula_3, and . This alternated construction requires "q" to be even.
A quasiregular polyhedron, with Schläfli symbol formula_4 or "r"{"p","q"}, and Coxeter diagram or , has quasiregular truncation defined as formula_5 or "tr"{"p","q"}, and or , and has quasiregular snub defined as an alternated truncated rectification formula_6 or "htr"{"p","q"} = "sr"{"p","q"}, and or .
For example, Kepler's snub cube is derived from the quasiregular cuboctahedron, with a vertical Schläfli symbol formula_0, and Coxeter diagram , and so is more explicitly called a snub cuboctahedron, expressed by a vertical Schläfli symbol formula_7, and Coxeter diagram . The snub cuboctahedron is the alternation of the "truncated cuboctahedron", formula_8, and .
Regular polyhedra with even-order vertices can also be snubbed as alternated truncations, like the "snub octahedron", as formula_9, , is the alternation of the truncated octahedron, formula_10, and . The "snub octahedron" represents the pseudoicosahedron, a regular icosahedron with pyritohedral symmetry.
The "snub tetratetrahedron", as formula_11, and , is the alternation of the truncated tetrahedral symmetry form, formula_12, and .
Coxeter's snub operation also allows n-antiprisms to be defined as formula_13 or formula_14, based on n-prisms formula_15 or formula_16, while formula_17 is a regular n-hosohedron, a degenerate polyhedron, but a valid tiling on the sphere with digon or lune-shaped faces.
The same process applies for snub tilings:
Nonuniform snub polyhedra.
Nonuniform polyhedra with all even-valence vertices can be snubbed, including some infinite sets.
Coxeter's uniform snub star-polyhedra.
Snub star-polyhedra are constructed by their Schwarz triangle (p q r), with rational ordered mirror-angles, and all mirrors active and alternated.
Coxeter's higher-dimensional snubbed polytopes and honeycombs.
In general, a regular polychoron with Schläfli symbol formula_18, and Coxeter diagram , has a snub with extended Schläfli symbol formula_19, and .
A rectified polychoron formula_20 = r{p,q,r}, and has snub symbol formula_21 = sr{p,q,r}, and .
Examples.
There is only one uniform convex snub in 4-dimensions, the snub 24-cell. The regular 24-cell has Schläfli symbol, formula_22, and Coxeter diagram , and the snub 24-cell is represented by formula_23, Coxeter diagram . It also has an index 6 lower symmetry constructions as formula_24 or s{31,1,1} and , and an index 3 subsymmetry as formula_25 or sr{3,3,4}, and or .
The related snub 24-cell honeycomb can be seen as a formula_26 or s{3,4,3,3}, and , and lower symmetry formula_27 or sr{3,3,4,3} and or , and lowest symmetry form as formula_28 or s{31,1,1,1} and .
A Euclidean honeycomb is an alternated hexagonal slab honeycomb, s{2,6,3}, and or sr{2,3,6}, and or sr{2,3[3]}, and .
Another Euclidean (scaliform) honeycomb is an alternated square slab honeycomb, s{2,4,4}, and or sr{2,41,1} and :
The only uniform snub hyperbolic uniform honeycomb is the "snub hexagonal tiling honeycomb", as s{3,6,3} and , which can also be constructed as an alternated hexagonal tiling honeycomb, h{6,3,3}, . It is also constructed as s{3[3,3]} and .
Another hyperbolic (scaliform) honeycomb is a snub order-4 octahedral honeycomb, s{3,4,4}, and .
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{Bmatrix} 4 \\\\ 3 \\end{Bmatrix}"
},
{
"math_id": 1,
"text": "\\begin{Bmatrix} p , q \\end{Bmatrix}"
},
{
"math_id": 2,
"text": "t \\begin{Bmatrix} p , q \\end{Bmatrix}"
},
{
"math_id": 3,
"text": "ht \\begin{Bmatrix} p , q \\end{Bmatrix} = s \\begin{Bmatrix} p , q \\end{Bmatrix}"
},
{
"math_id": 4,
"text": "\\begin{Bmatrix} p \\\\ q \\end{Bmatrix}"
},
{
"math_id": 5,
"text": "t\\begin{Bmatrix} p \\\\ q \\end{Bmatrix}"
},
{
"math_id": 6,
"text": "ht\\begin{Bmatrix} p \\\\ q \\end{Bmatrix} = s\\begin{Bmatrix} p \\\\ q \\end{Bmatrix}"
},
{
"math_id": 7,
"text": "s\\begin{Bmatrix} 4 \\\\ 3 \\end{Bmatrix}"
},
{
"math_id": 8,
"text": "t\\begin{Bmatrix} 4 \\\\ 3 \\end{Bmatrix}"
},
{
"math_id": 9,
"text": "s\\begin{Bmatrix} 3 , 4 \\end{Bmatrix}"
},
{
"math_id": 10,
"text": "t\\begin{Bmatrix} 3 , 4 \\end{Bmatrix}"
},
{
"math_id": 11,
"text": "s\\begin{Bmatrix} 3 \\\\ 3 \\end{Bmatrix}"
},
{
"math_id": 12,
"text": "t\\begin{Bmatrix} 3 \\\\ 3 \\end{Bmatrix}"
},
{
"math_id": 13,
"text": "s\\begin{Bmatrix} 2 \\\\ n \\end{Bmatrix}"
},
{
"math_id": 14,
"text": "s\\begin{Bmatrix} 2 , 2n \\end{Bmatrix}"
},
{
"math_id": 15,
"text": "t\\begin{Bmatrix} 2 \\\\ n \\end{Bmatrix}"
},
{
"math_id": 16,
"text": "t\\begin{Bmatrix} 2 , 2n \\end{Bmatrix}"
},
{
"math_id": 17,
"text": "\\begin{Bmatrix} 2 , n \\end{Bmatrix}"
},
{
"math_id": 18,
"text": "\\begin{Bmatrix} p , q, r \\end{Bmatrix}"
},
{
"math_id": 19,
"text": "s \\begin{Bmatrix} p , q, r \\end{Bmatrix}"
},
{
"math_id": 20,
"text": "\\begin{Bmatrix} p \\\\ q, r \\end{Bmatrix}"
},
{
"math_id": 21,
"text": "s\\begin{Bmatrix} p \\\\ q , r \\end{Bmatrix}"
},
{
"math_id": 22,
"text": "\\begin{Bmatrix} 3 , 4, 3 \\end{Bmatrix}"
},
{
"math_id": 23,
"text": "s\\begin{Bmatrix} 3 , 4, 3 \\end{Bmatrix}"
},
{
"math_id": 24,
"text": "s\\left\\{\\begin{array}{l}3\\\\3\\\\3\\end{array}\\right\\}"
},
{
"math_id": 25,
"text": "s\\begin{Bmatrix} 3 \\\\ 3 , 4 \\end{Bmatrix}"
},
{
"math_id": 26,
"text": "s\\begin{Bmatrix} 3 , 4, 3, 3 \\end{Bmatrix}"
},
{
"math_id": 27,
"text": "s\\begin{Bmatrix} 3 \\\\ 3 , 4, 3 \\end{Bmatrix}"
},
{
"math_id": 28,
"text": "s\\left\\{\\begin{array}{l}3\\\\3\\\\3\\\\3\\end{array}\\right\\}"
}
] | https://en.wikipedia.org/wiki?curid=9428917 |
9432976 | Mars cycler | Kind of spacecraft trajectory
A Mars cycler (or Earth–Mars cycler) is a kind of spacecraft trajectory that encounters Earth and Mars regularly. The term may also refer to a spacecraft on a Mars cycler trajectory. The Aldrin cycler is an example of a Mars cycler.
Cyclers are potentially useful for transporting people or materials between those bodies using minimal propellant (relying on gravity-assist flybys for most trajectory changes) and can carry heavy radiation shielding to protect people in transit from cosmic rays and solar storms.
Earth–Mars cyclers.
A cycler is a trajectory that encounters two or more bodies regularly. Once the orbit is established, no propulsion is required to shuttle between the two, although some minor corrections may be necessary due to small perturbations in the orbit. The use of cyclers was considered in 1969 by Walter M. Hollister, who examined the case of an Earth–Venus cycler. Hollister did not have any particular mission in mind, but posited their use for both regular communication between two planets, and for multi-planet flyby missions.
A Martian year is 1.8808 Earth years, so Mars makes eight orbits of the Sun in about the same time as Earth makes 15. Cycler trajectories between Earth and Mars occur in whole-number multiples of the synodic period between the two planets, which is about 2.135 Earth years. In 1985, Buzz Aldrin presented an extension of his earlier Lunar cycler work which identified a Mars cycler corresponding to a single synodic period. The Aldrin cycler (as it is now known) makes a single eccentric loop around the Sun. It travels from Earth to Mars in 146 days (4.8 months), spends the next 16 months beyond the orbit of Mars, and takes another 146 days going from the orbit of Mars back to the first crossing of Earth's orbit.
The existence of the now-eponymous Aldrin cycler was calculated and confirmed by scientists at Jet Propulsion Laboratory later that year, along with the VISIT-1 and VISIT-2 cyclers proposed by John Niehoff in 1985. For each Earth–Mars cycler that is not a multiple of seven synodic periods, an outbound cycler intersects Mars on the way out from Earth while an inbound cycler intersects Mars on the way in to Earth. The only difference in these trajectories is the date in the synodic period in which the vehicle is launched from Earth. Earth–Mars cyclers with a multiple of seven synodic periods return to Earth at nearly the same point in its orbit and may encounter Earth and/or Mars multiple times during each cycle. VISIT-1 encounters Earth three times and Mars four times in 15 years. VISIT-2 encounters Earth five times and Mars two times in 15 years. Some possible Earth–Mars cyclers include the following:
A detailed survey of Earth–Mars cycler trajectories was conducted by Ryan Russell and Cesar Ocampo from the University of Texas at Austin, Texas. They identified 24 Earth-Mars cyclers with periods of two to four synodic periods, and 92 cyclers with periods of five or six synodic periods. They also found hundreds of non-ballistic cyclers, ones which would require some powered maneuvers.
Physics.
Earth orbits the Sun in one Earth year, Mars in 1.881. Neither orbit is perfectly circular; Earth has an orbital eccentricity of 0.0168, and Mars of 0.0934. The two orbits are not quite coplanar either, as the orbit of Mars is inclined by 1.85 degrees to that of Earth. The effect of the gravity of Mars on the cycler orbits is almost negligible, but that of the far more massive Earth needs to be considered. If we ignore these factors, and approximate Mars's orbital period as 1.875 Earth years, then 15 Earth years is 8 Martian years. In the diagram above, a spacecraft in an Aldrin cycler orbit that starts from Earth at point E1 will encounter Mars at M1. When it gets back to E1 just over two Earth years later, Earth will no longer be there, but it will encounter Earth again at E2, which is formula_0, 1⁄7 of an Earth orbit, further round.
The shape of the cycler orbit can be obtained from the conic equation:
formula_1
where formula_2 is 1 astronomical unit, formula_3 is the semi-major axis, formula_4 is the orbital eccentricity and formula_5 (half of formula_6). We can obtain formula_3 by solving Lambert's problem with formula_0 as the initial and final transfer angle. This gives:
formula_7
Substituting these values into the conic equation and solving the resulting quadratic in formula_4 gives:
formula_8
with an orbital period of 2.02 years.
The angle at which the spacecraft flies past Earth, formula_9, is given by:
formula_10
Substituting the values given and derived above gives a value for formula_9 of formula_11. We can calculate the gravity assist from Earth:
formula_12
where formula_13 is the heliocentric flyby velocity. This can be calculated from:
formula_14
where V_E is the velocity of Earth, which is 29.8 km/s. Substituting gives us V = 34.9 km/s, and ΔV = 8.73 km/s.
The excess speed is given by:
formula_15
Which gives a value for the excess speed V∞ of 6.54 km/s. The turn angle formula_16 can be calculated from:
formula_17
Which gives formula_18, meaning that we have an formula_19 turn. The radius of closest approach to Earth will be given by:
formula_20
where μ_E is the gravitational constant of the Earth and r_p is the radius of closest approach. Substituting the values gives r_p ≈ 4,600 km, which is bad because the radius of the Earth is 6,371 km. A correction would therefore be required to comfortably avoid the planet.
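The chain of calculations above can be reproduced numerically. In the sketch below, the Lambert-solution value a = 1.60 AU and the Earth orbital speed of 29.8 km/s are taken from the text, while the Earth gravitational parameter of about 398,600 km³/s² is a standard value assumed for the illustration.
```python
import math

a     = 1.60                # semi-major axis from the Lambert solution, AU
theta = math.radians(25.7)  # half of the 51.4 degree Earth-to-Earth angle
V_E   = 29.8                # orbital velocity of Earth, km/s
mu_E  = 398_600.0           # gravitational parameter of Earth, km^3/s^2 (assumed standard value)

# Eccentricity from the conic equation with r = 1 AU:  a*e^2 + e*cos(theta) + (1 - a) = 0
e = (-math.cos(theta) + math.sqrt(math.cos(theta)**2 - 4 * a * (1 - a))) / (2 * a)
T = a ** 1.5                # orbital period in years, from Kepler's third law

gamma = math.atan(e * math.sin(theta) / (a * (1 - e**2)))         # flight-path angle at Earth
V     = V_E * math.sqrt(2 - 1 / a)                                # heliocentric speed at 1 AU
dV    = 2 * V * math.sin(gamma)                                   # heliocentric velocity change
V_inf = math.sqrt(V**2 + V_E**2 - 2 * V * V_E * math.cos(gamma))  # hyperbolic excess speed
delta = math.asin(dV / (2 * V_inf))                               # half of the flyby turn angle
r_p   = (1 / math.sin(delta) - 1) * mu_E / V_inf**2               # radius of closest approach, km

print(f"e = {e:.3f}, period = {T:.2f} yr, gamma = {math.degrees(gamma):.2f} deg")
print(f"V = {V:.1f} km/s, delta-V = {dV:.2f} km/s, V_inf = {V_inf:.2f} km/s")
print(f"turn = {2 * math.degrees(delta):.1f} deg, r_p = {r_p:.0f} km (Earth's radius is about 6371 km)")
```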
Proposed use.
Aldrin proposed a pair of Mars cycler vehicles providing regular transport between Earth and Mars. While astronauts can tolerate traveling to the Moon in relatively cramped spacecraft for a few days, a mission to Mars, lasting several months, would require much more habitable accommodations for the much longer journey: astronauts would need a facility with ample living space, life support, and heavy radiation shielding. A 1999 NASA study estimated that a mission to Mars would require lifting about into space, of which was propellant.
Aldrin proposed that the costs of Mars missions could be greatly reduced by use of large space stations in cyclic orbits called "castles". Once established in their orbits, they would make regular trips between Earth and Mars without requiring any propellant. Other than consumables, cargo would therefore only have to be launched once. Two "castles" would be used, an outbound one on an Aldrin "cycler" with a fast transfer to Mars and long trip back, and an inbound one with fast trip to Earth and long return to Mars, which Aldrin called "up and down escalators".
The astronauts would meet up with the cycler in Earth orbit and later Mars orbit in specialised craft called "taxis". One cycler would travel an outbound route from Earth to Mars in about five months. Another Mars cycler in a complementary trajectory would travel from Mars to Earth, also in about five months. Taxi and cargo vehicles would attach to the cycler at one planet and detach upon reaching the other. The cycler concept would therefore provide for routine, safe, and economical transport between Earth and Mars.
A significant drawback of the "cycler" concept was that the Aldrin cycler flies by both planets at high speed. A taxi would need to accelerate to around Earth, and near Mars. To get around this, Aldrin proposed what he called a "semi-cycler", in which the "castle" would slow down around Mars, orbiting it, and later resume the "cycler" orbit. This would require fuel to execute the braking and re-cycling maneuvers.
The castles could be inserted into cycler orbits with considerable savings in fuel by performing a series of low thrust maneuvers: The castle would be placed into an interim orbit upon launch, and then use an Earth-swing-by maneuver to boost it into the final cycler orbit. Assuming the use of conventional fuels, it is possible to estimate the fuel required to establish a cycler orbit. In the case of the Aldrin cycler, use of a gravity assist reduces the fuel requirement by about , or 15 percent. Other cyclers showed less impressive improvement, due to the shape of their orbits, and when they encounter the Earth. In the case of the VISIT-1 cycler, the benefit would be around , less than one percent, which would hardly justify the additional three years required to establish the orbit.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "51.4^\\circ"
},
{
"math_id": 1,
"text": "r = a\\frac{1-\\epsilon^2}{1+\\epsilon\\cos\\theta}"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "\\epsilon"
},
{
"math_id": 5,
"text": "\\theta=-25.7^\\circ"
},
{
"math_id": 6,
"text": "-51.4^\\circ"
},
{
"math_id": 7,
"text": "a = 1.60"
},
{
"math_id": 8,
"text": "\\epsilon = 0.393"
},
{
"math_id": 9,
"text": "\\gamma"
},
{
"math_id": 10,
"text": "\\tan \\gamma = \\frac{\\epsilon r\\sin\\theta}{a(1-\\epsilon^2)}"
},
{
"math_id": 11,
"text": "7.18^\\circ"
},
{
"math_id": 12,
"text": "\\Delta V = 2 V \\sin \\gamma"
},
{
"math_id": 13,
"text": "V"
},
{
"math_id": 14,
"text": "V = V_E\\sqrt{2-\\frac{r}{a}}"
},
{
"math_id": 15,
"text": "V_\\infty = \\sqrt{V^2 + V_E^2 -2 V V_E \\cos \\gamma}"
},
{
"math_id": 16,
"text": "\\delta"
},
{
"math_id": 17,
"text": "\\Delta V = 2 V_\\infty \\sin \\delta"
},
{
"math_id": 18,
"text": "\\delta = 41.9^\\circ"
},
{
"math_id": 19,
"text": "83.8^\\circ"
},
{
"math_id": 20,
"text": "\\sin \\delta = \\frac{1}{1+\\frac{r_pV_\\infty^2}{\\mu_E}}"
}
] | https://en.wikipedia.org/wiki?curid=9432976 |
943321 | Associator | In abstract algebra, the term associator is used in different ways as a measure of the non-associativity of an algebraic structure. Associators are commonly studied as triple systems.
Ring theory.
For a non-associative ring or algebra "R", the associator is the multilinear map formula_0 given by
formula_1
Just as the commutator
formula_2
measures the degree of non-commutativity, the associator measures the degree of non-associativity of "R".
For an associative ring or algebra the associator is identically zero.
The associator in any ring obeys the identity
formula_3
The associator is alternating precisely when "R" is an alternative ring.
The associator is symmetric in its two rightmost arguments when "R" is a pre-Lie algebra.
The nucleus is the set of elements that associate with all others: that is, the "n" in "R" such that
formula_4
The nucleus is an associative subring of "R".
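As a concrete illustration, the octonions form an alternative (and hence non-associative) ring in which the associator of basis elements can be computed directly. The sketch below builds them by the Cayley–Dickson construction; the nested-pair representation and the particular sign convention are choices made for this example, and other conventions give isomorphic algebras.
```python
# Octonions via the Cayley-Dickson construction, represented as nested pairs of floats.
# Convention assumed here: (a, b)(c, d) = (a*c - conj(d)*b, d*a + b*conj(c)) and
# conj((a, b)) = (conj(a), -b); other sign conventions give isomorphic algebras.

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def conj(x):
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def associator(x, y, z):
    """[x, y, z] = (xy)z - x(yz)."""
    return add(mul(mul(x, y), z), neg(mul(x, mul(y, z))))

def basis(i, n=8):
    """The i-th standard basis element as a nested pair of floats."""
    v = [0.0] * n
    v[i] = 1.0
    def nest(w):
        return w[0] if len(w) == 1 else (nest(w[:len(w) // 2]), nest(w[len(w) // 2:]))
    return nest(v)

e = [basis(i) for i in range(8)]
print(associator(e[1], e[2], e[4]))  # nonzero (proportional to e_7): octonions are not associative
print(associator(e[1], e[1], e[4]))  # zero: the associator is alternating
```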
Quasigroup theory.
A quasigroup "Q" is a set with a binary operation formula_5 such that for each "a", "b" in "Q",
the equations formula_6 and formula_7 have unique solutions "x", "y" in "Q". In a quasigroup "Q", the associator is the map formula_8 defined by the equation
formula_9
for all "a", "b", "c" in "Q". As with its ring theory analog, the quasigroup associator is a measure of nonassociativity of "Q".
Higher-dimensional algebra.
In higher-dimensional algebra, where there may be non-identity morphisms between algebraic expressions, an associator is an isomorphism
formula_10
Category theory.
In category theory, the associator expresses the associative properties of the internal product functor in monoidal categories. | [
{
"math_id": 0,
"text": "[\\cdot,\\cdot,\\cdot] : R \\times R \\times R \\to R"
},
{
"math_id": 1,
"text": "[x,y,z] = (xy)z - x(yz)."
},
{
"math_id": 2,
"text": "[x, y] = xy - yx"
},
{
"math_id": 3,
"text": "w[x,y,z] + [w,x,y]z = [wx,y,z] - [w,xy,z] + [w,x,yz]."
},
{
"math_id": 4,
"text": "[n,R,R] = [R,n,R] = [R,R,n] = \\{0\\} \\ ."
},
{
"math_id": 5,
"text": "\\cdot : Q \\times Q \\to Q"
},
{
"math_id": 6,
"text": "a \\cdot x = b"
},
{
"math_id": 7,
"text": "y \\cdot a = b"
},
{
"math_id": 8,
"text": "(\\cdot,\\cdot,\\cdot) : Q \\times Q \\times Q \\to Q"
},
{
"math_id": 9,
"text": "(a\\cdot b)\\cdot c = (a\\cdot (b\\cdot c))\\cdot (a,b,c)"
},
{
"math_id": 10,
"text": " a_{x,y,z} : (xy)z \\mapsto x(yz)."
}
] | https://en.wikipedia.org/wiki?curid=943321 |
943382 | Tsirelson's bound | Theoretical upper limit to non-local correlations in quantum mechanics
A Tsirelson bound is an upper limit to quantum mechanical correlations between distant events. Given that quantum mechanics violates Bell inequalities (i.e., it cannot be described by a local hidden-variable theory), a natural question to ask is how large can the violation be. The answer is precisely the Tsirelson bound for the particular Bell inequality in question. In general, this bound is lower than the bound that would be obtained if more general theories, only constrained by "no-signalling" (i.e., that they do not permit communication faster than light), were considered, and much research has been dedicated to the question of why this is the case.
The Tsirelson bounds are named after Boris S. Tsirelson (or Cirel'son, in a different transliteration), the author of the article in which the first one was derived.
Bound for the CHSH inequality.
The first Tsirelson bound was derived as an upper bound on the correlations measured in the CHSH inequality. It states that if we have four (Hermitian) dichotomic observables formula_0, formula_1, formula_2, formula_3 (i.e., two observables for Alice and two for Bob) with outcomes formula_4 such that formula_5 for all formula_6, then
formula_7
For comparison, in the classical case (or local realistic case) the upper bound is 2, whereas if any arbitrary assignment of formula_4 is allowed, it is 4. The Tsirelson bound is attained already if Alice and Bob each make measurements on a qubit, the simplest non-trivial quantum system.
Several proofs of this bound exist, but perhaps the most enlightening one is based on the Khalfin–Tsirelson–Landau identity. If we define an observable
formula_8
and formula_9, i.e., if the observables' outcomes are formula_4, then
formula_10
If formula_11 or formula_12, which can be regarded as the classical case, it already follows that formula_13. In the quantum case, we need only notice that formula_14, and the Tsirelson bound formula_15 follows.
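The 2√2 bound can be checked numerically against the standard qubit realisation. In the sketch below, the Pauli-based observables and the maximally entangled state are the usual textbook choice, assumed here for illustration; the CHSH expectation evaluates to 2√2 ≈ 2.828, while a brute-force search over deterministic ±1 assignments never exceeds the classical value of 2.
```python
import numpy as np
from itertools import product

# Textbook qubit realisation of the CHSH expression.
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

A = [Z, X]                                        # Alice's observables A0, A1
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]  # Bob's observables B0, B1
phi = np.array([1., 0., 0., 1.]) / np.sqrt(2)     # maximally entangled state (|00> + |11>)/sqrt(2)

def corr(a, b):
    return phi @ np.kron(a, b) @ phi              # <phi| a (x) b |phi>

chsh = corr(A[0], B[0]) + corr(A[0], B[1]) + corr(A[1], B[0]) - corr(A[1], B[1])
print(f"quantum CHSH value = {chsh:.6f}  (2*sqrt(2) = {2 * np.sqrt(2):.6f})")

# Local deterministic strategies assign fixed outcomes a0, a1, b0, b1 in {-1, +1}.
local_max = max(a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
                for a0, a1, b0, b1 in product([-1, 1], repeat=4))
print(f"classical (local) maximum = {local_max}")  # 2
```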
Other Bell inequalities.
Tsirelson also showed that for any bipartite full-correlation Bell inequality with "m" inputs for Alice and "n" inputs for Bob, the ratio between the Tsirelson bound and the local bound is at most
formula_16
where
formula_17
and formula_18 is the Grothendieck constant of order "d". Note that since formula_19, this bound implies the above result about the CHSH inequality.
In general, obtaining a Tsirelson bound for a given Bell inequality is a hard problem that has to be solved on a case-by-case basis. It is not even known to be decidable. The best known computational method for upperbounding it is a convergent hierarchy of semidefinite programs, the NPA hierarchy, that in general does not halt. The exact values are known for a few more Bell inequalities:
For the Braunstein–Caves inequalities we have that
formula_20
For the WWŻB inequalities the Tsirelson bound is
formula_21
For the formula_22 inequality the Tsirelson bound is not known exactly, but concrete realisations give a lower bound of , and the NPA hierarchy gives an upper bound of . It is conjectured that only infinite-dimensional quantum states can reach the Tsirelson bound.
Derivation from physical principles.
Significant research has been dedicated to finding a physical principle that explains why quantum correlations go only up to the Tsirelson bound and nothing more. Three such principles have been found: no-advantage for non-local computation, information causality and macroscopic locality. That is to say, if one could achieve a CHSH correlation exceeding Tsirelson's bound, all such principles would be violated.
Tsirelson's bound also follows if the Bell experiment admits a strongly positive quantal measure.
Tsirelson's problem.
There are two different ways of defining the Tsirelson bound of a Bell expression. One demands that the measurements have a tensor product structure, the other only that they commute. Tsirelson's problem is the question of whether these two definitions are equivalent. More formally, let
formula_23
be a Bell expression, where formula_24 is the probability of obtaining outcomes formula_25 with the settings formula_26. The tensor product Tsirelson bound is then the supremum of the value attained in this Bell expression by making measurements formula_27 and formula_28 on a quantum state formula_29:
formula_30
The commuting Tsirelson bound is the supremum of the value attained in this Bell expression by making measurements formula_31 and formula_32 such that formula_33 on a quantum state formula_34:
formula_35
Since tensor product algebras in particular commute, formula_36. In finite dimensions commuting algebras are always isomorphic to (direct sums of) tensor product algebras, so only for infinite dimensions it is possible that formula_37. Tsirelson's problem is the question of whether for all Bell expressions formula_38.
This question was first considered by Boris Tsirelson in 1993, where he asserted without proof that formula_38. Upon being asked for a proof by Antonio Acín in 2006, he realized that the one he had in mind didn't work, and issued the question as an open problem. Together with Miguel Navascués and Stefano Pironio, Antonio Acín had developed a hierarchy of semidefinite programs, the NPA hierarchy, that converged to the commuting Tsirelson bound formula_39 from above, and wanted to know whether it also converged to the tensor product Tsirelson bound formula_40, the most physically relevant one.
Since one can produce a converging sequence of approximations to formula_40 from below by considering finite-dimensional states and observables, if formula_38, then this procedure can be combined with the NPA hierarchy to produce a halting algorithm to compute the Tsirelson bound, making it a computable number (note that in isolation neither procedure halts in general). Conversely, if formula_40 is not computable, then formula_37. In January 2020, Ji, Natarajan, Vidick, Wright, and Yuen claimed to have proven that formula_40 is not computable, thus solving Tsirelson's problem in the negative; Tsirelson's problem has been shown to be equivalent to Connes' embedding problem, so the same proof also implies that Connes' embedding problem has a negative answer. | [
{
"math_id": 0,
"text": "A_0"
},
{
"math_id": 1,
"text": "A_1"
},
{
"math_id": 2,
"text": "B_0"
},
{
"math_id": 3,
"text": "B_1"
},
{
"math_id": 4,
"text": "+1, -1"
},
{
"math_id": 5,
"text": "[A_i, B_j] = 0"
},
{
"math_id": 6,
"text": "i, j"
},
{
"math_id": 7,
"text": " \\langle A_0 B_0 \\rangle + \\langle A_0 B_1 \\rangle + \\langle A_1 B_0 \\rangle - \\langle A_1 B_1 \\rangle \\le 2\\sqrt{2}."
},
{
"math_id": 8,
"text": " \\mathcal{B} = A_0 B_0 + A_0 B_1 + A_1 B_0 - A_1 B_1, "
},
{
"math_id": 9,
"text": "A_i^2 = B_j^2 = \\mathbb{I}"
},
{
"math_id": 10,
"text": " \\mathcal{B}^2 = 4 \\mathbb{I} - [A_0, A_1] [B_0, B_1]. "
},
{
"math_id": 11,
"text": "[A_0, A_1] = 0"
},
{
"math_id": 12,
"text": "[B_0, B_1] = 0"
},
{
"math_id": 13,
"text": "\\langle \\mathcal{B} \\rangle \\le 2"
},
{
"math_id": 14,
"text": "\\big\\|[A_0, A_1]\\big\\| \\le 2 \\|A_0\\| \\|A_1\\| \\le 2"
},
{
"math_id": 15,
"text": "\\langle \\mathcal{B} \\rangle \\le 2\\sqrt{2}"
},
{
"math_id": 16,
"text": "K_G^{\\mathbb R}(\\lfloor r\\rfloor),"
},
{
"math_id": 17,
"text": "r = \\min \\left\\{m,n,-\\frac12 + \\sqrt{\\frac14 + 2(m+n)}\\right\\},"
},
{
"math_id": 18,
"text": "K_G^{\\mathbb R}(d)"
},
{
"math_id": 19,
"text": "K_G^{\\mathbb R}(2) = \\sqrt2"
},
{
"math_id": 20,
"text": " \\langle \\text{BC}_n \\rangle \\le n \\cos\\left(\\frac{\\pi}{n}\\right). "
},
{
"math_id": 21,
"text": " \\langle \\text{WWZB}_n \\rangle \\le 2^{(n-1)/2}. "
},
{
"math_id": 22,
"text": "I_{3322}"
},
{
"math_id": 23,
"text": " B = \\sum_{abxy} \\mu_{abxy} p(ab|xy) "
},
{
"math_id": 24,
"text": "p(ab|xy)"
},
{
"math_id": 25,
"text": "a, b"
},
{
"math_id": 26,
"text": "x, y"
},
{
"math_id": 27,
"text": "A^a_x : \\mathcal{H}_A \\to \\mathcal{H}_A"
},
{
"math_id": 28,
"text": "B^b_y : \\mathcal{H}_B \\to \\mathcal{H}_B"
},
{
"math_id": 29,
"text": "|\\psi\\rangle \\in \\mathcal{H}_A \\otimes \\mathcal{H}_B"
},
{
"math_id": 30,
"text": " T_t = \\sup_{|\\psi\\rangle, A^a_x, B^b_y} \\sum_{abxy} \\mu_{abxy} \\langle \\psi | A^a_x \\otimes B^b_y |\\psi\\rangle."
},
{
"math_id": 31,
"text": "A^a_x : \\mathcal{H} \\to \\mathcal{H}"
},
{
"math_id": 32,
"text": "B^b_y : \\mathcal{H} \\to \\mathcal{H}"
},
{
"math_id": 33,
"text": "\\forall a, b, x, y; [A^a_x, B^b_y] = 0"
},
{
"math_id": 34,
"text": "|\\psi\\rangle \\in \\mathcal{H}"
},
{
"math_id": 35,
"text": " T_c = \\sup_{|\\psi\\rangle, A^a_x, B^b_y} \\sum_{abxy} \\mu_{abxy} \\langle \\psi | A^a_x B^b_y |\\psi\\rangle."
},
{
"math_id": 36,
"text": "T_t \\le T_c"
},
{
"math_id": 37,
"text": "T_t \\neq T_c"
},
{
"math_id": 38,
"text": "T_t = T_c"
},
{
"math_id": 39,
"text": "T_c"
},
{
"math_id": 40,
"text": "T_t"
}
] | https://en.wikipedia.org/wiki?curid=943382 |
9435061 | Delta-cadinene synthase | The enzyme (+)-δ-cadinene synthase (EC 4.2.3.13) catalyzes the chemical reaction
(2"E",6"E")-farnesyl diphosphate formula_0 (+)-δ-cadinene + diphosphate
This enzyme belongs to the family of lyases, specifically those carbon-oxygen lyases acting on phosphates. The systematic name of this enzyme class is (2"E",6"E")-farnesyl-diphosphate diphosphate-lyase (cyclizing, (+)-δ-cadinene-forming). This enzyme participates in terpenoid biosynthesis. It employs one cofactor, magnesium.
δ-Cadinene synthase, a sesquiterpene cyclase, is an enzyme expressed in plants that catalyzes a cyclization reaction in terpenoid biosynthesis. The enzyme cyclizes farnesyl diphosphate to δ-cadinene and releases pyrophosphate.
δ-Cadinene synthase catalyzes one of the key steps in the synthesis of gossypol, a toxic terpenoid produced in cotton seeds. Recently, cotton plants that stably underexpress the enzyme in seeds have been developed using RNA interference techniques, producing a plant that has been proposed as a rich source of dietary protein for developing countries.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=9435061 |
943637 | Sub-Riemannian manifold | Type of generalization of a Riemannian manifold
In mathematics, a sub-Riemannian manifold is a certain type of generalization of a Riemannian manifold. Roughly speaking, to measure distances in a sub-Riemannian manifold, you are allowed to go only along curves tangent to so-called "horizontal subspaces".
Sub-Riemannian manifolds (and so, "a fortiori", Riemannian manifolds) carry a natural intrinsic metric called the metric of Carnot–Carathéodory. The Hausdorff dimension of such a metric space is always an integer and larger than its topological dimension (unless it is actually a Riemannian manifold).
Sub-Riemannian manifolds often occur in the study of constrained systems in classical mechanics, such as the motion of vehicles on a surface, the motion of robot arms, and the orbital dynamics of satellites. Geometric quantities such as the Berry phase may be understood in the language of sub-Riemannian geometry. The Heisenberg group, important to quantum mechanics, carries a natural sub-Riemannian structure.
Definitions.
By a "distribution" on formula_0 we mean a subbundle of the tangent bundle of formula_0 (see also distribution).
Given a distribution formula_1, a vector field in formula_2 is called "horizontal". A curve formula_3 on formula_0 is called horizontal if formula_4 for any
formula_5.
The distribution formula_2 is called "completely non-integrable" or "bracket generating" if for any formula_6 we have that any tangent vector can be presented as a linear combination of Lie brackets of horizontal fields, i.e. vectors of the form formula_7 where all vector fields formula_8 are horizontal. This requirement is also known as Hörmander's condition.
A sub-Riemannian manifold is a triple formula_9, where formula_0 is a differentiable manifold, formula_10 is a completely non-integrable "horizontal" distribution and formula_11 is a smooth section of positive-definite quadratic forms on formula_10.
Any (connected) sub-Riemannian manifold carries a natural intrinsic metric, called the metric of Carnot–Carathéodory, defined as
formula_12
where the infimum is taken over all "horizontal curves" formula_13 such that formula_14, formula_15.
Horizontal curves can be taken to be Lipschitz continuous, absolutely continuous, or in the Sobolev space formula_16, producing the same metric in all cases.
The fact that the distance of two points is always finite (i.e. any two points are connected by a horizontal curve) is a consequence of Hörmander's condition known as the Chow–Rashevskii theorem.
Examples.
A position of a car on the plane is determined by three parameters: two coordinates formula_17 and formula_18 for the location and an angle formula_19 which describes the orientation of the car. Therefore, the position of the car can be described by a point in a manifold
formula_20
One can ask, what is the minimal distance one should drive to get from one position to another? This defines a Carnot–Carathéodory metric on the manifold
formula_20
A closely related example of a sub-Riemannian metric can be constructed on a Heisenberg group: Take two elements formula_19 and formula_21 in the corresponding Lie algebra such that
formula_22
spans the entire algebra. The horizontal distribution formula_10 spanned by left shifts of formula_19 and formula_21 is "completely non-integrable". Then choosing any smooth positive quadratic form on formula_10 gives a sub-Riemannian metric on the group.
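To make the Carnot–Carathéodory length concrete, the sketch below uses one common coordinate model of the Heisenberg group on R³, with the horizontal frame X = ∂/∂x − (y/2)∂/∂z, Y = ∂/∂y + (x/2)∂/∂z taken to be orthonormal (this particular frame and quadratic form are an illustrative choice, not dictated by the text above). Any planar curve lifts uniquely to a horizontal curve, and its sub-Riemannian length equals the Euclidean length of its planar projection.
```python
import numpy as np

def horizontal_length(x, y, t):
    """Lift the planar curve (x(t), y(t)) to a horizontal curve of the Heisenberg
    group (frame X = d/dx - y/2 d/dz, Y = d/dy + x/2 d/dz, taken orthonormal)
    and return the z-component of the lift and the sub-Riemannian length."""
    dx, dy, dt = np.gradient(x, t), np.gradient(y, t), np.diff(t)
    # Horizontality forces z' = (x y' - y x') / 2; integrate it (trapezoid rule).
    zdot = 0.5 * (x * dy - y * dx)
    z = np.concatenate(([0.0], np.cumsum(0.5 * (zdot[1:] + zdot[:-1]) * dt)))
    # With X, Y orthonormal, the length is the Euclidean length of the projection.
    speed = np.sqrt(dx**2 + dy**2)
    return z, np.sum(0.5 * (speed[1:] + speed[:-1]) * dt)

# One full unit circle: length ~ 2*pi, and the lift gains height ~ pi (the
# enclosed area).  Closed planar loops do not close up in z, reflecting the
# complete non-integrability of the horizontal distribution.
t = np.linspace(0.0, 2 * np.pi, 2001)
z, L = horizontal_length(np.cos(t), np.sin(t), t)
print(L, z[-1])
```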
Properties.
For every sub-Riemannian manifold, there exists a Hamiltonian, called the sub-Riemannian Hamiltonian, constructed out of the metric for the manifold. Conversely, every such quadratic Hamiltonian induces a sub-Riemannian manifold.
Solutions of the corresponding Hamilton–Jacobi equations for the sub-Riemannian Hamiltonian are called geodesics, and generalize Riemannian geodesics. | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "H(M)\\subset T(M)"
},
{
"math_id": 2,
"text": "H(M)"
},
{
"math_id": 3,
"text": "\\gamma"
},
{
"math_id": 4,
"text": "\\dot\\gamma(t)\\in H_{\\gamma(t)}(M)"
},
{
"math_id": 5,
"text": "t"
},
{
"math_id": 6,
"text": "x\\in M"
},
{
"math_id": 7,
"text": "A(x),\\ [A,B](x),\\ [A,[B,C]](x),\\ [A,[B,[C,D]]](x),\\dotsc\\in T_x(M)"
},
{
"math_id": 8,
"text": "A,B,C,D, \\dots"
},
{
"math_id": 9,
"text": "(M, H, g)"
},
{
"math_id": 10,
"text": "H"
},
{
"math_id": 11,
"text": "g"
},
{
"math_id": 12,
"text": "d(x, y) = \\inf\\int_0^1 \\sqrt{g(\\dot\\gamma(t),\\dot\\gamma(t))} \\, dt,"
},
{
"math_id": 13,
"text": "\\gamma: [0, 1] \\to M"
},
{
"math_id": 14,
"text": "\\gamma(0)=x"
},
{
"math_id": 15,
"text": "\\gamma(1)=y"
},
{
"math_id": 16,
"text": " H^1([0,1],M) "
},
{
"math_id": 17,
"text": "x"
},
{
"math_id": 18,
"text": "y"
},
{
"math_id": 19,
"text": "\\alpha"
},
{
"math_id": 20,
"text": "\\mathbb R^2\\times S^1."
},
{
"math_id": 21,
"text": "\\beta"
},
{
"math_id": 22,
"text": "\\{ \\alpha,\\beta,[\\alpha,\\beta]\\}"
}
] | https://en.wikipedia.org/wiki?curid=943637 |
943917 | Laguerre polynomials | Sequence of differential equation solutions
In mathematics, the Laguerre polynomials, named after Edmond Laguerre (1834–1886), are nontrivial solutions of Laguerre's differential equation:
formula_0
which is a second-order linear differential equation. This equation has nonsingular solutions only if n is a non-negative integer.
Sometimes the name Laguerre polynomials is used for solutions of
formula_1
where n is still a non-negative integer.
Then they are also named generalized Laguerre polynomials, as will be done here (alternatively associated Laguerre polynomials or, rarely, Sonine polynomials, after their inventor Nikolay Yakovlevich Sonin).
More generally, a Laguerre function is a solution when n is not necessarily a non-negative integer.
The Laguerre polynomials are also used for Gauss–Laguerre quadrature to numerically compute integrals of the form
formula_2
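As an illustrative sketch (assuming NumPy's numpy.polynomial.laguerre.laggauss routine for the nodes and weights), a modest number of quadrature points already handles such integrals:
```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

# n-point Gauss-Laguerre rule: integral_0^inf f(x) e^{-x} dx ~ sum_i w_i f(x_i);
# it is exact whenever f is a polynomial of degree at most 2n - 1.
x, w = laggauss(20)

print(np.sum(w * x**3))        # = 3! = 6 up to round-off (polynomial integrand)
print(np.sum(w * np.sin(x)))   # approximates the exact value 1/2
```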
These polynomials, usually denoted "L"0, "L"1, ..., are a polynomial sequence which may be defined by the Rodrigues formula,
formula_3
reducing to the closed form of a following section.
They are orthogonal polynomials with respect to an inner product
formula_4
The rook polynomials in combinatorics are more or less the same as Laguerre polynomials, up to elementary changes of variables. Further see the Tricomi–Carlitz polynomials.
The Laguerre polynomials arise in quantum mechanics, in the radial part of the solution of the Schrödinger equation for a one-electron atom. They also describe the static Wigner functions of oscillator systems in quantum mechanics in phase space. They further enter in the quantum mechanics of the Morse potential and of the .
Physicists sometimes use a definition for the Laguerre polynomials that is larger by a factor of "n"! than the definition used here. (Likewise, some physicists may use somewhat different definitions of the so-called associated Laguerre polynomials.)
The first few polynomials.
These are the first few Laguerre polynomials:
L0(x) = 1, L1(x) = 1 − x, L2(x) = (x^2 − 4x + 2)/2, L3(x) = (−x^3 + 9x^2 − 18x + 6)/6.
Recursive definition, closed form, and generating function.
One can also define the Laguerre polynomials recursively, defining the first two polynomials as
formula_5
formula_6
and then using the following recurrence relation for any "k" ≥ 1:
formula_7
Furthermore,
formula_8
In the solution of some boundary value problems, the following characteristic values can be useful:
formula_9
The closed form is
formula_10
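As an illustration (not part of the article), the following sketch evaluates the polynomials both through the three-term recurrence above and through this closed-form sum, so the two definitions can be compared numerically.
```python
from math import comb, factorial

def laguerre_recurrence(n, x):
    """L_n(x) from the recurrence (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}."""
    if n == 0:
        return 1.0
    prev, curr = 1.0, 1.0 - x            # L_0 and L_1
    for k in range(1, n):
        prev, curr = curr, ((2 * k + 1 - x) * curr - k * prev) / (k + 1)
    return curr

def laguerre_closed_form(n, x):
    """L_n(x) = sum_k binom(n, k) (-1)^k x^k / k!."""
    return sum(comb(n, k) * (-1) ** k * x**k / factorial(k) for k in range(n + 1))

for n in (0, 1, 2, 5, 10):
    print(n, laguerre_recurrence(n, 0.7), laguerre_closed_form(n, 0.7))
```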
The generating function for them likewise follows,
formula_11
The operator form is
formula_12
Polynomials of negative index can be expressed using the ones with positive index:
formula_13
Generalized Laguerre polynomials.
For arbitrary real α the polynomial solutions of the differential equation
formula_14
are called generalized Laguerre polynomials, or associated Laguerre polynomials.
One can also define the generalized Laguerre polynomials recursively, defining the first two polynomials as
formula_15
formula_16
and then using the following recurrence relation for any "k" ≥ 1:
formula_17
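This recurrence is easy to implement directly; as an illustrative check (assuming SciPy's scipy.special.eval_genlaguerre as an independent reference), the sketch below evaluates a generalized Laguerre polynomial for a non-integer α:
```python
from scipy.special import eval_genlaguerre

def genlaguerre_recurrence(n, alpha, x):
    """L_n^{(a)}(x) from (k+1) L_{k+1}^{(a)} = (2k+1+a-x) L_k^{(a)} - (k+a) L_{k-1}^{(a)}."""
    if n == 0:
        return 1.0
    prev, curr = 1.0, 1.0 + alpha - x    # L_0^{(a)} and L_1^{(a)}
    for k in range(1, n):
        prev, curr = curr, ((2 * k + 1 + alpha - x) * curr - (k + alpha) * prev) / (k + 1)
    return curr

n, alpha, x = 7, 1.5, 2.3
print(genlaguerre_recurrence(n, alpha, x), eval_genlaguerre(n, alpha, x))  # should agree
```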
The simple Laguerre polynomials are the special case "α" = 0 of the generalized Laguerre polynomials:
formula_18
The Rodrigues formula for them is
formula_19
The generating function for them is
formula_20
As a contour integral.
Given the generating function specified above, the polynomials may be expressed in terms of a contour integral
formula_34
where the contour circles the origin once in a counterclockwise direction without enclosing the essential singularity at 1.
Recurrence relations.
The addition formula for Laguerre polynomials:
formula_35
Laguerre's polynomials satisfy the recurrence relations
formula_36
in particular
formula_37
and
formula_38
or
formula_39
moreover
formula_40
They can be used to derive the four 3-point-rules
formula_41
Combined, they give this additional, useful recurrence relation:
formula_42
Since formula_43 is a polynomial of degree formula_44 in formula_45,
there is the partial fraction decomposition
formula_46
The second equality follows by the following identity, valid for integer "i" and n and immediate from the expression of formula_43 in terms of Charlier polynomials:
formula_47
For the third equality apply the fourth and fifth identities of this section.
Derivatives of generalized Laguerre polynomials.
Differentiating the power series representation of a generalized Laguerre polynomial k times leads to
formula_48
This points to a special case ("α" = 0) of the formula above: for integer "α" = "k" the generalized polynomial may be written
formula_49
the shift by k sometimes causing confusion with the usual parenthesis notation for a derivative.
Moreover, the following equation holds:
formula_50
which generalizes with Cauchy's formula to
formula_51
The derivative with respect to the second variable α has the form,
formula_52
The generalized Laguerre polynomials obey the differential equation
formula_53
which may be compared with the equation obeyed by the "k"th derivative of the ordinary Laguerre polynomial,
formula_54
where formula_55 for this equation only.
In Sturm–Liouville form the differential equation is
formula_56
which shows that "L" is an eigenvector for the eigenvalue n.
Orthogonality.
The generalized Laguerre polynomials are orthogonal over [0, ∞) with respect to the measure with weighting function "xα" "e"−"x":
formula_57
which follows from
formula_58
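A quick numerical check of this orthogonality relation (an illustrative sketch assuming SciPy's roots_genlaguerre and eval_genlaguerre; the generalized Gauss–Laguerre rule already carries the weight x^α e^(−x)):
```python
import numpy as np
from math import gamma, factorial
from scipy.special import roots_genlaguerre, eval_genlaguerre

alpha = 0.5
# Generalized Gauss-Laguerre nodes/weights for the weight x^alpha e^{-x} on [0, inf).
x, w = roots_genlaguerre(50, alpha)

def inner(n, m):
    return np.sum(w * eval_genlaguerre(n, alpha, x) * eval_genlaguerre(m, alpha, x))

print(inner(3, 5))                                        # ~ 0   (orthogonality)
print(inner(4, 4), gamma(4 + alpha + 1) / factorial(4))   # both ~ Gamma(n+alpha+1)/n!
```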
If formula_59 denotes the gamma distribution then the orthogonality relation can be written as
formula_60
The associated, symmetric kernel polynomial has the representations (Christoffel–Darboux formula)
formula_61
recursively
formula_62
Moreover,
formula_63
Turán's inequalities can be derived here; they read
formula_64
The following integral is needed in the quantum mechanical treatment of the hydrogen atom,
formula_65
Series expansions.
Let a function have the (formal) series expansion
formula_66
Then
formula_67
The series converges in the associated Hilbert space "L"2[0, ∞) if and only if
formula_68
Further examples of expansions.
Monomials are represented as
formula_69
while binomials have the parametrization
formula_70
This leads directly to
formula_71
for the exponential function. The incomplete gamma function has the representation
formula_72
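The expansion of the exponential function above can be checked numerically; the sketch below (an illustration assuming SciPy's eval_laguerre and NumPy's laggauss) recovers the coefficients for α = 0 and γ = 1, which should come out as 1/2^(i+1):
```python
import numpy as np
from numpy.polynomial.laguerre import laggauss
from scipy.special import eval_laguerre

# Coefficients f_i = int_0^inf L_i(x) e^{-x} f(x) dx for alpha = 0, here with
# f(x) = e^{-x}; the quadrature weight e^{-x} is built into the Gauss-Laguerre rule.
x, w = laggauss(60)
for i in range(6):
    f_i = np.sum(w * eval_laguerre(i, x) * np.exp(-x))
    print(i, f_i, 0.5 ** (i + 1))   # the two values should agree
```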
In quantum mechanics.
In quantum mechanics the Schrödinger equation for the hydrogen-like atom is exactly solvable by separation of variables in spherical coordinates. The radial part of the wave function is a (generalized) Laguerre polynomial.
Vibronic transitions in the Franck-Condon approximation can also be described using Laguerre polynomials.
Multiplication theorems.
Erdélyi gives the following two multiplication theorems
formula_73
Relation to Hermite polynomials.
The generalized Laguerre polynomials are related to the Hermite polynomials:
formula_74
where the "H""n"("x") are the Hermite polynomials based on the weighting function exp(−"x"2), the so-called "physicist's version."
Because of this, the generalized Laguerre polynomials arise in the treatment of the quantum harmonic oscillator.
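These identities are easy to verify numerically; the sketch below assumes SciPy's eval_hermite (physicists' convention) and eval_genlaguerre as references:
```python
from math import factorial
from scipy.special import eval_hermite, eval_genlaguerre

n, x = 4, 1.3
# H_{2n}(x) versus (-1)^n 2^{2n} n! L_n^{(-1/2)}(x^2)
print(eval_hermite(2 * n, x),
      (-1) ** n * 2 ** (2 * n) * factorial(n) * eval_genlaguerre(n, -0.5, x**2))
# H_{2n+1}(x) versus (-1)^n 2^{2n+1} n! x L_n^{(1/2)}(x^2)
print(eval_hermite(2 * n + 1, x),
      (-1) ** n * 2 ** (2 * n + 1) * factorial(n) * x * eval_genlaguerre(n, 0.5, x**2))
```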
Relation to hypergeometric functions.
The Laguerre polynomials may be defined in terms of hypergeometric functions, specifically the confluent hypergeometric functions, as
formula_75
where formula_76 is the Pochhammer symbol (which in this case represents the rising factorial).
Hardy–Hille formula.
The generalized Laguerre polynomials satisfy the Hardy–Hille formula
formula_77
where the series on the left converges for formula_78 and formula_79. Using the identity
formula_80
(see generalized hypergeometric function), this can also be written as
formula_81
This formula is a generalization of the Mehler kernel for Hermite polynomials, which can be recovered from it by using the relations between Laguerre and Hermite polynomials given above.
Physics Convention.
The generalized Laguerre polynomials are used to describe the quantum wavefunction for hydrogen atom orbitals. The convention used throughout this article expresses the generalized Laguerre polynomials as
formula_82
where formula_83 is the confluent hypergeometric function.
In the physics literature, the generalized Laguerre polynomials are instead defined as
formula_84
The physics version is related to the standard version by
formula_85
There is yet another, albeit less frequently used, convention in the physics literature
formula_86
Umbral Calculus Convention.
Generalized Laguerre polynomials are linked to Umbral calculus by being Sheffer sequences for formula_87 when multiplied by formula_88. In the Umbral Calculus convention, the default Laguerre polynomials are defined to be
formula_89
where formula_90 are the signless Lah numbers. formula_91 is a sequence of polynomials of binomial type, i.e. they satisfy
formula_92
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "xy'' + (1 - x)y' + ny = 0,\\ \ny = y(x)"
},
{
"math_id": 1,
"text": "xy'' + (\\alpha + 1 - x)y' + ny = 0~."
},
{
"math_id": 2,
"text": "\\int_0^\\infty f(x) e^{-x} \\, dx."
},
{
"math_id": 3,
"text": "L_n(x)=\\frac{e^x}{n!}\\frac{d^n}{dx^n}\\left(e^{-x} x^n\\right) =\\frac{1}{n!} \\left( \\frac{d}{dx} -1 \\right)^n x^n,"
},
{
"math_id": 4,
"text": "\\langle f,g \\rangle = \\int_0^\\infty f(x) g(x) e^{-x}\\,dx."
},
{
"math_id": 5,
"text": "L_0(x) = 1"
},
{
"math_id": 6,
"text": "L_1(x) = 1 - x"
},
{
"math_id": 7,
"text": "L_{k + 1}(x) = \\frac{(2k + 1 - x)L_k(x) - k L_{k - 1}(x)}{k + 1}. "
},
{
"math_id": 8,
"text": " x L'_n(x) = nL_n (x) - nL_{n-1}(x)."
},
{
"math_id": 9,
"text": "L_{k}(0) = 1, L_{k}'(0) = -k. "
},
{
"math_id": 10,
"text": "L_n(x)=\\sum_{k=0}^n \\binom{n}{k}\\frac{(-1)^k}{k!} x^k ."
},
{
"math_id": 11,
"text": "\\sum_{n=0}^\\infty t^n L_n(x)= \\frac{1}{1-t} e^{-tx/(1-t)}."
},
{
"math_id": 12,
"text": "L_n(x) = \\frac{1}{n!}e^x \\frac{d^n}{dx^n} (x^n e^{-x}) "
},
{
"math_id": 13,
"text": "L_{-n}(x)=e^xL_{n-1}(-x)."
},
{
"math_id": 14,
"text": "x\\,y'' + \\left(\\alpha +1 - x\\right) y' + n\\,y = 0"
},
{
"math_id": 15,
"text": "L^{(\\alpha)}_0(x) = 1"
},
{
"math_id": 16,
"text": "L^{(\\alpha)}_1(x) = 1 + \\alpha - x"
},
{
"math_id": 17,
"text": "L^{(\\alpha)}_{k + 1}(x) = \\frac{(2k + 1 + \\alpha - x)L^{(\\alpha)}_k(x) - (k + \\alpha) L^{(\\alpha)}_{k - 1}(x)}{k + 1}. "
},
{
"math_id": 18,
"text": "L^{(0)}_n(x) = L_n(x)."
},
{
"math_id": 19,
"text": "L_n^{(\\alpha)}(x) = {x^{-\\alpha} e^x \\over n!}{d^n \\over dx^n} \\left(e^{-x} x^{n+\\alpha}\\right)\n= \\frac{x^{-\\alpha}}{n!}\\left( \\frac{d}{dx}-1\\right)^nx^{n+\\alpha}."
},
{
"math_id": 20,
"text": "\\sum_{n=0}^\\infty t^n L^{(\\alpha)}_n(x)= \\frac{1}{(1-t)^{\\alpha+1}} e^{-tx/(1-t)}."
},
{
"math_id": 21,
"text": " L_n^{(\\alpha)}(x) := {n+ \\alpha \\choose n} M(-n,\\alpha+1,x)."
},
{
"math_id": 22,
"text": "{n+ \\alpha \\choose n}"
},
{
"math_id": 23,
"text": "L_n^{(\\alpha)}(x)= \\frac {(-1)^n}{n!} U(-n,\\alpha+1,x)"
},
{
"math_id": 24,
"text": " L_n^{(\\alpha)} (x) = \\sum_{i=0}^n (-1)^i {n+\\alpha \\choose n-i} \\frac{x^i}{i!} "
},
{
"math_id": 25,
"text": "D = \\frac{d}{dx}"
},
{
"math_id": 26,
"text": "M=xD^2+(\\alpha+1)D"
},
{
"math_id": 27,
"text": "\\exp(-tM)x^n=(-1)^nt^nn!L^{(\\alpha)}_n\\left(\\frac{x}{t}\\right)"
},
{
"math_id": 28,
"text": "L_n^{(\\alpha)}(0) = {n+\\alpha\\choose n} = \\frac{\\Gamma(n + \\alpha + 1)}{n!\\, \\Gamma(\\alpha + 1)};"
},
{
"math_id": 29,
"text": "\\left((-1)^{n-i} L_{n-i}^{(\\alpha)}\\right)_{i=0}^n"
},
{
"math_id": 30,
"text": "\\left( 0, n+\\alpha+ (n-1) \\sqrt{n+\\alpha} \\, \\right]."
},
{
"math_id": 31,
"text": "\n\\begin{align}\n& L_n^{(\\alpha)}(x) = \\frac{n^{\\frac{\\alpha}{2}-\\frac{1}{4}}}{\\sqrt{\\pi}} \\frac{e^{\\frac{x}{2}}}{x^{\\frac{\\alpha}{2}+\\frac{1}{4}}} \\sin\\left(2 \\sqrt{nx}- \\frac{\\pi}{2}\\left(\\alpha-\\frac{1}{2} \\right) \\right)+O\\left(n^{\\frac{\\alpha}{2}-\\frac{3}{4}}\\right), \\\\[6pt]\n& L_n^{(\\alpha)}(-x) = \\frac{(n+1)^{\\frac{\\alpha}{2}-\\frac{1}{4}}}{2\\sqrt{\\pi}} \\frac{e^{-x/2}}{x^{\\frac{\\alpha}{2}+\\frac{1}{4}}} e^{2 \\sqrt{x(n+1)}} \\cdot\\left(1+O\\left(\\frac{1}{\\sqrt{n+1}}\\right)\\right),\n\\end{align}\n"
},
{
"math_id": 32,
"text": "\\frac{L_n^{(\\alpha)}\\left(\\frac x n\\right)}{n^\\alpha}\\approx e^{x/ 2n} \\cdot \\frac{J_\\alpha\\left(2\\sqrt x\\right)}{\\sqrt x^\\alpha},"
},
{
"math_id": 33,
"text": "J_\\alpha"
},
{
"math_id": 34,
"text": "L_n^{(\\alpha)}(x)=\\frac{1}{2\\pi i}\\oint_C\\frac{e^{-xt/(1-t)}}{(1-t)^{\\alpha+1}\\,t^{n+1}} \\; dt,"
},
{
"math_id": 35,
"text": "L_n^{(\\alpha+\\beta+1)}(x+y)= \\sum_{i=0}^n L_i^{(\\alpha)}(x) L_{n-i}^{(\\beta)}(y) ."
},
{
"math_id": 36,
"text": "L_n^{(\\alpha)}(x)= \\sum_{i=0}^n L_{n-i}^{(\\alpha+i)}(y)\\frac{(y-x)^i}{i!},"
},
{
"math_id": 37,
"text": "L_n^{(\\alpha+1)}(x)= \\sum_{i=0}^n L_i^{(\\alpha)}(x)"
},
{
"math_id": 38,
"text": "L_n^{(\\alpha)}(x)= \\sum_{i=0}^n {\\alpha-\\beta+n-i-1 \\choose n-i} L_i^{(\\beta)}(x),"
},
{
"math_id": 39,
"text": "L_n^{(\\alpha)}(x)=\\sum_{i=0}^n {\\alpha-\\beta+n \\choose n-i} L_i^{(\\beta- i)}(x);"
},
{
"math_id": 40,
"text": "\\begin{align}\nL_n^{(\\alpha)}(x)- \\sum_{j=0}^{\\Delta-1} {n+\\alpha \\choose n-j} (-1)^j \\frac{x^j}{j!}&= (-1)^\\Delta\\frac{x^\\Delta}{(\\Delta-1)!} \\sum_{i=0}^{n-\\Delta} \\frac{{n+\\alpha \\choose n-\\Delta-i}}{(n-i){n \\choose i}}L_i^{(\\alpha+\\Delta)}(x)\\\\[6pt]\n&=(-1)^\\Delta\\frac{x^\\Delta}{(\\Delta-1)!} \\sum_{i=0}^{n-\\Delta} \\frac{{n+\\alpha-i-1 \\choose n-\\Delta-i}}{(n-i){n \\choose i}}L_i^{(n+\\alpha+\\Delta-i)}(x)\n\\end{align}"
},
{
"math_id": 41,
"text": "\\begin{align}\nL_n^{(\\alpha)}(x) &= L_n^{(\\alpha+1)}(x) - L_{n-1}^{(\\alpha+1)}(x) = \\sum_{j=0}^k {k \\choose j}(-1)^j L_{n-j}^{(\\alpha+k)}(x), \\\\[10pt]\nn L_n^{(\\alpha)}(x) &= (n + \\alpha )L_{n-1}^{(\\alpha)}(x) - x L_{n-1}^{(\\alpha+1)}(x), \\\\[10pt]\n& \\text{or } \\\\\n\\frac{x^k}{k!}L_n^{(\\alpha)}(x) &= \\sum_{i=0}^k (-1)^i {n+i \\choose i} {n+\\alpha \\choose k-i} L_{n+i}^{(\\alpha-k)}(x), \\\\[10pt]\nn L_n^{(\\alpha+1)}(x) &= (n-x) L_{n-1}^{(\\alpha+1)}(x) + (n+\\alpha)L_{n-1}^{(\\alpha)}(x) \\\\[10pt]\nx L_n^{(\\alpha+1)}(x) &= (n+\\alpha)L_{n-1}^{(\\alpha)}(x)-(n-x)L_n^{(\\alpha)}(x);\n\\end{align}"
},
{
"math_id": 42,
"text": "\\begin{align}\nL_n^{(\\alpha)}(x)&= \\left(2+\\frac{\\alpha-1-x}n \\right)L_{n-1}^{(\\alpha)}(x)- \\left(1+\\frac{\\alpha-1}n \\right)L_{n-2}^{(\\alpha)}(x)\\\\[10pt]\n&= \\frac{\\alpha+1-x}n L_{n-1}^{(\\alpha+1)}(x)- \\frac x n L_{n-2}^{(\\alpha+2)}(x)\n\\end{align}"
},
{
"math_id": 43,
"text": "L_n^{(\\alpha)}(x)"
},
{
"math_id": 44,
"text": "n"
},
{
"math_id": 45,
"text": "\\alpha"
},
{
"math_id": 46,
"text": "\\begin{align}\n\\frac{n!\\,L_n^{(\\alpha)}(x)}{(\\alpha+1)_n} \n&= 1- \\sum_{j=1}^n (-1)^j \\frac{j}{\\alpha + j} {n \\choose j}L_n^{(-j)}(x) \\\\\n&= 1- \\sum_{j=1}^n \\frac{x^j}{\\alpha + j}\\,\\,\\frac{L_{n-j}^{(j)}(x)}{(j-1)!} \\\\\n&= 1-x \\sum_{i=1}^n \\frac{L_{n-i}^{(-\\alpha)}(x) L_{i-1}^{(\\alpha+1)}(-x)}{\\alpha +i}.\n\\end{align}"
},
{
"math_id": 47,
"text": " \\frac{(-x)^i}{i!} L_n^{(i-n)}(x) = \\frac{(-x)^n}{n!} L_i^{(n-i)}(x)."
},
{
"math_id": 48,
"text": "\\frac{d^k}{d x^k} L_n^{(\\alpha)} (x) = \\begin{cases}\n(-1)^k L_{n-k}^{(\\alpha+k)}(x) & \\text{if } k\\le n, \\\\\n0 & \\text{otherwise.}\n\\end{cases}"
},
{
"math_id": 49,
"text": "L_n^{(k)}(x)=(-1)^k\\frac{d^kL_{n+k}(x)}{dx^k},"
},
{
"math_id": 50,
"text": "\\frac{1}{k!} \\frac{d^k}{d x^k} x^\\alpha L_n^{(\\alpha)} (x) = {n+\\alpha \\choose k} x^{\\alpha-k} L_n^{(\\alpha-k)}(x),"
},
{
"math_id": 51,
"text": "L_n^{(\\alpha')}(x) = (\\alpha'-\\alpha) {\\alpha'+ n \\choose \\alpha'-\\alpha} \\int_0^x \\frac{t^\\alpha (x-t)^{\\alpha'-\\alpha-1}}{x^{\\alpha'}} L_n^{(\\alpha)}(t)\\,dt."
},
{
"math_id": 52,
"text": "\\frac{d}{d \\alpha}L_n^{(\\alpha)}(x)= \\sum_{i=0}^{n-1} \\frac{L_i^{(\\alpha)}(x)}{n-i}."
},
{
"math_id": 53,
"text": "x L_n^{(\\alpha) \\prime\\prime}(x) + (\\alpha+1-x)L_n^{(\\alpha)\\prime}(x) + n L_n^{(\\alpha)}(x)=0,"
},
{
"math_id": 54,
"text": "x L_n^{[k] \\prime\\prime}(x) + (k+1-x)L_n^{[k]\\prime}(x) + (n-k) L_n^{[k]}(x)=0,"
},
{
"math_id": 55,
"text": "L_n^{[k]}(x)\\equiv\\frac{d^kL_n(x)}{dx^k}"
},
{
"math_id": 56,
"text": "-\\left(x^{\\alpha+1} e^{-x}\\cdot L_n^{(\\alpha)}(x)^\\prime\\right)' = n\\cdot x^\\alpha e^{-x}\\cdot L_n^{(\\alpha)}(x),"
},
{
"math_id": 57,
"text": "\\int_0^\\infty x^\\alpha e^{-x} L_n^{(\\alpha)}(x)L_m^{(\\alpha)}(x)dx=\\frac{\\Gamma(n+\\alpha+1)}{n!} \\delta_{n,m},"
},
{
"math_id": 58,
"text": "\\int_0^\\infty x^{\\alpha'-1} e^{-x} L_n^{(\\alpha)}(x)dx= {\\alpha-\\alpha'+n \\choose n} \\Gamma(\\alpha')."
},
{
"math_id": 59,
"text": "\\Gamma(x,\\alpha+1,1)"
},
{
"math_id": 60,
"text": "\\int_0^{\\infty} L_n^{(\\alpha)}(x)L_m^{(\\alpha)}(x)\\Gamma(x,\\alpha+1,1) dx={n+ \\alpha \\choose n}\\delta_{n,m},"
},
{
"math_id": 61,
"text": "\\begin{align}\nK_n^{(\\alpha)}(x,y) &:= \\frac{1}{\\Gamma(\\alpha+1)} \\sum_{i=0}^n \\frac{L_i^{(\\alpha)}(x) L_i^{(\\alpha)}(y)}{{\\alpha+i \\choose i}}\\\\[4pt]\n& =\\frac{1}{\\Gamma(\\alpha+1)} \\frac{L_n^{(\\alpha)}(x) L_{n+1}^{(\\alpha)}(y) - L_{n+1}^{(\\alpha)}(x) L_n^{(\\alpha)}(y)}{\\frac{x-y}{n+1} {n+\\alpha \\choose n}} \\\\[4pt]\n&= \\frac{1}{\\Gamma(\\alpha+1)}\\sum_{i=0}^n \\frac{x^i}{i!} \\frac{L_{n-i}^{(\\alpha+i)}(x) L_{n-i}^{(\\alpha+i+1)}(y)}{{\\alpha+n \\choose n}{n \\choose i}};\n\\end{align}"
},
{
"math_id": 62,
"text": "K_n^{(\\alpha)}(x,y)=\\frac{y}{\\alpha+1} K_{n-1}^{(\\alpha+1)}(x,y)+ \\frac{1}{\\Gamma(\\alpha+1)} \\frac{L_n^{(\\alpha+1)}(x) L_n^{(\\alpha)}(y)}{{\\alpha+n \\choose n}}."
},
{
"math_id": 63,
"text": "y^\\alpha e^{-y} K_n^{(\\alpha)}(\\cdot, y) \\to \\delta(y- \\cdot)."
},
{
"math_id": 64,
"text": "L_n^{(\\alpha)}(x)^2- L_{n-1}^{(\\alpha)}(x) L_{n+1}^{(\\alpha)}(x)= \\sum_{k=0}^{n-1} \\frac{{\\alpha+n-1\\choose n-k}}{n{n\\choose k}} L_k^{(\\alpha-1)}(x)^2>0."
},
{
"math_id": 65,
"text": "\\int_0^{\\infty}x^{\\alpha+1} e^{-x} \\left[L_n^{(\\alpha)} (x)\\right]^2 dx= \\frac{(n+\\alpha)!}{n!}(2n+\\alpha+1)."
},
{
"math_id": 66,
"text": "f(x)= \\sum_{i=0}^\\infty f_i^{(\\alpha)} L_i^{(\\alpha)}(x)."
},
{
"math_id": 67,
"text": "f_i^{(\\alpha)}=\\int_0^\\infty \\frac{L_i^{(\\alpha)}(x)}{{i+ \\alpha \\choose i}} \\cdot \\frac{x^\\alpha e^{-x}}{\\Gamma(\\alpha+1)} \\cdot f(x) \\,dx ."
},
{
"math_id": 68,
"text": "\\| f \\|_{L^2}^2 := \\int_0^\\infty \\frac{x^\\alpha e^{-x}}{\\Gamma(\\alpha+1)} | f(x)|^2 \\, dx = \\sum_{i=0}^\\infty {i+\\alpha \\choose i} |f_i^{(\\alpha)}|^2 < \\infty. "
},
{
"math_id": 69,
"text": "\\frac{x^n}{n!}= \\sum_{i=0}^n (-1)^i {n+ \\alpha \\choose n-i} L_i^{(\\alpha)}(x),"
},
{
"math_id": 70,
"text": "{n+x \\choose n}= \\sum_{i=0}^n \\frac{\\alpha^i}{i!} L_{n-i}^{(x+i)}(\\alpha)."
},
{
"math_id": 71,
"text": "e^{-\\gamma x}= \\sum_{i=0}^\\infty \\frac{\\gamma^i}{(1+\\gamma)^{i+\\alpha+1}} L_i^{(\\alpha)}(x) \\qquad \\text{convergent iff } \\Re(\\gamma) > -\\tfrac{1}{2}"
},
{
"math_id": 72,
"text": "\\Gamma(\\alpha,x)=x^\\alpha e^{-x} \\sum_{i=0}^\\infty \\frac{L_i^{(\\alpha)}(x)}{1+i} \\qquad \\left(\\Re(\\alpha)>-1 , x > 0\\right)."
},
{
"math_id": 73,
"text": "\\begin{align}\n& t^{n+1+\\alpha} e^{(1-t) z} L_n^{(\\alpha)}(z t)=\\sum_{k=n}^\\infty {k \\choose n}\\left(1-\\frac 1 t\\right)^{k-n} L_k^{(\\alpha)}(z), \\\\[6pt]\n& e^{(1-t)z} L_n^{(\\alpha)}(z t)=\\sum_{k=0}^\\infty \\frac{(1-t)^k z^k}{k!}L_n^{(\\alpha+k)}(z).\n\\end{align}"
},
{
"math_id": 74,
"text": "\\begin{align}\nH_{2n}(x) &= (-1)^n 2^{2n} n! L_n^{(-1/2)} (x^2) \\\\[4pt]\nH_{2n+1}(x) &= (-1)^n 2^{2n+1} n! x L_n^{(1/2)} (x^2)\n\\end{align}"
},
{
"math_id": 75,
"text": "L^{(\\alpha)}_n(x) = {n+\\alpha \\choose n} M(-n,\\alpha+1,x) =\\frac{(\\alpha+1)_n} {n!} \\,_1F_1(-n,\\alpha+1,x)"
},
{
"math_id": 76,
"text": "(a)_n"
},
{
"math_id": 77,
"text": "\\sum_{n=0}^\\infty \\frac{n!\\,\\Gamma\\left(\\alpha + 1\\right)}{\\Gamma\\left(n+\\alpha+1\\right)}L_n^{(\\alpha)}(x)L_n^{(\\alpha)}(y)t^n=\\frac{1}{(1-t)^{\\alpha + 1}}e^{-(x+y)t/(1-t)}\\,_0F_1\\left(;\\alpha + 1;\\frac{xyt}{(1-t)^2}\\right),"
},
{
"math_id": 78,
"text": "\\alpha>-1"
},
{
"math_id": 79,
"text": "|t|<1"
},
{
"math_id": 80,
"text": "\\,_0F_1(;\\alpha + 1;z)=\\,\\Gamma(\\alpha + 1) z^{-\\alpha/2} I_\\alpha\\left(2\\sqrt{z}\\right),"
},
{
"math_id": 81,
"text": "\\sum_{n=0}^\\infty \\frac{n!}{\\Gamma(1+\\alpha+n)}L_n^{(\\alpha)}(x)L_n^{(\\alpha)}(y) t^n = \\frac{1}{(xyt)^{\\alpha/2}(1-t)}e^{-(x+y)t/(1-t)} I_\\alpha \\left(\\frac{2\\sqrt{xyt}}{1-t}\\right)."
},
{
"math_id": 82,
"text": "L_n^{(\\alpha)}(x) = \\frac{\\Gamma(\\alpha + n + 1)}{\\Gamma(\\alpha + 1) n!} \\,_1F_1(-n; \\alpha + 1; x),"
},
{
"math_id": 83,
"text": "\\,_1F_1(a;b;x)"
},
{
"math_id": 84,
"text": "\\bar{L}_n^{(\\alpha)}(x) = \\frac{\\left[\\Gamma(\\alpha + n + 1)\\right]^2}{\\Gamma(\\alpha + 1)n!} \\,_1F_1(-n; \\alpha + 1; x)."
},
{
"math_id": 85,
"text": "\\bar{L}_n^{(\\alpha)}(x) = (n+\\alpha)! L_n^{(\\alpha)}(x)."
},
{
"math_id": 86,
"text": "\\tilde{L}_n^{(\\alpha)}(x) = (-1)^{\\alpha}\\bar{L}_{n-\\alpha}^{(\\alpha)}."
},
{
"math_id": 87,
"text": "D/(D-I)"
},
{
"math_id": 88,
"text": "n!"
},
{
"math_id": 89,
"text": "\\mathcal L_n(x) = n!L_n^{(-1)}(x) = \\sum_{k=0}^n L(n,k) (-x)^k"
},
{
"math_id": 90,
"text": "L(n,k) = \\binom{n-1}{k-1} \\frac{n!}{k!}"
},
{
"math_id": 91,
"text": "(\\mathcal L_n(x))_{n\\in\\N}"
},
{
"math_id": 92,
"text": "\\mathcal L_n(x+y) = \\sum_{k=0}^n \\binom{n}{k} \\mathcal L_k(x) \\mathcal L_{n-k}(y)"
}
] | https://en.wikipedia.org/wiki?curid=943917 |
9441268 | Computational magnetohydrodynamics | Computational magnetohydrodynamics (CMHD) is a rapidly developing branch of magnetohydrodynamics that uses numerical methods and algorithms to solve and analyze problems that involve electrically conducting fluids. Most of the methods used in CMHD are borrowed from the well-established techniques employed in computational fluid dynamics. The complexity mainly arises due to the presence of a magnetic field and its coupling with the fluid. One of the important issues is to numerically maintain the formula_0 (conservation of magnetic flux) condition, from Maxwell's equations, to avoid the presence of unrealistic effects, namely magnetic monopoles, in the solutions.
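As a minimal numerical illustration (a sketch, not a production CMHD scheme), one common way to respect this constraint is to derive the discrete field from a vector potential, so that the discrete divergence vanishes identically — the idea behind constrained-transport discretisations. The 2-D staggered-grid layout below is an assumption made for the example.
```python
import numpy as np

# 2-D staggered grid: potential A_z at cell corners, Bx on x-faces, By on y-faces.
nx, ny = 64, 64
dx, dy = 1.0 / nx, 1.0 / ny
xc, yc = np.linspace(0.0, 1.0, nx + 1), np.linspace(0.0, 1.0, ny + 1)
Xg, Yg = np.meshgrid(xc, yc, indexing="ij")
A = np.sin(2 * np.pi * Xg) * np.cos(2 * np.pi * Yg)   # arbitrary smooth potential

# B = curl(A_z e_z):  Bx = dA/dy on x-faces,  By = -dA/dx on y-faces.
Bx = (A[:, 1:] - A[:, :-1]) / dy        # shape (nx+1, ny)
By = -(A[1:, :] - A[:-1, :]) / dx       # shape (nx, ny+1)

# Discrete divergence at cell centres telescopes to zero by construction.
divB = (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
print(np.max(np.abs(divB)))             # zero up to floating-point round-off
```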
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\nabla \\cdot {\\mathbf B} = 0"
}
] | https://en.wikipedia.org/wiki?curid=9441268 |
944137 | Maghemite | Iron oxide with a spinel ferrite structure
Maghemite (Fe2O3, γ-Fe2O3) is a member of the family of iron oxides. It has the same formula as hematite, but the same spinel ferrite structure as magnetite (Fe3O4) and is also ferrimagnetic. It is sometimes spelled as "maghaemite".
"Maghemite" can be considered as an Fe(II)-deficient magnetite with formula
formula_0
where
formula_1 represents a vacancy, A indicates tetrahedral and B octahedral positioning.
Occurrence.
Maghemite forms by weathering or low-temperature oxidation of spinels containing iron(II) such as magnetite or titanomagnetite. Maghemite can also form through dehydration and transformation of certain iron oxyhydroxide minerals, such as lepidocrocite and ferrihydrite. It occurs as widespread brown or yellow pigment in terrestrial sediments and soils. It is associated with magnetite, ilmenite, anatase, pyrite, marcasite, lepidocrocite and goethite. It is also known to form in areas that have been subjected to bushfires (particularly in the Leonora area of Western Australia), which magnetise iron minerals.
Maghemite was named in 1927 for an occurrence at the Iron Mountain Mine, northwest of Redding, Shasta County, California. The name alludes to its somewhat intermediate character between magnetite and hematite. It can appear blue with a grey shade, white, or brown. It has isometric crystals. Maghemite is formed by the topotactic oxidation of magnetite.
Cation distribution.
There is experimental and theoretical evidence that Fe(III) cations and vacancies tend to be ordered in the octahedral sites, in a way that maximizes the homogeneity of the distribution and therefore minimizes the electrostatic energy of the crystal.
Electronic structure.
Maghemite is a semiconductor with a bandgap of ca. 2 eV, although the precise value of the gap depends on the electron spin.
Applications.
Maghemite exhibits ferrimagnetic ordering with a high Néel temperature (~950 K), which together with its low cost and chemical stability led to its wide application as a magnetic pigment in electronic recording media since the 1940s.
Maghemite nanoparticles are used in biomedicine, because they are biocompatible and non-toxic to humans, while their magnetism allows remote manipulation with external fields.
As pollutant.
It was found in 2022 that high levels of maghemite particles small enough to enter the bloodstream if inhaled, some as small as five nanometres, were present in the London Underground transport system. The presence of the particles indicated that they are suspended for long periods due to poor ventilation, particularly on platforms. The health implications presented by the particles were not investigated.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left(\\ce{Fe^{III}8}\\right)_A\\left[\\ce{Fe^{III}_{40/3}\\square_{8/3}}\\right]_B\\ce{O32}"
},
{
"math_id": 1,
"text": "\\square"
}
] | https://en.wikipedia.org/wiki?curid=944137 |
9442947 | Carleman's condition | In mathematics, particularly in analysis, Carleman's condition gives a sufficient condition for the determinacy of the moment problem. That is, if a measure formula_0 satisfies Carleman's condition, there is no other measure formula_1 having the same moments as formula_2 The condition was discovered by Torsten Carleman in 1922.
Hamburger moment problem.
For the Hamburger moment problem (the moment problem on the whole real line), the theorem states the following:
Let formula_0 be a measure on formula_3 such that all the moments
formula_4
are finite. If
formula_5
then the moment problem for formula_6 is "determinate"; that is, formula_0 is the only measure on formula_3 with formula_6 as its sequence of moments.
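As an illustration (not part of the original text), the even moments of the standard normal distribution are m_{2n} = (2n − 1)!!, so the terms m_{2n}^{−1/(2n)} decay only like a constant times n^{−1/2} and the Carleman sum diverges — consistent with the classical fact that the normal distribution is determined by its moments. A short numerical sketch:
```python
from math import lgamma, log, exp

def log_moment_2n(n):
    """log of m_{2n} = (2n-1)!! = (2n)! / (2^n n!) for the standard normal."""
    return lgamma(2 * n + 1) - n * log(2.0) - lgamma(n + 1)

def carleman_partial_sum(N):
    # Partial sum of sum_n m_{2n}^{-1/(2n)}; each term is exp(-log(m_{2n}) / (2n)).
    return sum(exp(-log_moment_2n(n) / (2 * n)) for n in range(1, N + 1))

for N in (10, 100, 1000, 10000):
    print(N, carleman_partial_sum(N))   # grows without bound, roughly like sqrt(N)
```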
Stieltjes moment problem.
For the Stieltjes moment problem, the sufficient condition for determinacy is
formula_7
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "\\nu"
},
{
"math_id": 2,
"text": "\\mu."
},
{
"math_id": 3,
"text": "\\R"
},
{
"math_id": 4,
"text": "m_n = \\int_{-\\infty}^{+\\infty} x^n \\, d\\mu(x)~, \\quad n = 0,1,2,\\cdots"
},
{
"math_id": 5,
"text": "\\sum_{n=1}^\\infty m_{2n}^{-\\frac{1}{2n}} = + \\infty,"
},
{
"math_id": 6,
"text": "(m_n)"
},
{
"math_id": 7,
"text": "\\sum_{n=1}^\\infty m_{n}^{-\\frac{1}{2n}} = + \\infty."
}
] | https://en.wikipedia.org/wiki?curid=9442947 |