Columns: id (stringlengths 2 to 8), title (stringlengths 1 to 130), text (stringlengths 0 to 252k), formulas (listlengths 1 to 823), url (stringlengths 38 to 44)
1039260
HOMFLY polynomial
Polynomials arising in knot theory In the mathematical field of knot theory, the HOMFLY polynomial or HOMFLYPT polynomial, sometimes called the generalized Jones polynomial, is a 2-variable knot polynomial, i.e. a knot invariant in the form of a polynomial of variables "m" and "l". A central question in the mathematical theory of knots is whether two knot diagrams represent the same knot. One tool used to answer such questions is a knot polynomial, which is computed from a diagram of the knot and can be shown to be an invariant of the knot, i.e. diagrams representing the same knot have the same polynomial. The converse may not be true. The HOMFLY polynomial is one such invariant and it generalizes two polynomials previously discovered, the Alexander polynomial and the Jones polynomial, both of which can be obtained by appropriate substitutions from HOMFLY. The HOMFLY polynomial is also a quantum invariant. The name "HOMFLY" combines the initials of its co-discoverers: Jim Hoste, Adrian Ocneanu, Kenneth Millett, Peter J. Freyd, W. B. R. Lickorish, and David N. Yetter. The addition of "PT" recognizes independent work carried out by Józef H. Przytycki and Paweł Traczyk. Definition. The polynomial is defined using skein relations: formula_0 formula_1 where formula_2 are links formed by crossing and smoothing changes on a local region of a link diagram, as indicated in the figure. The HOMFLY polynomial of a link "L" that is a split union of two links formula_3 and formula_4 is given by formula_5 See the page on skein relation for an example of a computation using such relations. Other HOMFLY skein relations. This polynomial can also be obtained using other skein relations: formula_6 formula_7 formula_8, where # denotes the knot sum; thus the HOMFLY polynomial of a composite knot is the product of the HOMFLY polynomials of its components. formula_9, so the HOMFLY polynomial can often be used to distinguish between two knots of different chirality. However, there exist chiral pairs of knots that have the same HOMFLY polynomial, e.g. the knots 9_42 and 10_71 together with their respective mirror images. Main properties. The Jones polynomial, "V"("t"), and the Alexander polynomial, formula_10 can be computed in terms of the HOMFLY polynomial (the version in formula_11 and formula_12 variables) as follows: formula_13 formula_14 References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "P( \\mathrm{unknot} ) = 1,\\," }, { "math_id": 1, "text": "\\ell P(L_+) + \\ell^{-1}P(L_-) + mP(L_0)=0,\\," }, { "math_id": 2, "text": "L_+, L_-, L_0" }, { "math_id": 3, "text": "L_1" }, { "math_id": 4, "text": "L_2" }, { "math_id": 5, "text": "P(L) = \\frac{-(\\ell+\\ell^{-1})}{m} P(L_1)P(L_2)." }, { "math_id": 6, "text": "\\alpha P(L_+) - \\alpha^{-1}P(L_-) = zP(L_0),\\," }, { "math_id": 7, "text": "xP(L_+) + yP(L_-) + zP(L_0)=0,\\," }, { "math_id": 8, "text": "P(L_1 \\# L_2)=P(L_1)P(L_2),\\," }, { "math_id": 9, "text": "P_K(\\ell,m)=P_{\\text{Mirror Image}(K)}(\\ell^{-1},m),\\," }, { "math_id": 10, "text": "\\Delta(t)\\," }, { "math_id": 11, "text": "\\alpha" }, { "math_id": 12, "text": "z" }, { "math_id": 13, "text": "V(t)=P(\\alpha=t^{-1},z=t^{1/2}-t^{-1/2}),\\," }, { "math_id": 14, "text": "\\Delta(t)=P(\\alpha=1,z=t^{1/2}-t^{-1/2}),\\," } ]
https://en.wikipedia.org/wiki?curid=1039260
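As a small illustration of how the skein relations above pin down values (a standard first computation, not quoted from the article): resolving the single crossing of a one-crossing diagram of the unknot makes both L_+ and L_- unknots, while the smoothing L_0 is the two-component unlink U_2, so

```latex
% Apply \ell P(L_+) + \ell^{-1} P(L_-) + m P(L_0) = 0 to a one-crossing unknot diagram:
\[
  \ell\,P(\mathrm{unknot}) + \ell^{-1}\,P(\mathrm{unknot}) + m\,P(U_2) = 0
  \quad\Longrightarrow\quad
  P(U_2) = \frac{-\bigl(\ell + \ell^{-1}\bigr)}{m},
\]
% which agrees with the split-union formula above, since the two-component unlink
% is the split union of two unknots and P(\mathrm{unknot}) = 1.
```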
10393682
Hemi-icosahedron
Abstract regular polyhedron with 10 triangular faces In geometry, a hemi-icosahedron is an abstract regular polyhedron, containing half the faces of a regular icosahedron. It can be realized as a projective polyhedron (a tessellation of the real projective plane by 10 triangles), which can be visualized by constructing the projective plane as a hemisphere where opposite points along the boundary are connected and dividing the hemisphere into three equal parts. Geometry. It has 10 triangular faces, 15 edges, and 6 vertices. It is also related to the nonconvex uniform polyhedron, the tetrahemihexahedron, which could be topologically identical to the hemi-icosahedron if each of the 3 square faces were divided into two triangles. Graphs. It can be represented symmetrically on faces, and vertices as Schlegel diagrams: The complete graph K6. It has the same vertices and edges as the 5-dimensional 5-simplex which has a complete graph of edges, but only contains half of the (20) faces. From the point of view of graph theory this is an embedding of formula_0 (the complete graph with 6 vertices) on a real projective plane. With this embedding, the dual graph is the Petersen graph --- see hemi-dodecahedron.
[ { "math_id": 0, "text": "K_6" } ]
https://en.wikipedia.org/wiki?curid=10393682
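A quick arithmetic check of the counts above, as a Python sketch; the value of the Euler characteristic of the real projective plane (1, versus 2 for the sphere) is standard background rather than something stated in the article:

```python
from math import comb

# Counts given above for the hemi-icosahedron.
V, E, F = 6, 15, 10

# Its vertices and edges are those of the complete graph K6: C(6, 2) = 15 edges.
assert E == comb(6, 2)

# Euler characteristic V - E + F = 1, the value for the real projective plane
# (a sphere-tiling polyhedron would give 2 instead).
print(V - E + F)  # 1
```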
1039646
Wilshire 5000
Stock market index The Wilshire 5000 Total Market Index, or more simply the Wilshire 5000, is a market-capitalization-weighted index of the market value of all American stocks actively traded in the United States. As of December 31, 2023, the index contained 3,403 components. The index is intended to measure the performance of most publicly traded companies headquartered in the United States, with readily available price data (Bulletin Board/penny stocks and stocks of extremely small companies are excluded). Hence, the index includes a majority of the common stocks and REITs traded primarily through New York Stock Exchange, NASDAQ, or the American Stock Exchange. Limited partnerships and ADRs are not included. It can be tracked by following the ticker ^FTW5000. Versions. There are five versions of the index: The difference between the total return and price versions of the index is that the total return versions accounts for reinvestment of dividends. The difference between the full capitalization, float-adjusted, and equal weight versions is in how the index components are weighted. The full cap index uses the total shares outstanding for each company. The float-adjusted index uses shares adjusted for free float. The equal-weighted index assigns each security in the index the same weight. Calculation. Let: The value of the index is then: formula_1 At present, one index point corresponds to a little more than US$1 billion of market capitalization. The list of issues included in the index is updated monthly to add new listings resulting from corporate spin-offs and initial public offerings, and to remove issues which move to the pink sheets or that have ceased trading for at least 10 consecutive days. Alternatives. The CRSP U.S. Total Market Index (ticker CRSPTM1) is a very similar comprehensive index of U.S. stocks supplied by the Center for Research in Security Prices. It was especially designed for use by index funds. After Dow Jones and Wilshire split up, Dow Jones made their own total stock market index, called the Dow Jones U.S. Total Stock Market Index, similar to the Wilshire 5000. Of the popular indexes, the Wilshire 5000 has been found to be the best index to use as a benchmark for US stock valuations. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\alpha" }, { "math_id": 1, "text": " \\alpha \\sum_{i=1}^{M} N_i P_i " } ]
https://en.wikipedia.org/wiki?curid=1039646
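The "Let:" definitions for the calculation above do not survive in the text, so the following Python sketch spells out one reading of the formula consistent with it: M components, P_i the price of issue i, N_i the share count used for weighting, and alpha a scaling factor. These symbol meanings and all numbers below are assumptions for illustration only.

```python
# Minimal sketch of the index value alpha * sum(N_i * P_i) quoted above.
# Assumed reading of the symbols (the "Let:" list is not preserved in the text):
#   prices[i] -> P_i, price of issue i
#   shares[i] -> N_i, share count used for weighting issue i
#   alpha     -> scaling factor chosen so that one index point corresponds to a
#                convenient amount of market capitalization
def index_value(shares, prices, alpha):
    return alpha * sum(n * p for n, p in zip(shares, prices))

# Hypothetical three-component market worth 4.6 trillion USD in total; with
# alpha = 1e-9 one index point corresponds to about 1 billion USD, as noted above.
print(index_value([1e9, 2e9, 5e8], [100.0, 2000.0, 1000.0], alpha=1e-9))  # 4600.0
```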
10396781
Extravagant number
In number theory, an extravagant number (also known as a wasteful number) is a natural number in a given number base that has fewer digits than the number of digits in its prime factorization in the given number base (including exponents). For example, in base 10, 4 = 2^2, 6 = 2×3, 8 = 2^3, and 9 = 3^2 are extravagant numbers (sequence in the OEIS). There are infinitely many extravagant numbers in every base. Mathematical definition. Let formula_0 be a number base, and let formula_1 be the number of digits in a natural number formula_2 for base formula_3. A natural number formula_2 has the prime factorisation formula_4 where formula_5 is the "p"-adic valuation of formula_2, and formula_2 is an extravagant number in base formula_3 if formula_6 Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "b > 1" }, { "math_id": 1, "text": "K_b(n) = \\lfloor \\log_{b}{n} \\rfloor + 1" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "b" }, { "math_id": 4, "text": "n = \\prod_{\\stackrel{p \\,\\mid\\, n}{p\\text{ prime}}} p^{v_p(n)}" }, { "math_id": 5, "text": "v_p(n)" }, { "math_id": 6, "text": "K_b(n) < \\sum_{{\\stackrel{p \\,\\mid\\, n}{p\\text{ prime}}}} K_b(p) + \\sum_{{\\stackrel{p^2 \\,\\mid\\, n}{p\\text{ prime}}}} K_b(v_p(n))." } ]
https://en.wikipedia.org/wiki?curid=10396781
10396838
Equidigital number
In number theory, an equidigital number is a natural number in a given number base that has the same number of digits as the number of digits in its prime factorization in the given number base, including exponents but excluding exponents equal to 1. For example, in base 10, 1, 2, 3, 5, 7, and 10 (2 × 5) are equidigital numbers (sequence in the OEIS). All prime numbers are equidigital numbers in any base. A number that is either equidigital or frugal is said to be "economical". Mathematical definition. Let formula_0 be the number base, and let formula_1 be the number of digits in a natural number formula_2 for base formula_3. A natural number formula_2 has the prime factorisation formula_4 where formula_5 is the "p"-adic valuation of formula_2, and formula_2 is an equidigital number in base formula_3 if formula_6 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "b > 1" }, { "math_id": 1, "text": "K_b(n) = \\lfloor \\log_{b}{n} \\rfloor + 1" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "b" }, { "math_id": 4, "text": "n = \\prod_{\\stackrel{p \\,\\mid\\, n}{p\\text{ prime}}} p^{v_p(n)}" }, { "math_id": 5, "text": "v_p(n)" }, { "math_id": 6, "text": "K_b(n) = \\sum_{{\\stackrel{p \\,\\mid\\, n}{p\\text{ prime}}}} K_b(p) + \\sum_{{\\stackrel{p^2 \\,\\mid\\, n}{p\\text{ prime}}}} K_b(v_p(n))." } ]
https://en.wikipedia.org/wiki?curid=10396838
10396863
Frugal number
In number theory, a frugal number is a natural number in a given number base that has more digits than the number of digits in its prime factorization in the given number base (including exponents). For example, in base 10, 125 = 5^3, 128 = 2^7, 243 = 3^5, and 256 = 2^8 are frugal numbers (sequence in the OEIS). The first frugal number which is not a prime power is 1029 = 3 × 7^3. In base 2, thirty-two is a frugal number, since 32 = 2^5 is written in base 2 as 100000 = 10^101. The term economical number has been used for a frugal number, but also for a number which is either frugal or equidigital. Mathematical definition. Let formula_0 be a number base, and let formula_1 be the number of digits in a natural number formula_2 for base formula_3. A natural number formula_2 has the prime factorisation formula_4 where formula_5 is the "p"-adic valuation of formula_2, and formula_2 is a frugal number in base formula_3 if formula_6 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "b > 1" }, { "math_id": 1, "text": "K_b(n) = \\lfloor \\log_b{n} \\rfloor + 1" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "b" }, { "math_id": 4, "text": "n = \\prod_{\\stackrel{p \\,\\mid\\, n}{p\\text{ prime}}} p^{v_p(n)}" }, { "math_id": 5, "text": "v_p(n)" }, { "math_id": 6, "text": "K_b(n) > \\sum_{{\\stackrel{p \\,\\mid\\, n}{p\\text{ prime}}}} K_b(p) + \\sum_{{\\stackrel{p^2 \\,\\mid\\, n}{p\\text{ prime}}}} K_b(v_p(n))." } ]
https://en.wikipedia.org/wiki?curid=10396863
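The three digit-counting definitions above (extravagant/wasteful, equidigital, frugal) differ only in how K_b(n) compares with the digit count of the prime factorization. A Python sketch of that comparison for base 10, assuming SymPy is available for the factorization:

```python
from sympy import factorint

def num_digits(n, b=10):
    # K_b(n) = floor(log_b n) + 1, computed with integer arithmetic.
    d = 0
    while n > 0:
        n //= b
        d += 1
    return d

def factorization_digits(n, b=10):
    # Digits of the prime factorization: K_b(p) for each prime p dividing n, plus
    # K_b(v_p(n)) for each exponent v_p(n) > 1 (exponents equal to 1 are not written).
    return sum(num_digits(p, b) + (num_digits(e, b) if e > 1 else 0)
               for p, e in factorint(n).items())

def classify(n, b=10):
    k, f = num_digits(n, b), factorization_digits(n, b)
    if k < f:
        return "extravagant"   # e.g. 4 = 2^2, 6 = 2*3, 8 = 2^3, 9 = 3^2
    if k > f:
        return "frugal"        # e.g. 125 = 5^3, 128 = 2^7, 1029 = 3*7^3
    return "equidigital"       # e.g. 2, 3, 5, 7, 10 = 2*5

print([n for n in range(2, 20) if classify(n) == "extravagant"])  # [4, 6, 8, 9, 12, 18]
print(classify(1029), classify(125), classify(10))  # frugal frugal equidigital
```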
1039777
Vis-viva equation
Equation to model the motion of orbiting bodies In astrodynamics, the "vis-viva" equation, also referred to as orbital-energy-invariance law or Burgas formula, is one of the equations that model the motion of orbiting bodies. It is the direct result of the principle of conservation of mechanical energy which applies when the only force acting on an object is its own weight which is the gravitational force determined by the product of the mass of the object and the strength of the surrounding gravitational field. "Vis viva" (Latin for "living force") is a term from the history of mechanics, and it survives in this sole context. It represents the principle that the difference between the total work of the accelerating forces of a system and that of the retarding forces is equal to one half the "vis viva" accumulated or lost in the system while the work is being done. Equation. For any Keplerian orbit (elliptic, parabolic, hyperbolic, or radial), the "vis-viva" equation is as follows: formula_0 where: The product of "GM" can also be expressed as the standard gravitational parameter using the Greek letter μ. Derivation for elliptic orbits (0 ≤ eccentricity &lt; 1). In the vis-viva equation the mass m of the orbiting body (e.g., a spacecraft) is taken to be negligible in comparison to the mass M of the central body (e.g., the Earth). The central body and orbiting body are also often referred to as the primary and a particle respectively. In the specific cases of an elliptical or circular orbit, the vis-viva equation may be readily derived from conservation of energy and momentum. Specific total energy is constant throughout the orbit. Thus, using the subscripts "a" and "p" to denote apoapsis (apogee) and periapsis (perigee), respectively, formula_1 Rearranging, formula_2 Recalling that for an elliptical orbit (and hence also a circular orbit) the velocity and radius vectors are perpendicular at apoapsis and periapsis, conservation of angular momentum requires specific angular momentum formula_3, thus formula_4: formula_5 formula_6 Isolating the kinetic energy at apoapsis and simplifying, formula_7 From the geometry of an ellipse, formula_8 where "a" is the length of the semimajor axis. Thus, formula_9 Substituting this into our original expression for specific orbital energy, formula_10 Thus, formula_11 and the vis-viva equation may be written formula_12 or formula_13 Therefore, the conserved angular momentum "L" = "mh" can be derived using formula_14 and formula_15, where a is semi-major axis and b is semi-minor axis of the elliptical orbit, as follows: formula_16 and alternately, formula_17 Therefore, specific angular momentum formula_18, and Total angular momentum formula_19 Practical applications. Given the total mass and the scalars r and v at a single point of the orbit, one can compute: The formula for escape velocity can be obtained from the Vis-viva equation by taking the limit as formula_21 approaches formula_22: formula_23 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "v^2 = GM \\left({ 2 \\over r} - {1 \\over a}\\right)" }, { "math_id": 1, "text": " \\varepsilon = \\frac{v_a^2}{2} - \\frac{GM}{r_a} = \\frac{v_p^2}{2} - \\frac{GM}{r_p} " }, { "math_id": 2, "text": " \\frac{v_a^2}{2} - \\frac{v_p^2}{2} = \\frac{GM}{r_a} - \\frac{GM}{r_p} " }, { "math_id": 3, "text": " h = r_pv_p = r_av_a = \\text{constant}" }, { "math_id": 4, "text": "v_p = \\frac{r_a}{r_p}v_a" }, { "math_id": 5, "text": " \\frac{1}{2} \\left( 1-\\frac{r_a^2}{r_p^2} \\right) v_a^2 = \\frac{GM}{r_a} - \\frac{GM}{r_p} " }, { "math_id": 6, "text": " \\frac{1}{2} \\left( \\frac{r_p^2 - r_a^2}{r_p^2} \\right) v_a^2 = \\frac{GM}{r_a} - \\frac{GM}{r_p} " }, { "math_id": 7, "text": "\\begin{align}\n \\frac{1}{2}v_a^2 &= \\left( \\frac{GM}{r_a} - \\frac{GM}{r_p}\\right) \\cdot \\frac{r_p^2}{r_p^2-r_a^2} \\\\\n \\frac{1}{2}v_a^2 &= GM \\left( \\frac{r_p - r_a}{r_ar_p} \\right) \\frac{r_p^2}{r_p^2-r_a^2} \\\\\n \\frac{1}{2}v_a^2 &= GM \\frac{r_p}{r_a(r_p+r_a)}\n\\end{align}" }, { "math_id": 8, "text": "2a=r_p+r_a" }, { "math_id": 9, "text": " \\frac{1}{2} v_a^2 = GM \\frac{2a-r_a}{r_a(2a)} = GM \\left( \\frac{1}{r_a} - \\frac{1}{2a} \\right) = \\frac{GM}{r_a} - \\frac{GM}{2a} " }, { "math_id": 10, "text": " \\varepsilon = \\frac{v^2}{2} - \\frac{GM}{r} = \\frac{v_p^2}{2} - \\frac{GM}{r_p} = \\frac{v_a^2}{2} - \\frac{GM}{r_a} = - \\frac{GM}{2a} " }, { "math_id": 11, "text": " \\varepsilon = - \\frac{GM}{2a} " }, { "math_id": 12, "text": " \\frac{v^2}{2} - \\frac{GM}{r} = -\\frac{GM}{2a} " }, { "math_id": 13, "text": " v^2 = GM \\left( \\frac{2}{r} - \\frac{1}{a} \\right) " }, { "math_id": 14, "text": "r_a + r_p = 2a" }, { "math_id": 15, "text": "r_a r_p = b^2" }, { "math_id": 16, "text": "v_a^2 = GM \\left( \\frac{2}{r_a} - \\frac{1}{a} \\right) = \\frac{GM}{a} \\left( \\frac{2a-r_a}{r_a} \\right) = \\frac{GM}{a} \\left( \\frac{r_p}{r_a} \\right) = \\frac{GM}{a} \\left( \\frac{b}{r_a} \\right)^2 " }, { "math_id": 17, "text": "v_p^2 = GM \\left( \\frac{2}{r_p} - \\frac{1}{a} \\right) = \\frac{GM}{a} \\left( \\frac{2a-r_p}{r_p} \\right) = \\frac{GM}{a} \\left( \\frac{r_a}{r_p} \\right) = \\frac{GM}{a} \\left( \\frac{b}{r_p} \\right)^2 " }, { "math_id": 18, "text": "h = r_p v_p = r_a v_a = b \\sqrt{\\frac{GM}{a}}" }, { "math_id": 19, "text": "L = mh = mb \\sqrt{\\frac{GM}{a}}" }, { "math_id": 20, "text": "\\varepsilon\\,\\!" }, { "math_id": 21, "text": "a" }, { "math_id": 22, "text": "\\infty" }, { "math_id": 23, "text": "v_e^2 = GM \\left(\\frac{2}{r}-0 \\right) \\rightarrow v_e = \\sqrt{\\frac{2GM}{r}}" } ]
https://en.wikipedia.org/wiki?curid=1039777
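A small numeric sketch of the equation above in Python; the Earth value of GM and the sample orbit radius are illustrative assumptions, not values taken from the article:

```python
from math import sqrt, isclose

MU_EARTH = 3.986004418e14  # assumed standard gravitational parameter GM of Earth, m^3/s^2

def vis_viva_speed(r, a, mu=MU_EARTH):
    # v^2 = GM * (2/r - 1/a): speed at distance r on an orbit with semi-major axis a.
    return sqrt(mu * (2.0 / r - 1.0 / a))

# Hypothetical near-circular low Earth orbit: r = a = 6771 km (about 400 km altitude).
r = a = 6.771e6
v = vis_viva_speed(r, a)
print(f"{v:.0f} m/s")  # roughly 7700 m/s

# Specific orbital energy check: v^2/2 - GM/r equals -GM/(2a), as derived above.
assert isclose(v**2 / 2 - MU_EARTH / r, -MU_EARTH / (2 * a), rel_tol=1e-9)

# Escape speed is the a -> infinity limit quoted above: v_e = sqrt(2 GM / r).
print(f"{sqrt(2 * MU_EARTH / r):.0f} m/s")
```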
1039889
Tangent half-angle formula
Relates the tangent of half of an angle to trigonometric functions of the entire angle In trigonometry, tangent half-angle formulas relate the tangent of half of an angle to trigonometric functions of the entire angle. Formulae. The tangent of half an angle is the stereographic projection of the circle through the point at angle formula_0 radians onto the line through the angles formula_1. Among these formulas are the following: formula_2 Identities. From these one can derive identities expressing the sine, cosine, and tangent as functions of tangents of half-angles: formula_3 Proofs. Algebraic proofs. Using double-angle formulae and the Pythagorean identity formula_4 gives formula_5 formula_6 Taking the quotient of the formulae for sine and cosine yields formula_7 Combining the Pythagorean identity with the double-angle formula for the cosine, formula_8 rearranging, and taking the square roots yields formula_9 and formula_10 which, upon division gives formula_11 Alternatively, formula_12 It turns out that the absolute value signs in these last two formulas may be dropped, regardless of which quadrant α is in. With or without the absolute value bars these formulas do not apply when both the numerator and denominator on the right-hand side are zero. Also, using the angle addition and subtraction formulae for both the sine and cosine one obtains: formula_13 Pairwise addition of the above four formulae yields: formula_14 Setting formula_15 and formula_16 and substituting yields: formula_17 Dividing the sum of sines by the sum of cosines one arrives at: formula_18 Geometric proofs. Applying the formulae derived above to the rhombus figure on the right, it is readily shown that formula_19 In the unit circle, application of the above shows that formula_20. By similarity of triangles, formula_21 It follows that formula_22 The tangent half-angle substitution in integral calculus. In various applications of trigonometry, it is useful to rewrite the trigonometric functions (such as sine and cosine) in terms of rational functions of a new variable formula_23. These identities are known collectively as the tangent half-angle formulae because of the definition of formula_23. These identities can be useful in calculus for converting rational functions in sine and cosine to functions of "t" in order to find their antiderivatives. Geometrically, the construction goes like this: for any point (cos "φ", sin "φ") on the unit circle, draw the line passing through it and the point (−1, 0). This point crosses the "y"-axis at some point "y" = "t". One can show using simple geometry that "t" = tan(φ/2). The equation for the drawn line is "y" = (1 + "x")"t". The equation for the intersection of the line and circle is then a quadratic equation involving "t". The two solutions to this equation are (−1, 0) and (cos "φ", sin "φ"). This allows us to write the latter as rational functions of "t" (solutions are given below). The parameter "t" represents the stereographic projection of the point (cos "φ", sin "φ") onto the "y"-axis with the center of projection at (−1, 0). Thus, the tangent half-angle formulae give conversions between the stereographic coordinate "t" on the unit circle and the standard angular coordinate "φ". Then we have formula_24 and formula_25 Both this expression of formula_26 and the expression formula_27 can be solved for formula_28. 
Equating these gives the arctangent in terms of the natural logarithm formula_29 In calculus, the tangent half-angle substitution is used to find antiderivatives of rational functions of sin "φ" and cos "φ". Differentiating formula_30 gives formula_31 and thus formula_32 Hyperbolic identities. One can play an entirely analogous game with the hyperbolic functions. A point on (the right branch of) a hyperbola is given by (cosh "ψ", sinh "ψ"). Projecting this onto "y"-axis from the center (−1, 0) gives the following: formula_33 with the identities formula_34 and formula_35 Finding "ψ" in terms of "t" leads to following relationship between the inverse hyperbolic tangent formula_36 and the natural logarithm: formula_37 The hyperbolic tangent half-angle substitution in calculus uses formula_38 The Gudermannian function. Comparing the hyperbolic identities to the circular ones, one notices that they involve the same functions of "t", just permuted. If we identify the parameter "t" in both cases we arrive at a relationship between the circular functions and the hyperbolic ones. That is, if formula_39 then formula_40 where gd("ψ") is the Gudermannian function. The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers. The above descriptions of the tangent half-angle formulae (projection the unit circle and standard hyperbola onto the "y"-axis) give a geometric interpretation of this function. Rational values and Pythagorean triples. Starting with a Pythagorean triangle with side lengths a, b, and c that are positive integers and satisfy "a"2 + "b"2 "c"2, it follows immediately that each interior angle of the triangle has rational values for sine and cosine, because these are just ratios of side lengths. Thus each of these angles has a rational value for its half-angle tangent, using tan "φ"/2 sin "φ" / (1 + cos "φ"). The reverse is also true. If there are two positive angles that sum to 90°, each with a rational half-angle tangent, and the third angle is a right angle then a triangle with these interior angles can be scaled to a Pythagorean triangle. If the third angle is not required to be a right angle, but is the angle that makes the three positive angles sum to 180° then the third angle will necessarily have a rational number for its half-angle tangent when the first two do (using angle addition and subtraction formulas for tangents) and the triangle can be scaled to a Heronian triangle. Generally, if K is a subfield of the complex numbers then tan "φ"/2 ∈ "K" ∪ {∞} implies that {sin "φ", cos "φ", tan "φ", sec "φ", csc "φ", cot "φ"} ⊆ "K" ∪ {∞}. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\pi" }, { "math_id": 1, "text": "\\pm \\frac{\\pi}{2}" }, { "math_id": 2, "text": "\n\\begin{align}\n\\tan \\tfrac12( \\eta \\pm \\theta)\n&= \\frac{\\tan \\tfrac12 \\eta \\pm \\tan \\tfrac12 \\theta}{1 \\mp \\tan \\tfrac12 \\eta \\, \\tan \\tfrac12 \\theta}\n= \\frac{\\sin\\eta \\pm \\sin\\theta}{\\cos\\eta + \\cos\\theta}\n= -\\frac{\\cos\\eta - \\cos\\theta}{\\sin\\eta \\mp \\sin\\theta}, \\\\[10pt]\n\n\\tan \\tfrac12 \\theta\n&= \\frac{\\sin\\theta}{1 + \\cos\\theta}\n= \\frac{\\tan\\theta}{\\sec\\theta + 1}\n= \\frac{1}{\\csc\\theta + \\cot\\theta},\n& & (\\eta = 0) \\\\[10pt]\n\n\\tan \\tfrac12 \\theta\n&= \\frac{1-\\cos\\theta}{\\sin\\theta}\n= \\frac{\\sec\\theta-1}{\\tan\\theta}\n= \\csc\\theta-\\cot\\theta,\n& & (\\eta = 0) \\\\[10pt]\n\n\\tan \\tfrac12 \\big(\\theta \\pm \\tfrac12\\pi \\big)\n&= \\frac{1 \\pm \\sin\\theta}{\\cos\\theta}\n= \\sec\\theta \\pm \\tan\\theta\n= \\frac{\\csc\\theta \\pm 1}{\\cot\\theta},\n& & \\big(\\eta = \\tfrac12\\pi \\big) \\\\[10pt]\n\n\\tan \\tfrac12 \\big(\\theta \\pm \\tfrac12\\pi \\big)\n&= \\frac{\\cos\\theta}{1 \\mp \\sin\\theta}\n= \\frac{1}{\\sec\\theta \\mp \\tan\\theta}\n= \\frac{\\cot\\theta}{\\csc\\theta \\mp 1},\n& & \\big(\\eta = \\tfrac12\\pi \\big) \\\\[10pt]\n\n\\frac{1 - \\tan \\tfrac12\\theta}{1 + \\tan \\tfrac12\\theta}\n&= \\pm\\sqrt{\\frac{1 - \\sin\\theta}{1 + \\sin\\theta}} \\\\[10pt]\n\n\\tan \\tfrac12 \\theta\n&= \\pm \\sqrt{\\frac{1 - \\cos\\theta}{1 + \\cos\\theta}} \\\\[10pt]\n\n\\end{align}\n" }, { "math_id": 3, "text": "\n\\begin{align}\n\\sin \\alpha & = \\frac{2\\tan \\tfrac12 \\alpha}{1 + \\tan ^2 \\tfrac12 \\alpha} \\\\[7pt]\n\\cos \\alpha & = \\frac{1 - \\tan ^2 \\tfrac12 \\alpha}{1 + \\tan ^2 \\tfrac12 \\alpha} \\\\[7pt]\n\\tan \\alpha & = \\frac{2\\tan \\tfrac12 \\alpha}{1 - \\tan ^2 \\tfrac12 \\alpha}\n\\end{align}\n" }, { "math_id": 4, "text": "1 + \\tan^2 \\alpha = 1 \\big/ \\cos^2 \\alpha" }, { "math_id": 5, "text": "\n\\sin \\alpha\n= 2\\sin \\tfrac12 \\alpha \\cos \\tfrac12 \\alpha\n= \\frac{ 2 \\sin \\tfrac12 \\alpha\\, \\cos \\tfrac12 \\alpha\n \\Big/ \\cos^2 \\tfrac12 \\alpha}\n {1 + \\tan^2 \\tfrac12 \\alpha}\n= \\frac{2\\tan \\tfrac12 \\alpha}{1 + \\tan^2 \\tfrac12 \\alpha},\n\\quad \\text{and}\n" }, { "math_id": 6, "text": "\n\\cos \\alpha\n= \\cos^2 \\tfrac12 \\alpha - \\sin^2 \\tfrac12 \\alpha\n= \\frac{ \\left(\\cos^2 \\tfrac12 \\alpha - \\sin^2 \\tfrac12 \\alpha\\right)\n \\Big/ \\cos^2 \\tfrac1 2 \\alpha}\n { 1 + \\tan^2 \\tfrac12 \\alpha}\n= \\frac{1 - \\tan^2 \\tfrac12 \\alpha}{1 + \\tan^2 \\tfrac12 \\alpha}.\n" }, { "math_id": 7, "text": "\\tan \\alpha = \\frac{2\\tan \\tfrac12 \\alpha}{1 - \\tan ^2 \\tfrac12 \\alpha}." }, { "math_id": 8, "text": " \\cos 2\\alpha = \\cos^2 \\alpha - \\sin^2 \\alpha = 1 - 2\\sin^2 \\alpha = 2\\cos^2 \\alpha - 1, " }, { "math_id": 9, "text": " \\left|\\sin \\alpha\\right| = \\sqrt {\\frac{1-\\cos2\\alpha}{2}} " }, { "math_id": 10, "text": " \\left|\\cos \\alpha\\right| = \\sqrt {\\frac{1+\\cos2\\alpha}{2}} " }, { "math_id": 11, "text": " \\left|\\tan \\alpha\\right| = \\frac {\\sqrt {1 - \\cos 2\\alpha}}{\\sqrt {1 + \\cos 2\\alpha}} = \\frac { {\\sqrt {1 - \\cos 2\\alpha}}{\\sqrt {1 + \\cos 2\\alpha}} }{1 + \\cos 2\\alpha} =\\frac{{\\sqrt {1 - \\cos^2 2\\alpha}}}{1 + \\cos 2\\alpha} = \\frac{\\left|\\sin 2\\alpha\\right|}{1 + \\cos 2\\alpha}. 
" }, { "math_id": 12, "text": " \\left|\\tan \\alpha\\right| = \\frac {\\sqrt {1 - \\cos 2\\alpha}}{\\sqrt {1 + \\cos 2\\alpha}} = \\frac {1 - \\cos 2\\alpha}{ {\\sqrt {1 + \\cos 2\\alpha}}{\\sqrt {1 - \\cos 2\\alpha}} } = \\frac{1 - \\cos 2\\alpha}{{\\sqrt {1 - \\cos^2 2\\alpha}}} = \\frac{1 - \\cos 2\\alpha}{\\left|\\sin 2\\alpha\\right|}. " }, { "math_id": 13, "text": "\\begin{align}\n \\cos (a+b) &= \\cos a \\cos b - \\sin a \\sin b \\\\\n \\cos (a-b) &= \\cos a \\cos b + \\sin a \\sin b \\\\\n \\sin (a+b) &= \\sin a \\cos b + \\cos a \\sin b \\\\\n \\sin (a-b) &= \\sin a \\cos b - \\cos a \\sin b\n\\end{align}" }, { "math_id": 14, "text": "\n\\begin{align}\n&\\sin (a+b) + \\sin (a-b) \\\\[5mu]\n&\\quad= \\sin a \\cos b + \\cos a \\sin b + \\sin a \\cos b - \\cos a \\sin b \\\\[5mu]\n&\\quad = 2 \\sin a \\cos b \\\\[15mu]\n\n&\\cos (a+b) + \\cos (a-b) \\\\[5mu]\n&\\quad= \\cos a \\cos b - \\sin a \\sin b + \\cos a \\cos b + \\sin a \\sin b \\\\[5mu]\n&\\quad= 2 \\cos a \\cos b\n\\end{align}\n" }, { "math_id": 15, "text": "a= \\tfrac12 (p+q)" }, { "math_id": 16, "text": "b= \\tfrac12 (p-q)" }, { "math_id": 17, "text": "\n\\begin{align}\n& \\sin p + \\sin q \\\\[5mu]\n&\\quad= \\sin \\left(\\tfrac12 (p+q) + \\tfrac12 (p-q)\\right) + \\sin\\left(\\tfrac12(p+q) - \\tfrac12 (p-q)\\right) \\\\[5mu]\n&\\quad= 2 \\sin \\tfrac12(p+q) \\, \\cos \\tfrac12(p-q) \\\\[15mu]\n& \\cos p + \\cos q \\\\[5mu]\n&\\quad= \\cos\\left(\\tfrac12(p+q) + \\tfrac12 (p-q)\\right) + \\cos\\left(\\tfrac12(p+q) - \\tfrac12(p-q)\\right) \\\\[5mu]\n&\\quad= 2 \\cos\\tfrac12(p+q) \\, \\cos\\tfrac12(p-q)\n\\end{align}\n" }, { "math_id": 18, "text": "\n\\frac{\\sin p + \\sin q}{\\cos p + \\cos q}\n= \\frac{2 \\sin \\tfrac12(p+q) \\, \\cos \\tfrac12(p-q)}{2 \\cos \\tfrac12(p+q) \\, \\cos \\tfrac12(p-q)} = \\tan \\tfrac12(p+q) " }, { "math_id": 19, "text": "\\tan \\tfrac12 (a+b) = \\frac{\\sin \\tfrac12 (a + b)}{\\cos \\tfrac12 (a + b)} = \\frac{\\sin a + \\sin b}{\\cos a + \\cos b}." }, { "math_id": 20, "text": "t = \\tan \\tfrac12 \\varphi" }, { "math_id": 21, "text": "\\frac{t}{\\sin \\varphi} = \\frac{1}{1+ \\cos \\varphi}." }, { "math_id": 22, "text": "t = \\frac{\\sin \\varphi}{1+ \\cos \\varphi} = \\frac{\\sin \\varphi(1- \\cos \\varphi)}{(1+ \\cos \\varphi)(1- \\cos \\varphi)} = \\frac{1- \\cos \\varphi}{\\sin \\varphi}." }, { "math_id": 23, "text": "t" }, { "math_id": 24, "text": "\n\\begin{align}\n& \\sin\\varphi = \\frac{2t}{1 + t^2},\n& & \\cos\\varphi = \\frac{1 - t^2}{1 + t^2}, \\\\[8pt]\n& \\tan\\varphi = \\frac{2t}{1 - t^2}\n& & \\cot\\varphi = \\frac{1 - t^2}{2t}, \\\\[8pt]\n& \\sec\\varphi = \\frac{1 + t^2}{1 - t^2},\n& & \\csc\\varphi = \\frac{1 + t^2}{2t},\n\\end{align}\n" }, { "math_id": 25, "text": "e^{i \\varphi} = \\frac{1 + i t}{1 - i t}, \\qquad\ne^{-i \\varphi} = \\frac{1 - i t}{1 + i t}.\n" }, { "math_id": 26, "text": "e^{i\\varphi}" }, { "math_id": 27, "text": "t = \\tan(\\varphi/2)" }, { "math_id": 28, "text": "\\varphi" }, { "math_id": 29, "text": "\\arctan t = \\frac{-i}{2} \\ln\\frac{1+it}{1-it}." }, { "math_id": 30, "text": "t=\\tan\\tfrac12\\varphi" }, { "math_id": 31, "text": "\\frac{dt}{d\\varphi} = \\tfrac12\\sec^2 \\tfrac12\\varphi = \\tfrac12(1+\\tan^2 \\tfrac12\\varphi) = \\tfrac12(1+t^2)" }, { "math_id": 32, "text": "d\\varphi = {{2\\,dt} \\over {1 + t^2}}." 
}, { "math_id": 33, "text": "t = \\tanh\\tfrac12\\psi = \\frac{\\sinh\\psi}{\\cosh\\psi+1} = \\frac{\\cosh\\psi-1}{\\sinh\\psi}" }, { "math_id": 34, "text": "\n\\begin{align}\n& \\sinh\\psi = \\frac{2t}{1 - t^2},\n& & \\cosh\\psi = \\frac{1 + t^2}{1 - t^2}, \\\\[8pt]\n& \\tanh\\psi = \\frac{2t}{1 + t^2},\n& & \\coth\\psi = \\frac{1 + t^2}{2t}, \\\\[8pt]\n& \\operatorname{sech}\\,\\psi = \\frac{1 - t^2}{1 + t^2},\n& & \\operatorname{csch}\\,\\psi = \\frac{1 - t^2}{2t},\n\\end{align}\n" }, { "math_id": 35, "text": "e^\\psi = \\frac{1 + t}{1 - t}, \\qquad\ne^{-\\psi} = \\frac{1 - t}{1 + t}." }, { "math_id": 36, "text": "\\operatorname{artanh}" }, { "math_id": 37, "text": "2 \\operatorname{artanh} t = \\ln\\frac{1+t}{1-t}." }, { "math_id": 38, "text": "d\\psi = {{2\\,dt} \\over {1 - t^2}}\\,." }, { "math_id": 39, "text": "t = \\tan\\tfrac12 \\varphi = \\tanh\\tfrac12 \\psi" }, { "math_id": 40, "text": "\\varphi = 2\\arctan \\bigl(\\tanh \\tfrac12 \\psi\\,\\bigr) \\equiv \\operatorname{gd} \\psi." } ]
https://en.wikipedia.org/wiki?curid=1039889
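A quick numerical spot-check, in Python, of the half-angle parametrization quoted above, with t = tan(φ/2); the sample angles are arbitrary choices away from the singular points:

```python
from math import tan, sin, cos, isclose

# With t = tan(phi/2): sin(phi) = 2t/(1+t^2), cos(phi) = (1-t^2)/(1+t^2),
# tan(phi) = 2t/(1-t^2), as listed in the identities above.
for phi in (0.3, 1.0, 2.0, -1.3):
    t = tan(phi / 2)
    assert isclose(sin(phi), 2 * t / (1 + t * t), rel_tol=1e-12)
    assert isclose(cos(phi), (1 - t * t) / (1 + t * t), rel_tol=1e-12)
    assert isclose(tan(phi), 2 * t / (1 - t * t), rel_tol=1e-12)
print("half-angle identities hold at the sampled angles")
```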
10399346
Lagrange multipliers on Banach spaces
In the field of calculus of variations in mathematics, the method of Lagrange multipliers on Banach spaces can be used to solve certain infinite-dimensional constrained optimization problems. The method is a generalization of the classical method of Lagrange multipliers as used to find extrema of a function of finitely many variables. The Lagrange multiplier theorem for Banach spaces. Let "X" and "Y" be real Banach spaces. Let "U" be an open subset of "X" and let "f" : "U" → R be a continuously differentiable function. Let "g" : "U" → "Y" be another continuously differentiable function, the "constraint": the objective is to find the extremal points (maxima or minima) of "f" subject to the constraint that "g" is zero. Suppose that "u"0 is a "constrained extremum" of "f", i.e. an extremum of "f" on formula_0 Suppose also that the Fréchet derivative D"g"("u"0) : "X" → "Y" of "g" at "u"0 is a surjective linear map. Then there exists a Lagrange multiplier "λ" : "Y" → R in "Y"∗, the dual space to "Y", such that formula_1 Since D"f"("u"0) is an element of the dual space "X"∗, equation (L) can also be written as formula_2 where (D"g"("u"0))∗("λ") is the pullback of "λ" by D"g"("u"0), i.e. the action of the adjoint map (D"g"("u"0))∗ on "λ", as defined by formula_3 Connection to the finite-dimensional case. In the case that "X" and "Y" are both finite-dimensional (i.e. linearly isomorphic to R"m" and R"n" for some natural numbers "m" and "n") then writing out equation (L) in matrix form shows that "λ" is the usual Lagrange multiplier vector; in the case "n" = 1, "λ" is the usual Lagrange multiplier, a real number. Application. In many optimization problems, one seeks to minimize a functional defined on an infinite-dimensional space such as a Banach space. Consider, for example, the Sobolev space formula_4 and the functional formula_5 given by formula_6 Without any constraint, the minimum value of "f" would be 0, attained by "u"0("x") = 0 for all "x" between −1 and +1. One could also consider the constrained optimization problem, to minimize "f" among all those "u" ∈ "X" such that the mean value of "u" is +1. In terms of the above theorem, the constraint "g" would be given by formula_7 However this problem can be solved as in the finite dimensional case since the Lagrange multiplier formula_8 is only a scalar. References. "This article incorporates material from Lagrange multipliers on Banach spaces on PlanetMath, which is licensed under the ."
[ { "math_id": 0, "text": "g^{-1} (0) = \\{ x \\in U \\mid g(x) = 0 \\in Y \\} \\subseteq U." }, { "math_id": 1, "text": "\\mathrm{D} f (u_{0}) = \\lambda \\circ \\mathrm{D} g (u_{0}). \\quad \\mbox{(L)}" }, { "math_id": 2, "text": "\\mathrm{D} f (u_{0}) = \\left( \\mathrm{D} g (u_{0}) \\right)^{*} (\\lambda)," }, { "math_id": 3, "text": "\\left( \\mathrm{D} g (u_{0}) \\right)^{*} (\\lambda) = \\lambda \\circ \\mathrm{D} g (u_{0})." }, { "math_id": 4, "text": " X = H_0^1([-1,+1];\\mathbb{R})" }, { "math_id": 5, "text": "f : X \\rightarrow \\mathbb{R}" }, { "math_id": 6, "text": "f(u) = \\int_{-1}^{+1} u'(x)^{2} \\, \\mathrm{d} x." }, { "math_id": 7, "text": "g(u) = \\frac{1}{2} \\int_{-1}^{+1} u(x) \\, \\mathrm{d} x - 1." }, { "math_id": 8, "text": " \\lambda " } ]
https://en.wikipedia.org/wiki?curid=10399346
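Carrying the application above one step further (a standard calculus-of-variations computation, not quoted from the article), the multiplier condition and the constraint determine the minimizer explicitly:

```latex
% Derivatives of f and g at u, acting on a test function h \in H_0^1([-1,+1]):
\[
  Df(u)\,h = 2\int_{-1}^{+1} u'(x)\,h'(x)\,dx ,
  \qquad
  Dg(u)\,h = \tfrac{1}{2}\int_{-1}^{+1} h(x)\,dx .
\]
% The condition Df(u_0) = \lambda \circ Dg(u_0), integrated by parts against test
% functions vanishing at x = -1 and x = +1, forces u_0'' to be constant:
\[
  -2\,u_0''(x) = \tfrac{\lambda}{2}, \quad u_0(\pm 1) = 0
  \quad\Longrightarrow\quad
  u_0(x) = \tfrac{\lambda}{8}\bigl(1 - x^{2}\bigr).
\]
% The constraint \tfrac{1}{2}\int_{-1}^{+1} u_0\,dx = 1 then gives \lambda = 12, so
\[
  u_0(x) = \tfrac{3}{2}\bigl(1 - x^{2}\bigr),
  \qquad
  f(u_0) = \int_{-1}^{+1} \bigl(u_0'(x)\bigr)^{2}\,dx = \int_{-1}^{+1} 9x^{2}\,dx = 6 .
\]
```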
1039962
Return period
Estimated recurrence time of an event A return period, also known as a recurrence interval or repeat interval, is an average time or an estimated average time between events such as earthquakes, floods, landslides, or river discharge flows to occur. It is a statistical measurement typically based on historic data over an extended period, and is used usually for risk analysis. Examples include deciding whether a project should be allowed to go forward in a zone of a certain risk or designing structures to withstand events with a certain return period. The following analysis assumes that the probability of the event occurring does not vary over time and is independent of past events. Estimating a return period. Recurrence interval formula_0 "n" number of years on record; "m" is the rank of observed occurrences when arranged in descending order For floods, the event may be measured in terms of m3/s or height; for storm surges, in terms of the height of the surge, and similarly for other events. This is Weibull's Formula. Return period as the reciprocal of expected frequency. The theoretical return period between occurrences is the inverse of the average frequency of occurrence. For example, a 10-year flood has a 1/10 = 0.1 or 10% chance of being exceeded in any one year and a 50-year flood has a 0.02 or 2% chance of being exceeded in any one year. This does not mean that a 100-year flood will happen regularly every 100 years, or only once in 100 years. Despite the connotations of the name "return period". In any "given" 100-year period, a 100-year event may occur once, twice, more, or not at all, and each outcome has a probability that can be computed as below. Also, the estimated return period below is a statistic: it is computed from a set of data (the observations), as distinct from the theoretical value in an idealized distribution. One does not actually know that a certain or greater magnitude happens with 1% probability, only that it has been observed exactly once in 100 years. That distinction is significant because there are few observations of rare events: for instance, if observations go back 400 years, the most extreme event (a 400-year event by the statistical definition) may later be classed, on longer observation, as a 200-year event (if a comparable event immediately occurs) or a 500-year event (if no comparable event occurs for a further 100 years). Further, one cannot determine the size of a 1000-year event based on such records alone but instead must use a statistical model to predict the magnitude of such an (unobserved) event. Even if the historic return interval is a lot less than 1000 years, if there are a number of less-severe events of a similar nature recorded, the use of such a model is likely to provide useful information to help estimate the future return interval. Probability distributions. One would like to be able to interpret the return period in probabilistic models. The most logical interpretation for this is to take the return period as the counting rate in a Poisson distribution since it is the expectation value of the rate of occurrences. An alternative interpretation is to take it as the probability for a yearly Bernoulli trial in the binomial distribution. That is disfavoured because each year does not represent an independent Bernoulli trial but is an arbitrary measure of time. This question is mainly academic as the results obtained will be similar under both the Poisson and binomial interpretations. Poisson. 
The probability mass function of the Poisson distribution is formula_1 where formula_2 is the number of occurrences the probability is calculated for, formula_3 the time period of interest, formula_4 is the return period and formula_5 is the counting rate. The probability of no-occurrence can be obtained simply considering the case for formula_6. The formula is formula_7 Consequently, the probability of exceedance (i.e. the probability of an event "stronger" than the event with return period formula_4 to occur at least once within the time period of interest) is formula_8 Note that for any event with return period formula_4, the probability of exceedance within an interval equal to the return period (i.e. formula_9) is independent from the return period and it is equal to formula_10. This means, for example, that there is a 63.2% probability of a flood larger than the 50-year return flood to occur within any period of 50 year. Example. If the return period of occurrence formula_11 is 243 years (formula_12) then the probability of exactly one occurrence in ten years is formula_13 Binomial. In a given period of formula_14 for a unit time formula_15 (e.g. formula_16), the probability of a given number "r" of events of a return period formula_17 is given by the binomial distribution as follows. formula_18 This is valid only if the probability of more than one occurrence per unit time formula_15 is zero. Often that is a close approximation, in which case the probabilities yielded by this formula hold approximately. If formula_19 in such a way that formula_20 then formula_21 Take formula_22 where "T" is return interval "n" is number of years on record. "m" is the number of recorded occurrences of the event being considered Example. Given that the return period of an event is 100 years, formula_23 So the probability that such an event occurs "exactly once" in 10 successive years is: formula_24 Risk analysis. Return period is useful for risk analysis (such as natural, inherent, or hydrologic risk of failure). When dealing with structure design expectations, the return period is useful in calculating the riskiness of the structure. The probability of "at least one" event that exceeds design limits during the expected life of the structure is the complement of the probability that "no" events occur which exceed design limits. The equation for assessing this parameter is formula_25 where formula_26 is the expression for the probability of the occurrence of the event in question in a year; "n" is the expected life of the structure. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " = {n+1\\over m}" }, { "math_id": 1, "text": " P(r;t)={(\\mu t)^r \\over r!} e^{-\\mu t} = {(t/T)^r \\over r!} e^{-t/T}" }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "t" }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "\\mu = 1/T" }, { "math_id": 6, "text": "r=0" }, { "math_id": 7, "text": " P(r=0;t)= e^{-\\mu t} = e^{-t/T}" }, { "math_id": 8, "text": " P(t>0;t)= 1 - P(t=0;t) = 1 - e^{-\\mu t} = 1 - e^{-t/T}" }, { "math_id": 9, "text": "t = T" }, { "math_id": 10, "text": "1-\\exp(-1) \\approx 63.2\\%" }, { "math_id": 11, "text": "T" }, { "math_id": 12, "text": "\\mu = 0.0041" }, { "math_id": 13, "text": "\n\\begin{align}\nP(r;t) & = \\frac{(\\mu t)^r}{r!} e^{-\\mu t}\\\\[6pt]\nP(r=1;t=10) & = \\frac{(10/243)^1}{1!} e^{-10/243} \\approx 3.95\\%\n\\end{align}\n" }, { "math_id": 14, "text": "n\\times \\tau" }, { "math_id": 15, "text": "\\tau" }, { "math_id": 16, "text": "\\tau = 1 \\text{year}" }, { "math_id": 17, "text": "\\mu" }, { "math_id": 18, "text": " P(X = r) = {n\\choose r}\\mu^r(1-\\mu)^{n-r}." }, { "math_id": 19, "text": "n \\rightarrow \\infty, \\mu \\rightarrow 0" }, { "math_id": 20, "text": "n \\mu \\rightarrow \\lambda" }, { "math_id": 21, "text": "\\frac{n!}{(n-r)!r!} \\mu^r (1-\\mu)^{n-r} \\rightarrow e^{-\\lambda}\\frac{\\lambda^r}{r!}." }, { "math_id": 22, "text": " \\mu = \\frac 1 T = {m\\over n+1}" }, { "math_id": 23, "text": "p={1\\over 100}=0.01." }, { "math_id": 24, "text": "\n\\begin{align}\nP(X = 1) & =\\binom{10}{1} \\times 0.01^1 \\times 0.99^9 \\\\[4pt]\n& \\approx 10 \\times 0.01 \\times 0.914 \\\\[4pt]\n& \\approx 0.0914\n\\end{align}\n" }, { "math_id": 25, "text": "\\overline R = 1 - \\left(1 - {1\\over T}\\right)^n=1-(1-P(X\\ge x_T))^n" }, { "math_id": 26, "text": "{1\\over T}=P(X\\ge x_T)" } ]
https://en.wikipedia.org/wiki?curid=1039962
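A short Python sketch reproducing the worked probabilities above; the 30-year design life in the last line is an illustrative assumption:

```python
from math import exp, comb, factorial

def prob_exceedance_poisson(T, t):
    # P(at least one exceedance of the T-year event within t years) = 1 - exp(-t/T)
    return 1.0 - exp(-t / T)

def prob_exactly_r_poisson(T, t, r):
    # Poisson probability of exactly r occurrences in t years, with rate 1/T per year
    mu_t = t / T
    return mu_t**r / factorial(r) * exp(-mu_t)

def prob_exactly_r_binomial(T, n, r):
    # Binomial probability of exactly r occurrences in n yearly trials, p = 1/T
    p = 1.0 / T
    return comb(n, r) * p**r * (1 - p)**(n - r)

# Reproduce the figures quoted above.
print(round(prob_exceedance_poisson(T=50, t=50), 3))        # 0.632
print(round(prob_exactly_r_poisson(T=243, t=10, r=1), 4))   # 0.0395
print(round(prob_exactly_r_binomial(T=100, n=10, r=1), 4))  # 0.0914

# Risk-analysis formula from the text: 1 - (1 - 1/T)^n, here for a hypothetical
# 100-year event over a 30-year structure life.
print(round(1 - (1 - 1 / 100)**30, 3))  # about 0.26
```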
1040475
Ehrenfeucht–Fraïssé game
In the mathematical discipline of model theory, the Ehrenfeucht–Fraïssé game (also called back-and-forth games) is a technique based on game semantics for determining whether two structures are elementarily equivalent. The main application of Ehrenfeucht–Fraïssé games is in proving the inexpressibility of certain properties in first-order logic. Indeed, Ehrenfeucht–Fraïssé games provide a complete methodology for proving inexpressibility results for first-order logic. In this role, these games are of particular importance in finite model theory and its applications in computer science (specifically computer aided verification and database theory), since Ehrenfeucht–Fraïssé games are one of the few techniques from model theory that remain valid in the context of finite models. Other widely used techniques for proving inexpressibility results, such as the compactness theorem, do not work in finite models. Ehrenfeucht–Fraïssé-like games can also be defined for other logics, such as fixpoint logics and pebble games for finite variable logics; extensions are powerful enough to characterise definability in existential second-order logic. Main idea. The main idea behind the game is that we have two structures, and two players – "Spoiler" and "Duplicator". Duplicator wants to show that the two structures are elementarily equivalent (satisfy the same first-order sentences), whereas Spoiler wants to show that they are different. The game is played in rounds. A round proceeds as follows: Spoiler chooses any element from one of the structures, and Duplicator chooses an element from the other structure. In simplified terms, the Duplicator's task is to always pick an element "similar" to the one that the Spoiler has chosen, whereas the Spoiler's task is to choose an element for which no "similar" element exists in the other structure. Duplicator wins if there exists an isomorphism between the eventual substructures chosen from the two different structures; otherwise, Spoiler wins. The game lasts for a fixed number of steps formula_0 (which is an ordinal – usually a finite number or formula_1). Definition. Suppose that we are given two structures formula_2 and formula_3, each with no function symbols and the same set of relation symbols, and a fixed natural number "n". We can then define the Ehrenfeucht–Fraïssé game formula_4 to be a game between two players, Spoiler and Duplicator, played as follows: For each "n" we define a relation formula_16 if Duplicator wins the "n"-move game formula_4. These are all equivalence relations on the class of structures with the given relation symbols. The intersection of all these relations is again an equivalence relation formula_17. Equivalence and inexpressibility. It is easy to prove that if Duplicator wins this game for all finite "n", that is, formula_17, then formula_2 and formula_3 are elementarily equivalent. If the set of relation symbols being considered is finite, the converse is also true. If a property formula_18 is true of formula_2 but not true of formula_3, but formula_2 and formula_3 can be shown equivalent by providing a winning strategy for Duplicator, then this shows that formula_18 is inexpressible in the logic captured by this game. History. The back-and-forth method used in the Ehrenfeucht–Fraïssé game to verify elementary equivalence was given by Roland Fraïssé in his thesis; it was formulated as a game by Andrzej Ehrenfeucht. The names Spoiler and Duplicator are due to Joel Spencer. 
Other usual names are Eloise [sic] and Abelard (and often denoted by formula_19 and formula_20) after Heloise and Abelard, a naming scheme introduced by Wilfrid Hodges in his book "Model Theory", or alternatively Eve and Adam. Further reading. Chapter 1 of Poizat's model theory text contains an introduction to the Ehrenfeucht–Fraïssé game, and so do Chapters 6, 7, and 13 of Rosenstein's book on linear orders. A simple example of the Ehrenfeucht–Fraïssé game is given in one of Ivars Peterson's MathTrek columns. Phokion Kolaitis' slides and Neil Immerman's book chapter on Ehrenfeucht–Fraïssé games discuss applications in computer science, the methodology for proving inexpressibility results, and several simple inexpressibility proofs using this methodology. Ehrenfeucht–Fraïssé games are the basis for the operation of derivative on modeloids. Modeloids are certain equivalence relations and the derivative provides for a generalization of standard model theory.
[ { "math_id": 0, "text": "\\gamma" }, { "math_id": 1, "text": "\\omega" }, { "math_id": 2, "text": "\\mathfrak{A}" }, { "math_id": 3, "text": "\\mathfrak{B}" }, { "math_id": 4, "text": "G_n(\\mathfrak{A},\\mathfrak{B})" }, { "math_id": 5, "text": "a_1" }, { "math_id": 6, "text": "b_1" }, { "math_id": 7, "text": "a_2" }, { "math_id": 8, "text": "b_2" }, { "math_id": 9, "text": "n-2" }, { "math_id": 10, "text": "a_1, \\dots, a_n" }, { "math_id": 11, "text": "b_1, \\dots, b_n" }, { "math_id": 12, "text": "\\{1, \\dots,n\\}" }, { "math_id": 13, "text": "i" }, { "math_id": 14, "text": "a_i" }, { "math_id": 15, "text": "b_i" }, { "math_id": 16, "text": "\\mathfrak{A} \\ \\overset{n}{\\sim}\\ \\mathfrak{B}" }, { "math_id": 17, "text": "\\mathfrak{A} \\sim \\mathfrak{B}" }, { "math_id": 18, "text": "Q" }, { "math_id": 19, "text": "\\exists" }, { "math_id": 20, "text": "\\forall" } ]
https://en.wikipedia.org/wiki?curid=1040475
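A brute-force Python sketch of the game just described, for two finite linear orders whose only relation is <; the "2^n - 1" threshold mentioned in the comments is the standard result for distinguishing finite linear orders and is not stated in the article:

```python
from itertools import product

# Duplicator must keep the chosen map a_i -> b_i a partial isomorphism:
# order-preserving and injective in both directions.
def partial_iso(avec, bvec):
    return all((a1 < a2) == (b1 < b2) and (a1 == a2) == (b1 == b2)
               for (a1, b1), (a2, b2) in product(zip(avec, bvec), repeat=2))

def duplicator_wins(A, B, n, avec=(), bvec=()):
    if not partial_iso(avec, bvec):
        return False
    if n == 0:
        return True
    # Spoiler may move in either structure; Duplicator needs a good reply in the other.
    for x in A:
        if not any(duplicator_wins(A, B, n - 1, avec + (x,), bvec + (y,)) for y in B):
            return False
    for y in B:
        if not any(duplicator_wins(A, B, n - 1, avec + (x,), bvec + (y,)) for x in A):
            return False
    return True

# Linear orders of sizes 3 and 4: Duplicator survives 2 rounds but not 3,
# consistent with the 2^n - 1 threshold for finite linear orders.
print(duplicator_wins(range(3), range(4), 2))  # True
print(duplicator_wins(range(3), range(4), 3))  # False
```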
10405346
Superegg
Special type of superellipsoid In geometry, a superegg is a solid of revolution obtained by rotating an elongated superellipse with exponent greater than 2 around its longest axis. It is a special case of superellipsoid. Unlike an elongated ellipsoid, an elongated superegg can stand upright on a flat surface, or on top of another superegg. This is due to its curvature being zero at the tips. The shape was popularized by Danish poet and scientist Piet Hein (1905–1996). Supereggs of various materials, including brass, were sold as novelties or "executive toys" in the 1960s. Mathematical description. The superegg is a superellipsoid whose horizontal cross-sections are circles. It is defined by the inequality formula_0 where "R" is the horizontal radius at the "equator" (the widest part as defined by the circles), and "h" is one half of the height. The exponent "p" determines the degree of flattening at the tips and equator. Hein's choice was "p" = 2.5 (the same one he used for the Sergels Torg roundabout), and "R"/"h" = 6/5. The definition can be changed to have an equality rather than an inequality; this changes the superegg to being a surface of revolution rather than a solid. Volume. The volume of a superegg can be derived via squigonometry, a generalization of trigonometry to squircles. It is related to the gamma function: formula_1 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\left|\\frac{\\sqrt{x^2 + y^2}}{R}\\right|^p + \\left|\\frac{z}{h}\\right|^p \\leq 1 \\, ," }, { "math_id": 1, "text": "V = \\frac{4\\pi hR^2}{3p}\\frac{\\Gamma(1/p) \\Gamma(2/p)}{\\Gamma(3/p)} \\, ." } ]
https://en.wikipedia.org/wiki?curid=10405346
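A small Python sketch of the volume formula above, with a sanity check at p = 2, where the solid reduces to an ellipsoid of revolution (a standard limiting case, not stated in the article):

```python
from math import pi, gamma, isclose

def superegg_volume(R, h, p):
    # V = (4*pi*h*R^2 / (3*p)) * Gamma(1/p) * Gamma(2/p) / Gamma(3/p), as quoted above.
    return 4 * pi * h * R**2 / (3 * p) * gamma(1 / p) * gamma(2 / p) / gamma(3 / p)

# Sanity check: at p = 2 the superegg is an ellipsoid of revolution, V = (4/3) pi R^2 h.
assert isclose(superegg_volume(1.0, 1.0, 2.0), 4 * pi / 3, rel_tol=1e-9)

# Hein's choice quoted above: p = 2.5 and R/h = 6/5 (take h = 1, so R = 1.2).
print(superegg_volume(1.2, 1.0, 2.5))
```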
1040597
Euclidean minimum spanning tree
Shortest network connecting points A Euclidean minimum spanning tree of a finite set of points in the Euclidean plane or higher-dimensional Euclidean space connects the points by a system of line segments with the points as endpoints, minimizing the total length of the segments. In it, any two points can reach each other along a path through the line segments. It can be found as the minimum spanning tree of a complete graph with the points as vertices and the Euclidean distances between points as edge weights. The edges of the minimum spanning tree meet at angles of at least 60°, at most six to a vertex. In higher dimensions, the number of edges per vertex is bounded by the kissing number of tangent unit spheres. The total length of the edges, for points in a unit square, is at most proportional to the square root of the number of points. Each edge lies in an empty region of the plane, and these regions can be used to prove that the Euclidean minimum spanning tree is a subgraph of other geometric graphs including the relative neighborhood graph and Delaunay triangulation. By constructing the Delaunay triangulation and then applying a graph minimum spanning tree algorithm, the minimum spanning tree of formula_0 given planar points may be found in time formula_1, as expressed in big O notation. This is optimal in some models of computation, although faster randomized algorithms exist for points with integer coordinates. For points in higher dimensions, finding an optimal algorithm remains an open problem. Definition and related problems. A Euclidean minimum spanning tree, for a set of formula_0 points in the Euclidean plane or Euclidean space, is a system of line segments, having only the given points as their endpoints, whose union includes all of the points in a connected set, and which has the minimum possible total length of any such system. Such a network cannot contain a polygonal ring of segments; if one existed, the network could be shortened by removing an edge of the polygon. Therefore, the minimum-length network forms a tree. This observation leads to the equivalent definition that a Euclidean minimum spanning tree is a tree of line segments between pairs of the given points, of minimum total length. The same tree may also be described as a minimum spanning tree of a weighted complete graph, having the given points as its vertices and the distances between points as edge weights. The same points may have more than one minimum spanning tree. For instance, for the vertices of a regular polygon, removing any edge of the polygon produces a minimum spanning tree. Publications on the Euclidean minimum spanning tree commonly abbreviate it as "EMST". They may also be called "geometric minimum spanning trees", but that term may be used more generally for geometric spaces with non-Euclidean distances, such as "L""p" spaces. When the context of Euclidean point sets is clear, they may be called simply "minimum spanning trees". Several other standard geometric networks are closely related to the Euclidean minimum spanning tree: Properties. Angles and vertex degrees. Whenever two edges of a Euclidean minimum spanning tree meet at a vertex, they must form an angle of 60° or more, with equality only when they form two sides of an equilateral triangle. This is because, for two edges forming any sharper angle, one of the two edges could be replaced by the third, shorter edge of the triangle they form, forming a tree with smaller total length. 
In comparison, the Steiner tree problem has a stronger angle bound: an optimal Steiner tree has all angles at least 120°. The same 60° angle bound also occurs in the kissing number problem, of finding the maximum number of unit spheres in Euclidean space that can be tangent to a central unit sphere without any two spheres intersecting (beyond a point of tangency). The center points of these spheres have a minimum spanning tree in the form of a star, with the central point adjacent to all other points. Conversely, for any vertex formula_2 of any minimum spanning tree, one can construct non-overlapping unit spheres centered at formula_2 and at points two units along each of its edges, with a tangency for each neighbor of formula_2. Therefore, in formula_0-dimensional space the maximum possible degree of a vertex (the number of spanning tree edges connected to it) equals the kissing number of spheres in formula_0 dimensions. Planar minimum spanning trees have degree at most six, and when a tree has degree six there is always another minimum spanning tree with maximum degree five. Three-dimensional minimum spanning trees have degree at most twelve. The only higher dimensions in which the exact value of the kissing number is known are four, eight, and 24 dimensions. For points generated at random from a given continuous distribution, the minimum spanning tree is almost surely unique. The numbers of vertices of any given degree converge, for large number of vertices, to a constant times that number of vertices. The values of these constants depend on the degree and the distribution. However, even for simple cases—such as the number of leaves for points uniformly distributed in a unit square—their precise values are not known. Empty regions. For any edge formula_3 of any Euclidean minimum spanning tree, the lens (or vesica piscis) formed by intersecting the two circles with formula_3 as their radii cannot have any other given vertex formula_4 in its interior. Put another way, if any tree has an edge formula_3 whose lens contains a third point formula_4, then it is not of minimum length. For, by the geometry of the two circles, formula_4 would be closer to both formula_5 and formula_2 than they are to each other. If edge formula_3 were removed from the tree, formula_4 would remain connected to one of formula_5 and formula_2, but not the other. Replacing the removed edge formula_3 by formula_6 or formula_7 (whichever of these two edges reconnects formula_4 to the vertex from which it was disconnected) would produce a shorter tree. For any edge formula_3 of any Euclidean minimum spanning tree, the rhombus with angles of 60° and 120°, having formula_3 as its long diagonal, is disjoint from the rhombi formed analogously by all other edges. Two edges sharing an endpoint cannot have overlapping rhombi, because that would imply an edge angle sharper than 60°, and two disjoint edges cannot have overlapping rhombi; if they did, the longer of the two edges could be replaced by a shorter edge among the same four vertices. Supergraphs. Certain geometric graphs have definitions involving empty regions in point sets, from which it follows that they contain every edge that can be part of a Euclidean minimum spanning tree. These include: Because the empty-region criteria for these graphs are progressively weaker, these graphs form an ordered sequence of subgraphs. 
That is, using "⊆" to denote the subset relationship among their edges, these graphs have the relations: &lt;templatestyles src="Block indent/styles.css"/&gt;Euclidean minimum spanning tree ⊆ relative neighborhood graph ⊆ Urquhart graph ⊆ Gabriel graph ⊆ Delaunay triangulation. Another graph guaranteed to contain the minimum spanning tree is the Yao graph, determined for points in the plane by dividing the plane around each point into six 60° wedges and connecting each point to the nearest neighbor in each wedge. The resulting graph contains the relative neighborhood graph, because two vertices with an empty lens must be the nearest neighbors to each other in their wedges. As with many of the other geometric graphs above, this definition can be generalized to higher dimensions, and (unlike the Delaunay triangulation) its generalizations always include a linear number of edges. Total length. For formula_0 points in the unit square (or any other fixed shape), the total length of the minimum spanning tree edges is formula_8. Some sets of points, such as points evenly spaced in a formula_9 grid, attain this bound. For points in a unit hypercube in formula_10-dimensional space, the corresponding bound is formula_11. The same bound applies to the expected total length of the minimum spanning tree for formula_0 points chosen uniformly and independently from a unit square or unit hypercube. Returning to the unit square, the sum of squared edge lengths of the minimum spanning tree is formula_12. This bound follows from the observation that the edges have disjoint rhombi, with area proportional to the edge lengths squared. The formula_8 bound on total length follows by application of the Cauchy–Schwarz inequality. Another interpretation of these results is that the average edge length for any set of points in a unit square is formula_13, at most proportional to the spacing of points in a regular grid; and that for "random" points in a unit square the average length is proportional to formula_14. However, in the random case, with high probability the longest edge has length approximately formula_15 longer than the average by a non-constant factor. With high probability, the longest edge forms a leaf of the spanning tree, and connects a point far from all the other points to its nearest neighbor. For large numbers of points, the distribution of the longest edge length around its expected value converges to a Gumbel distribution. Any geometric spanner, a subgraph of a complete geometric graph whose shortest paths approximate the Euclidean distance, must have total edge length at least as large as the minimum spanning tree, and one of the standard quality measures for a geometric spanner is the ratio between its total length and of the minimum spanning tree for the same points. Several methods for constructing spanners, such as the greedy geometric spanner, achieve a constant bound for this ratio. It has been conjectured that the Steiner ratio, the largest possible ratio between the total length of a minimum spanning tree and Steiner tree for the same set of points in the plane, is formula_16, the ratio for three points in an equilateral triangle. Subdivision. If every edge of a Euclidean minimum spanning tree is subdivided, by adding a new point at its midpoint, then the resulting tree is still a minimum spanning tree of the augmented point set. Repeating this subdivision process allows a Euclidean minimum spanning tree to be subdivided arbitrarily finely. 
However, subdividing only some of the edges, or subdividing the edges at points other than the midpoint, may produce a point set for which the subdivided tree is not the minimum spanning tree. Computational complexity. For points in any dimension, the minimum spanning tree can be constructed in time formula_17 by constructing a complete graph with an edge between every pair of points, weighted by Euclidean distance, and then applying a graph minimum spanning tree algorithm such as the Prim–Dijkstra–Jarník algorithm or Borůvka's algorithm to it. These algorithms can be made to take time formula_17 on complete graphs, unlike another common choice, Kruskal's algorithm, which is slower because it involves sorting all distances. For points in low-dimensional spaces, the problem may be solved more quickly, as detailed below. Computing Euclidean distances involves a square root calculation. In any comparison of edge weights, comparing the squares of the Euclidean distances, instead of the distances themselves, yields the same ordering, and so does not change the rest of the tree's computation. This shortcut speeds up calculation and allows a minimum spanning tree for points with integer coordinates to be constructed using only integer arithmetic. Two dimensions. A faster approach to finding the minimum spanning tree of planar points uses the property that it is a subgraph of the Delaunay triangulation: first construct the Delaunay triangulation of the points, which can be done in formula_1 time and, because the triangulation is a planar graph, produces at most formula_18 edges; then weight each of these edges by its (squared) Euclidean length and apply a graph minimum spanning tree algorithm to the resulting graph of formula_19 edges. The result is an algorithm taking formula_1 time, optimal in certain models of computation (see below). If the input coordinates are integers and can be used as array indices, faster algorithms are possible: the Delaunay triangulation can be constructed by a randomized algorithm in formula_20 expected time. Additionally, since the Delaunay triangulation is a planar graph, its minimum spanning tree can be found in linear time by a variant of Borůvka's algorithm that removes all but the cheapest edge between each pair of components after each stage of the algorithm. Therefore, the total expected time for this algorithm is formula_20. In the other direction, the Delaunay triangulation can be constructed from the minimum spanning tree in the near-linear time bound formula_21, where formula_22 denotes the iterated logarithm. Higher dimensions. The problem can also be generalized to formula_0 points in the formula_10-dimensional space formula_23. In higher dimensions, the connectivity determined by the Delaunay triangulation (which, likewise, partitions the convex hull into formula_10-dimensional simplices) contains the minimum spanning tree; however, the triangulation might contain the complete graph. Therefore, finding the Euclidean minimum spanning tree either as a spanning tree of the complete graph or as a spanning tree of the Delaunay triangulation takes formula_24 time. For three dimensions the minimum spanning tree can be found in time formula_25, and in any greater dimension, in time formula_26 for any formula_27, which is faster than the quadratic time bound for the complete graph and Delaunay triangulation algorithms. The optimal time complexity for higher-dimensional minimum spanning trees remains unknown, but is closely related to the complexity of computing "bichromatic closest pairs". In the bichromatic closest pair problem, the input is a set of points, given two different colors (say, red and blue). The output is a pair of a red point and a blue point with the minimum possible distance. This pair always forms one of the edges in the minimum spanning tree. 
Therefore, the bichromatic closest pair problem can be solved in the amount of time that it takes to construct a minimum spanning tree and scan its edges for the shortest red–blue edge. Conversely, for any red–blue coloring of any subset of a given set of points, the bichromatic closest pair produces one edge of the minimum spanning tree of the subset. By carefully choosing a sequence of colorings of subsets, and finding the bichromatic closest pair of each subproblem, the minimum spanning tree may be found in time proportional to the optimal time for finding bichromatic closest pairs for the same number of points, whatever that optimal time turns out to be. For uniformly random point sets in any bounded dimension, the Yao graph and the Delaunay triangulation have linear expected numbers of edges, are guaranteed to contain the minimum spanning tree, and can be constructed in linear expected time. From these graphs, the minimum spanning tree itself may be constructed in linear time, by using a randomized linear time algorithm for graph minimum spanning trees. However, the poor performance of these methods on inputs coming from clustered data has led algorithm engineering researchers to develop methods with a somewhat slower formula_1 time bound, for random inputs or inputs whose distances and clustering resemble those of random data, while exhibiting better performance on real-world data. A well-separated pair decomposition is a family of pairs of subsets of the given points, such that every pair of points belongs to one of these pairs of subsets, and such that all pairs of points coming from the same pair of subsets are separated by approximately the same distance. It is possible to find a well-separated pair decomposition with a linear number of subsets, and a representative pair of points for each subset, in time formula_1. The minimum spanning tree of the graph formed by these representative pairs is then an approximation to the minimum spanning tree. Using these ideas, a formula_28-approximation to the minimum spanning tree may be found in formula_1 time, for constant formula_29. More precisely, by choosing each representative pair to approximate the closest pair in its equivalence class, and carefully varying the quality of this approximation for different pairs, the dependence on formula_29 in the time bound can be given as formula_30 for any fixed dimension. Dynamic and kinetic. The Euclidean minimum spanning tree has been generalized in many different ways to systems of moving or changing points, for example to point sets that change by insertions and deletions of points and to points that move continuously over time, as studied with kinetic data structures. Lower bound. An asymptotic lower bound of formula_34 for the Euclidean minimum spanning tree problem can be established in restricted models of computation. These include the algebraic decision tree and algebraic computation tree models, in which the algorithm has access to the input points only through certain restricted primitives that perform simple algebraic computations on their coordinates. In these models, the closest pair of points problem requires formula_34 time, but the closest pair is necessarily an edge of the minimum spanning tree, so the minimum spanning tree also requires this much time. Therefore, algorithms for constructing the planar minimum spanning tree in time formula_1 within this model, for instance by using the Delaunay triangulation, are optimal. However, these lower bounds do not apply to models of computation with integer point coordinates, in which bitwise operations and table indexing operations on those coordinates are permitted. 
In these models, faster algorithms are possible, as described above. Applications. An obvious application of Euclidean minimum spanning trees is to find the cheapest network of wires or pipes to connect a set of places, assuming the links cost a fixed amount per unit length. The first publications on minimum spanning trees more generally concerned a geographic version of the problem, involving the design of an electrical grid for southern Moravia, and an application to minimizing wire lengths in circuits was described in 1957 by Loberman and Weinberger. Minimum spanning trees are closely related to single-linkage clustering, one of several methods for hierarchical clustering. The edges of a minimum spanning tree, sorted by their length, give the order in which to merge clusters into larger clusters in this clustering method. Once these edges have been found, by any algorithm, they may be used to construct the single-linkage clustering in time formula_1. Although the long thin cluster shapes produced by single-linkage clustering can be a bad fit for certain types of data, such as mixtures of Gaussian distributions, it can be a good choice in applications where the clusters themselves are expected to have long thin shapes, such as in modeling the dark matter halos of galaxies. In geographic information science, several researcher groups have used minimum spanning trees of the centroids of buildings to identify meaningful clusters of buildings, for instance by removing edges identified in some other way as inconsistent. Minimum spanning trees have also been used to infer the shape of curves in the plane, given points sampled along the curve. For a smooth curve, sampled more finely than its local feature size, the minimum spanning tree will form a path connecting consecutive points along the curve. More generally, similar methods can recognize curves drawn in a dotted or dashed style rather than as a single connected set. Applications of this curve-finding technique include particle physics, in automatically identifying the tracks left by particles in a bubble chamber. More sophisticated versions of this idea can find curves from a cloud of noisy sample points that roughly follows the curve outline, by using the topology of the spanning tree to guide a moving least squares method. Another application of minimum spanning trees is a constant-factor approximation algorithm for the Euclidean traveling salesman problem, the problem of finding the shortest polygonalization of a point set. Walking around the boundary of the minimum spanning tree can approximate the optimal traveling salesman tour within a factor of two of the optimal length. However, more accurate polynomial-time approximation schemes are known for this problem. In wireless ad hoc networks, broadcasting messages along paths in a minimum spanning tree can be an accurate approximation to the minimum-energy broadcast routing, which is, again, hard to compute exactly. Realization. The "realization problem" for Euclidean minimum spanning trees takes an abstract tree as input and seeks a geometric location for each vertex of the tree (in a space of some fixed dimension), such that the given tree equals the minimum spanning tree of those points. Not every abstract tree has such a realization; for instance, the tree must obey the kissing number bound on the degree of each vertex. 
Additional restrictions exist; for instance, it is not possible for a planar minimum spanning tree to have a degree-six vertex adjacent to a vertex of degree five or six. Determining whether a two-dimensional realization exists is NP-hard. However, the proof of hardness depends on the fact that degree-six vertices in a tree have a very restricted set of realizations: the neighbors of such a vertex must be placed on the vertices of a regular hexagon centered at that vertex. Indeed, for trees of maximum degree five, a planar realization always exists. Similarly, for trees of maximum degree ten, a three-dimensional realization always exists. For these realizations, some trees may require edges of exponential length and bounding boxes of exponential area relative to the length of their shortest edge. Trees of maximum degree four have smaller planar realizations, with polynomially bounded edge lengths and bounding boxes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
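To make the quadratic-time construction described under "Computational complexity" above concrete, the following Python sketch builds the tree with the Prim–Dijkstra–Jarník algorithm on the implicit complete graph, comparing squared distances so that no square roots are needed. It is only an illustrative sketch, not code from the references of this article; the function name and the sample coordinates are arbitrary choices made for the example.

```python
def euclidean_mst(points):
    """Minimum spanning tree of a list of points (tuples of coordinates).

    Works on the implicit complete graph with the Prim-Dijkstra-Jarnik
    algorithm, comparing squared Euclidean distances (which preserves the
    ordering of edge weights), so the whole computation takes O(n^2) time.
    Returns the tree as a list of index pairs.
    """
    n = len(points)
    if n == 0:
        return []

    def sqdist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    in_tree = [False] * n
    best = [float("inf")] * n   # cheapest squared distance to the growing tree
    parent = [-1] * n           # tree vertex realizing that distance
    best[0] = 0.0
    edges = []
    for _ in range(n):
        # pick the cheapest vertex not yet in the tree
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] != -1:
            edges.append((parent[u], u))
        # relax the distances of the remaining vertices
        for v in range(n):
            if not in_tree[v]:
                d = sqdist(points[u], points[v])
                if d < best[v]:
                    best[v], parent[v] = d, u
    return edges

# Example: four collinear points and one point above the first.
print(euclidean_mst([(0, 0), (1, 0), (2, 0), (3, 0), (0, 2)]))
# [(0, 1), (1, 2), (2, 3), (0, 4)]
```

The same tree would be produced, more quickly for large planar inputs, by first restricting the candidate edges to the Delaunay triangulation as described under "Two dimensions".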
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "O(n\\log n)" }, { "math_id": 2, "text": "v" }, { "math_id": 3, "text": "uv" }, { "math_id": 4, "text": "w" }, { "math_id": 5, "text": "u" }, { "math_id": 6, "text": "uw" }, { "math_id": 7, "text": "vw" }, { "math_id": 8, "text": "O(\\sqrt n)" }, { "math_id": 9, "text": "\\sqrt n \\times \\sqrt n" }, { "math_id": 10, "text": "d" }, { "math_id": 11, "text": "O(n^{(d-1)/d})" }, { "math_id": 12, "text": "O(1)" }, { "math_id": 13, "text": "O(1/\\sqrt n)" }, { "math_id": 14, "text": "1/\\sqrt n" }, { "math_id": 15, "text": "\\sqrt{\\frac{\\log n}{\\pi n}}," }, { "math_id": 16, "text": "2/\\sqrt{3}\\approx 1.1547" }, { "math_id": 17, "text": "O(n^2)" }, { "math_id": 18, "text": "3n-6" }, { "math_id": 19, "text": "O(n)" }, { "math_id": 20, "text": "O(n\\log\\log n)" }, { "math_id": 21, "text": "O(n\\log^* n)" }, { "math_id": 22, "text": "\\log^*" }, { "math_id": 23, "text": "\\R^d" }, { "math_id": 24, "text": "O(dn^2)" }, { "math_id": 25, "text": "O\\bigl((n\\log n)^{4/3}\\bigr)" }, { "math_id": 26, "text": "O\\left(n^{2-\\frac{2}{\\lceil d/2\\rceil+1}+\\varepsilon}\\right)" }, { "math_id": 27, "text": "\\varepsilon>0" }, { "math_id": 28, "text": "(1+\\varepsilon)" }, { "math_id": 29, "text": "\\varepsilon" }, { "math_id": 30, "text": "O(n \\log n + (\\varepsilon^{-2} \\log ^2 \\tfrac{1}{\\varepsilon})n)," }, { "math_id": 31, "text": "O(\\log^2 n)" }, { "math_id": 32, "text": "O(\\log^{10} n)" }, { "math_id": 33, "text": "n^{25/9}" }, { "math_id": 34, "text": "\\Omega(n\\log n)" } ]
https://en.wikipedia.org/wiki?curid=1040597
10406190
Standard probability space
Type of probability space In probability theory, a standard probability space, also called Lebesgue–Rokhlin probability space or just Lebesgue space (the latter term is ambiguous) is a probability space satisfying certain assumptions introduced by Vladimir Rokhlin in 1940. Informally, it is a probability space consisting of an interval and/or a finite or countable number of atoms. The theory of standard probability spaces was started by von Neumann in 1932 and shaped by Vladimir Rokhlin in 1940. Rokhlin showed that the unit interval endowed with the Lebesgue measure has important advantages over general probability spaces, yet can be effectively substituted for many of these in probability theory. The dimension of the unit interval is not an obstacle, as was clear already to Norbert Wiener. He constructed the Wiener process (also called Brownian motion) in the form of a measurable map from the unit interval to the space of continuous functions. Short history. The theory of standard probability spaces was started by von Neumann in 1932 and shaped by Vladimir Rokhlin in 1940. For modernized presentations see , , and . Nowadays standard probability spaces may be (and often are) treated in the framework of descriptive set theory, via standard Borel spaces, see for example . This approach is based on the isomorphism theorem for standard Borel spaces . An alternate approach of Rokhlin, based on measure theory, neglects null sets, in contrast to descriptive set theory. Standard probability spaces are used routinely in ergodic theory. Definition. One of several well-known equivalent definitions of the standardness is given below, after some preparations. All probability spaces are assumed to be complete. Isomorphism. An isomorphism between two probability spaces formula_0, formula_1 is an invertible map formula_2 such that formula_3 and formula_4 both are (measurable and) measure preserving maps. Two probability spaces are isomorphic if there exists an isomorphism between them. Isomorphism modulo zero. Two probability spaces formula_0, formula_1 are isomorphic formula_5 if there exist null sets formula_6, formula_7 such that the probability spaces formula_8, formula_9 are isomorphic (being endowed naturally with sigma-fields and probability measures). Standard probability space. A probability space is standard, if it is isomorphic formula_5 to an interval with Lebesgue measure, a finite or countable set of atoms, or a combination (disjoint union) of both. See , , and . See also , and . In the measure is assumed finite, not necessarily probabilistic. In atoms are not allowed. Examples of non-standard probability spaces. A naive white noise. The space of all functions formula_10 may be thought of as the product formula_11 of a continuum of copies of the real line formula_12. One may endow formula_12 with a probability measure, say, the standard normal distribution formula_13, and treat the space of functions as the product formula_14 of a continuum of identical probability spaces formula_15. The product measure formula_16 is a probability measure on formula_11. Naively it might seem that formula_16 describes white noise. However, the integral of a white noise function from 0 to 1 should be a random variable distributed "N"(0, 1). In contrast, the integral (from 0 to 1) of formula_17 is undefined. "ƒ" also fails to be almost surely measurable, and the probability of "ƒ" being measurable is undefined. 
Indeed, if "X" is a random variable distributed (say) uniformly on (0, 1) and independent of "ƒ", then "ƒ"("X") is not a random variable at all (it lacks measurability). A perforated interval. Let formula_18 be a set whose inner Lebesgue measure is equal to 0, but outer Lebesgue measure is equal to 1 (thus, formula_19 is nonmeasurable to extreme). There exists a probability measure formula_20 on formula_19 such that formula_21 for every Lebesgue measurable formula_22. (Here formula_23 is the Lebesgue measure.) Events and random variables on the probability space formula_24 (treated formula_5) are in a natural one-to-one correspondence with events and random variables on the probability space formula_25. It might seem that the probability space formula_24 is as good as formula_25. However, it is not. A random variable formula_26 defined by formula_27 is distributed uniformly on formula_28. The conditional measure, given formula_29, is just a single atom (at formula_30), provided that formula_25 is the underlying probability space. However, if formula_24 is used instead, then the conditional measure does not exist when formula_31. A perforated circle is constructed similarly. Its events and random variables are the same as on the usual circle. The group of rotations acts on them naturally. However, it fails to act on the perforated circle. See also . A superfluous measurable set. Let formula_18 be as in the previous example. Sets of the form formula_32 where formula_33 and formula_34 are arbitrary Lebesgue measurable sets, are a σ-algebra formula_35 it contains the Lebesgue σ-algebra and formula_36 The formula formula_37 gives the general form of a probability measure formula_20 on formula_38 that extends the Lebesgue measure; here formula_39 is a parameter. To be specific, we choose formula_40 It might seem that such an extension of the Lebesgue measure is at least harmless. However, it is the perforated interval in disguise. The map formula_41 is an isomorphism between formula_42 and the perforated interval corresponding to the set formula_43 another set of inner Lebesgue measure 0 but outer Lebesgue measure 1. See also . A criterion of standardness. Standardness of a given probability space formula_44 is equivalent to a certain property of a measurable map formula_3 from formula_44 to a measurable space formula_45 The answer (standard, or not) does not depend on the choice of formula_46 and formula_3. This fact is quite useful; one may adapt the choice of formula_46 and formula_3 to the given formula_47 No need to examine all cases. It may be convenient to examine a random variable formula_48 a random vector formula_49 a random sequence formula_50 or a sequence of events formula_51 treated as a sequence of two-valued random variables, formula_52 Two conditions will be imposed on formula_3 (to be injective, and generating). Below it is assumed that such formula_3 is given. The question of its existence will be addressed afterwards. The probability space formula_44 is assumed to be complete (otherwise it cannot be standard). A single random variable. A measurable function formula_53 induces a pushforward measure formula_54, – the probability measure formula_55 on formula_56 defined by formula_57    for Borel sets formula_58 i.e. the distribution of the random variable formula_59. The image formula_60 is always a set of full outer measure, formula_61 but its inner measure can differ (see "a perforated interval"). 
In other words, formula_60 need not be a set of full measure formula_62 A measurable function formula_53 is called "generating" if formula_63 is the completion with respect to formula_64 of the σ-algebra of inverse images formula_65 where formula_66 runs over all Borel sets. "Caution."   The following condition is not sufficient for formula_3 to be generating: for every formula_67 there exists a Borel set formula_66 such that formula_68 (formula_69 means symmetric difference). Theorem. Let a measurable function formula_53 be injective and generating; then the following two conditions are equivalent: formula_70 (in other words, formula_71 is a set of full measure); and the probability space formula_72 is standard. See also . A random vector. The same theorem holds for any formula_73 (in place of formula_74). A measurable function formula_75 may be thought of as a finite sequence of random variables formula_76 and formula_77 is generating if and only if formula_78 is the completion of the σ-algebra generated by formula_79 A random sequence. The theorem still holds for the space formula_80 of infinite sequences. A measurable function formula_81 may be thought of as an infinite sequence of random variables formula_82 and formula_77 is generating if and only if formula_78 is the completion of the σ-algebra generated by formula_83 A sequence of events. In particular, if the random variables formula_84 take on only two values 0 and 1, we deal with a measurable function formula_85 and a sequence of sets formula_86 The function formula_77 is generating if and only if formula_78 is the completion of the σ-algebra generated by formula_87 In the pioneering work sequences formula_88 that correspond to injective, generating formula_77 are called "bases" of the probability space formula_72 (see ). A basis is called complete mod 0, if formula_89 is of full measure formula_90 see . In the same section Rokhlin proved that if a probability space is complete mod 0 with respect to some basis, then it is complete mod 0 with respect to every other basis, and defines "Lebesgue spaces" by this completeness property. See also and . Additional remarks. The four cases treated above are mutually equivalent, and can be united, since the measurable spaces formula_91 formula_92 formula_80 and formula_93 are mutually isomorphic; they all are standard measurable spaces (in other words, standard Borel spaces). Existence of an injective measurable function from formula_44 to a standard measurable space formula_46 does not depend on the choice of formula_94 Taking formula_95 we get the property well known as being "countably separated" (but called "separable" in ). Existence of a generating measurable function from formula_44 to a standard measurable space formula_46 also does not depend on the choice of formula_94 Taking formula_95 we get the property well known as being "countably generated" (mod 0), see . Every injective measurable function from a "standard" probability space to a "standard" measurable space is generating. See , , . This property does not hold for the non-standard probability space dealt with in the subsection "A superfluous measurable set" above. "Caution."   The property of being countably generated is invariant under mod 0 isomorphisms, but the property of being countably separated is not. In fact, a standard probability space formula_44 is countably separated if and only if the cardinality of formula_96 does not exceed continuum (see ). A standard probability space may contain a null set of any cardinality, thus, it need not be countably separated. 
However, it always contains a countably separated subset of full measure. Equivalent definitions. Let formula_44 be a complete probability space such that the cardinality of formula_96 does not exceed continuum (the general case is reduced to this special case, see the caution above). Via absolute measurability. Definition.   formula_44 is standard if it is countably separated, countably generated, and absolutely measurable. See and . "Absolutely measurable" means: measurable in every countably separated, countably generated probability space containing it. Via perfectness. Definition.   formula_44 is standard if it is countably separated and perfect. See . "Perfect" means that for every measurable function from formula_44 to formula_74 the image measure is regular. (Here the image measure is defined on all sets whose inverse images belong to formula_63, irrespective of the Borel structure of formula_74). Via topology. Definition.   formula_44 is standard if there exists a topology formula_97 on formula_96 such that the topological space formula_98 is metrizable; formula_63 is the completion of the σ-algebra generated by formula_97 (that is, by all open sets); and for every formula_99 there exists a compact set formula_100 in formula_98 such that formula_101 See . Verifying the standardness. Every probability distribution on the space formula_102 turns it into a standard probability space. (Here, a probability distribution means a probability measure defined initially on the Borel sigma-algebra and completed.) The same holds on every Polish space, see , , , and . For example, the Wiener measure turns the Polish space formula_103 (of all continuous functions formula_104 endowed with the topology of local uniform convergence) into a standard probability space. Another example: for every sequence of random variables, their joint distribution turns the Polish space formula_105 (of sequences; endowed with the product topology) into a standard probability space. The product of two standard probability spaces is a standard probability space. The same holds for the product of countably many spaces, see , , and . A measurable subset of a standard probability space is a standard probability space. It is assumed that the set is not a null set, and is endowed with the conditional measure. See and . Every probability measure on a standard Borel space turns it into a standard probability space. Using the standardness. Regular conditional probabilities. In the discrete setup, the conditional probability is another probability measure, and the conditional expectation may be treated as the (usual) expectation with respect to the conditional measure, see conditional expectation. In the non-discrete setup, conditioning is often treated indirectly, since the condition may have probability 0, see conditional expectation. As a result, a number of well-known facts have special 'conditional' counterparts. For example: linearity of the expectation; Jensen's inequality (see conditional expectation); Hölder's inequality; the monotone convergence theorem, etc. Given a random variable formula_106 on a probability space formula_44, it is natural to try constructing a conditional measure formula_107, that is, the conditional distribution of formula_108 given formula_109. In general this is impossible (see ). However, for a "standard" probability space formula_44 this is possible, and well known as "canonical system of measures" (see ), which is basically the same as "conditional probability measures" (see ), "disintegration of measure" (see ), and "regular conditional probabilities" (see ). The conditional Jensen's inequality is just the (usual) Jensen's inequality applied to the conditional measure. The same holds for many other facts. 
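A simple concrete illustration of such a canonical system of measures (a standard textbook example, not taken from the specific sources cited above): take formula_44 to be the unit square [0, 1] × [0, 1] with two-dimensional Lebesgue measure, which is a standard probability space because it is the product of two copies of the unit interval, and let the random variable formula_106 be the first coordinate of a point of the square. Then the conditional measure formula_107 is the one-dimensional Lebesgue measure on the vertical segment {y} × [0, 1], and conditioning on formula_109 amounts to integrating along that segment; the conditional Jensen's inequality, for instance, is just the ordinary Jensen's inequality applied to each of these fiber measures. By contrast, on a non-standard space such as the perforated interval described earlier, such a system of conditional measures need not exist.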
Measure preserving transformations. Given two probability spaces formula_0, formula_1 and a measure preserving map formula_2, the image formula_110 need not cover the whole formula_111, it may miss a null set. It may seem that formula_112 has to be equal to 1, but it is not so. The outer measure of formula_110 is equal to 1, but the inner measure may differ. However, if the probability spaces formula_0, formula_1 are "standard " then formula_113, see . If formula_3 is also one-to-one then every formula_114 satisfies formula_115, formula_116. Therefore, formula_4 is measurable (and measure preserving). See and . See also . "There is a coherent way to ignore the sets of measure 0 in a measure space" . Striving to get rid of null sets, mathematicians often use equivalence classes of measurable sets or functions. Equivalence classes of measurable subsets of a probability space form a normed complete Boolean algebra called the "measure algebra" (or metric structure). Every measure preserving map formula_2 leads to a homomorphism formula_117 of measure algebras; basically, formula_118 for formula_119. It may seem that every homomorphism of measure algebras has to correspond to some measure preserving map, but it is not so. However, for "standard" probability spaces each formula_117 corresponds to some formula_3. See , , .
[ { "math_id": 0, "text": "\\textstyle (\\Omega_1,\\mathcal{F}_1,P_1) " }, { "math_id": 1, "text": "\\textstyle (\\Omega_2,\\mathcal{F}_2,P_2) " }, { "math_id": 2, "text": "\\textstyle f : \\Omega_1 \\to \\Omega_2 " }, { "math_id": 3, "text": "\\textstyle f " }, { "math_id": 4, "text": "\\textstyle f^{-1} " }, { "math_id": 5, "text": "\\textstyle \\operatorname{mod} \\, 0 " }, { "math_id": 6, "text": "\\textstyle A_1 \\subset \\Omega_1 " }, { "math_id": 7, "text": "\\textstyle A_2 \\subset \\Omega_2 " }, { "math_id": 8, "text": "\\textstyle \\Omega_1 \\setminus A_1 " }, { "math_id": 9, "text": "\\textstyle \\Omega_2 \\setminus A_2 " }, { "math_id": 10, "text": "\\textstyle f : \\mathbb{R} \\to \\mathbb{R} " }, { "math_id": 11, "text": "\\textstyle \\mathbb{R}^\\mathbb{R} " }, { "math_id": 12, "text": "\\textstyle \\mathbb{R} " }, { "math_id": 13, "text": "\\textstyle \\gamma = N(0,1) " }, { "math_id": 14, "text": "\\textstyle (\\mathbb{R},\\gamma)^\\mathbb{R} " }, { "math_id": 15, "text": "\\textstyle (\\mathbb{R},\\gamma) " }, { "math_id": 16, "text": "\\textstyle \\gamma^\\mathbb{R} " }, { "math_id": 17, "text": "\\textstyle f \\in \\textstyle (\\mathbb{R},\\gamma)^\\mathbb{R} " }, { "math_id": 18, "text": "\\textstyle Z \\subset (0,1) " }, { "math_id": 19, "text": "\\textstyle Z " }, { "math_id": 20, "text": "\\textstyle m " }, { "math_id": 21, "text": "\\textstyle m(Z \\cap A) = \\operatorname{mes} (A) " }, { "math_id": 22, "text": "\\textstyle A \\subset (0,1) " }, { "math_id": 23, "text": "\\textstyle \\operatorname{mes}" }, { "math_id": 24, "text": "\\textstyle (Z,m) " }, { "math_id": 25, "text": "\\textstyle ((0,1),\\operatorname{mes}) " }, { "math_id": 26, "text": "\\textstyle X " }, { "math_id": 27, "text": "\\textstyle X(\\omega)=\\omega " }, { "math_id": 28, "text": "\\textstyle (0,1) " }, { "math_id": 29, "text": "\\textstyle X=x " }, { "math_id": 30, "text": "\\textstyle x" }, { "math_id": 31, "text": "\\textstyle x \\notin Z " }, { "math_id": 32, "text": "\\textstyle ( A \\cap Z ) \\cup ( B \\setminus Z ), " }, { "math_id": 33, "text": "\\textstyle A " }, { "math_id": 34, "text": "\\textstyle B " }, { "math_id": 35, "text": "\\textstyle \\mathcal{F}; " }, { "math_id": 36, "text": "\\textstyle Z. " }, { "math_id": 37, "text": "\\displaystyle m \\big( ( A \\cap Z ) \\cup ( B \\setminus Z ) \\big) = p \\, \\operatorname{mes} (A) + (1-p) \\operatorname{mes} (B) " }, { "math_id": 38, "text": "\\textstyle \\big( (0,1), \\mathcal{F} \\big) " }, { "math_id": 39, "text": "\\textstyle p \\in [0,1] " }, { "math_id": 40, "text": "\\textstyle p = 0.5. " }, { "math_id": 41, "text": " f(x) = \\begin{cases}\n 0.5 x &\\text{for } x \\in Z, \\\\\n 0.5 + 0.5 x &\\text{for } x \\in (0,1) \\setminus Z\n\\end{cases} " }, { "math_id": 42, "text": "\\textstyle \\big( (0,1), \\mathcal{F}, m \\big) " }, { "math_id": 43, "text": "\\displaystyle Z_1 = \\{ 0.5 x : x \\in Z \\} \\cup \\{ 0.5 + 0.5 x : x \\in (0,1) \\setminus Z \\} \\, ," }, { "math_id": 44, "text": "\\textstyle (\\Omega,\\mathcal{F},P) " }, { "math_id": 45, "text": "\\textstyle (X,\\Sigma)." }, { "math_id": 46, "text": "\\textstyle (X,\\Sigma) " }, { "math_id": 47, "text": "\\textstyle (\\Omega,\\mathcal{F},P)." 
}, { "math_id": 48, "text": "\\textstyle f : \\Omega \\to \\mathbb{R}, " }, { "math_id": 49, "text": "\\textstyle f : \\Omega \\to \\mathbb{R}^n, " }, { "math_id": 50, "text": "\\textstyle f : \\Omega \\to \\mathbb{R}^\\infty, " }, { "math_id": 51, "text": "\\textstyle (A_1,A_2,\\dots) " }, { "math_id": 52, "text": "\\textstyle f : \\Omega \\to \\{0,1\\}^\\infty." }, { "math_id": 53, "text": "\\textstyle f : \\Omega \\to \\mathbb{R} " }, { "math_id": 54, "text": "f_*P" }, { "math_id": 55, "text": "\\textstyle \\mu " }, { "math_id": 56, "text": "\\textstyle \\mathbb{R}, " }, { "math_id": 57, "text": "\\displaystyle \\mu(B) = (f_*P)(B) = P \\big( f^{-1}(B) \\big) " }, { "math_id": 58, "text": "\\textstyle B \\subset \\mathbb{R}. " }, { "math_id": 59, "text": "f" }, { "math_id": 60, "text": "\\textstyle f (\\Omega) " }, { "math_id": 61, "text": "\\displaystyle \\mu^* \\big( f(\\Omega) \\big) = \\inf_{B \\supset f(\\Omega)}\\mu(B) = \\inf_{B \\supset f(\\Omega)}P(f^{-1}(B)) = P(\\Omega) = 1, " }, { "math_id": 62, "text": "\\textstyle \\mu. " }, { "math_id": 63, "text": "\\textstyle \\mathcal{F} " }, { "math_id": 64, "text": "P" }, { "math_id": 65, "text": "\\textstyle f^{-1}(B), " }, { "math_id": 66, "text": "\\textstyle B \\subset \\mathbb{R} " }, { "math_id": 67, "text": "\\textstyle A \\in \\mathcal{F} " }, { "math_id": 68, "text": "\\textstyle P ( A \\mathbin{\\Delta} f^{-1}(B) ) = 0. " }, { "math_id": 69, "text": "\\textstyle \\Delta " }, { "math_id": 70, "text": "\\mu(\\textstyle f (\\Omega)) = 1 " }, { "math_id": 71, "text": "\\textstyle f (\\Omega)" }, { "math_id": 72, "text": " (\\Omega,\\mathcal{F},P) \\," }, { "math_id": 73, "text": " \\mathbb{R}^n \\," }, { "math_id": 74, "text": " \\mathbb{R} \\," }, { "math_id": 75, "text": " f : \\Omega \\to \\mathbb{R}^n \\," }, { "math_id": 76, "text": " X_1,\\dots,X_n : \\Omega \\to \\mathbb{R}, \\," }, { "math_id": 77, "text": " f \\," }, { "math_id": 78, "text": " \\mathcal{F} \\," }, { "math_id": 79, "text": " X_1,\\dots,X_n. \\," }, { "math_id": 80, "text": " \\mathbb{R}^\\infty \\," }, { "math_id": 81, "text": " f : \\Omega \\to \\mathbb{R}^\\infty \\," }, { "math_id": 82, "text": " X_1,X_2,\\dots : \\Omega \\to \\mathbb{R}, \\," }, { "math_id": 83, "text": " X_1,X_2,\\dots. \\," }, { "math_id": 84, "text": " X_n \\," }, { "math_id": 85, "text": " f : \\Omega \\to \\{0,1\\}^\\infty \\," }, { "math_id": 86, "text": " A_1,A_2,\\ldots \\in \\mathcal{F}. \\," }, { "math_id": 87, "text": " A_1,A_2,\\dots. \\," }, { "math_id": 88, "text": " A_1,A_2,\\ldots \\," }, { "math_id": 89, "text": " f(\\Omega) \\," }, { "math_id": 90, "text": " \\mu, \\," }, { "math_id": 91, "text": " \\mathbb{R}, \\," }, { "math_id": 92, "text": " \\mathbb{R}^n, \\," }, { "math_id": 93, "text": " \\{0,1\\}^\\infty \\," }, { "math_id": 94, "text": "\\textstyle (X,\\Sigma). " }, { "math_id": 95, "text": "\\textstyle (X,\\Sigma) = \\{0,1\\}^\\infty " }, { "math_id": 96, "text": "\\textstyle \\Omega " }, { "math_id": 97, "text": "\\textstyle \\tau " }, { "math_id": 98, "text": "\\textstyle (\\Omega,\\tau) " }, { "math_id": 99, "text": "\\textstyle \\varepsilon > 0 " }, { "math_id": 100, "text": "\\textstyle K " }, { "math_id": 101, "text": "\\textstyle P(K) \\ge 1-\\varepsilon. 
" }, { "math_id": 102, "text": "\\textstyle \\mathbb{R}^n " }, { "math_id": 103, "text": "\\textstyle C[0,\\infty) " }, { "math_id": 104, "text": "\\textstyle [0,\\infty) \\to \\mathbb{R}, " }, { "math_id": 105, "text": "\\textstyle \\mathbb{R}^\\infty " }, { "math_id": 106, "text": "\\textstyle Y " }, { "math_id": 107, "text": "\\textstyle P_y " }, { "math_id": 108, "text": "\\textstyle \\omega \\in \\Omega " }, { "math_id": 109, "text": "\\textstyle Y(\\omega)=y " }, { "math_id": 110, "text": "\\textstyle f(\\Omega_1) " }, { "math_id": 111, "text": "\\textstyle \\Omega_2 " }, { "math_id": 112, "text": "\\textstyle P_2(f(\\Omega_1)) " }, { "math_id": 113, "text": "\\textstyle P_2(f(\\Omega_1))=1 " }, { "math_id": 114, "text": "\\textstyle A \\in \\mathcal{F}_1 " }, { "math_id": 115, "text": "\\textstyle f(A) \\in \\mathcal{F}_2 " }, { "math_id": 116, "text": "\\textstyle P_2(f(A))=P_1(A) " }, { "math_id": 117, "text": "\\textstyle F " }, { "math_id": 118, "text": "\\textstyle F(B) = f^{-1}(B) " }, { "math_id": 119, "text": "\\textstyle B\\in\\mathcal{F}_2 " } ]
https://en.wikipedia.org/wiki?curid=10406190
1040671
129 (number)
Natural number 129 (one hundred [and] twenty-nine) is the natural number following 128 and preceding 130. In mathematics. 129 is the sum of the first ten prime numbers. It is the smallest number that can be expressed as a sum of three squares in four different ways: formula_0, formula_1, formula_2, and formula_3. 129 is the product of only two primes, 3 and 43, making 129 a semiprime. Since 3 and 43 are both Gaussian primes, this means that 129 is a Blum integer. 129 is a repdigit in base 6 (333). 129 is a happy number. 129 is a centered octahedral number. In other fields. 129 is also: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
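The arithmetic claims above are easy to check by brute force. The following Python sketch is only an illustration (the helper names are arbitrary, and the search bound of 200 is chosen merely to cover the numbers involved):

```python
from math import isqrt

def three_square_representations(n):
    """Unordered ways of writing n as a*a + b*b + c*c with integers a >= b >= c >= 1."""
    reps = []
    a = 1
    while a * a <= n:
        b = 1
        while b <= a and a * a + b * b <= n:
            c2 = n - a * a - b * b
            c = isqrt(c2)
            if 1 <= c <= b and c * c == c2:
                reps.append((a, b, c))
            b += 1
        a += 1
    return reps

# Sum of the first ten primes:
primes, k = [], 2
while len(primes) < 10:
    if all(k % p for p in primes):   # k is prime if no smaller prime divides it
        primes.append(k)
    k += 1
print(sum(primes))  # 129

# Representations of 129 as a sum of three positive squares, and the smallest
# number with at least four such representations:
print(three_square_representations(129))  # (8, 7, 4), (8, 8, 1), (10, 5, 2), (11, 2, 2)
print(min(n for n in range(1, 200) if len(three_square_representations(n)) >= 4))  # 129
```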
[ { "math_id": 0, "text": "11^2+2^2+2^2" }, { "math_id": 1, "text": "10^2+5^2+2^2" }, { "math_id": 2, "text": "8^2+8^2+1^2" }, { "math_id": 3, "text": "8^2+7^2+4^2" } ]
https://en.wikipedia.org/wiki?curid=1040671
1040853
Radio window
The radio window is the region of the radio spectrum that penetrates the Earth's atmosphere. Typically, the lower limit of the radio window's range has a value of about 10 MHz (λ ≈ 30 m); the best upper limit achievable from optimal terrestrial observation sites is equal to approximately 1 THz (λ ≈ 0.3 mm). It plays an important role in astronomy; up until the 1940s, astronomers could only use the visible and near infrared spectra for their measurements and observations. With the development of radio telescopes, the radio window became increasingly usable, leading to the development of radio astronomy, which has provided astrophysicists with valuable observational data. Factors affecting lower and upper limits. The lower and upper limits of the radio window's range of frequencies are not fixed; they depend on a variety of factors. Absorption of mid-IR. The upper limit is affected by the vibrational transitions of atmospheric molecules such as oxygen (O2), carbon dioxide (CO2), and water (H2O), whose energies are comparable to the energies of mid-infrared photons: these molecules largely absorb the mid-infrared radiation that heads towards Earth. Ionosphere. The radio window's lower frequency limit is greatly affected by the ionospheric refraction of the radio waves whose frequencies are approximately below 30 MHz (λ &gt; 10 m); radio waves with frequencies below the limit of 10 MHz (λ &gt; 30 m) are reflected back into space by the ionosphere. The lower limit coincides with the plasma frequency, which grows as the square root of the density of the ionosphere's free electrons: formula_0 where formula_1 is the plasma frequency in Hz and formula_2 the electron density in electrons per cubic meter. Since it is highly dependent on sunlight, the value of formula_2 changes significantly from daytime to nighttime: it is usually higher during the day, raising the radio window's lower limit, and lower during the night, lowering the window's low-frequency end. However, this also depends on the solar activity and the geographic position. Troposphere. When performing observations, radio astronomers try to extend the upper limit of the radio window towards the 1 THz optimum, since astronomical objects emit spectral lines of greater intensity in the higher frequency range. Tropospheric water vapour greatly affects the upper limit since its resonant absorption frequency bands are 22.3 GHz (λ ≈ 1.32 cm), 183.3 GHz (λ ≈ 1.64 mm) and 323.8 GHz (λ ≈ 0.93 mm). The absorption bands of tropospheric oxygen at 60 GHz (λ ≈ 5.00 mm) and 118.74 GHz (λ ≈ 2.52 mm) also affect the upper limit. To tackle the issue of water vapour, many observatories are built at high altitudes where the climate is drier. However, little can be done to avoid the interference of oxygen with radio wave propagation. Radio frequency interference. The width of the radio window is also affected by radio frequency interference, which hinders observations at certain wavelength ranges and undermines the quality of the observational data of radio astronomy.
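As a rough numerical illustration of the plasma-frequency cutoff discussed above, the following Python sketch evaluates the formula for two order-of-magnitude electron densities; the density values are illustrative assumptions, not measured figures.

```python
from math import sqrt

def plasma_frequency_hz(electron_density_per_m3):
    """Plasma frequency f_p = 9 * sqrt(N_e), with N_e in electrons per cubic metre."""
    return 9.0 * sqrt(electron_density_per_m3)

# Illustrative order-of-magnitude densities (assumed values, not measurements):
print(plasma_frequency_hz(1e12) / 1e6)  # 9.0  -> about 9 MHz for a daytime-like density
print(plasma_frequency_hz(1e11) / 1e6)  # ~2.8 -> about 2.8 MHz for a lower, night-time-like density
```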
[ { "math_id": 0, "text": " f_p = 9 \\sqrt{N_e}," }, { "math_id": 1, "text": "f_p" }, { "math_id": 2, "text": "N_e" } ]
https://en.wikipedia.org/wiki?curid=1040853
1040970
Confidence region
In statistics, a confidence region is a multi-dimensional generalization of a confidence interval. It is a set of points in an "n"-dimensional space, often represented as an ellipsoid around a point which is an estimated solution to a problem, although other shapes can occur. Interpretation. The confidence region is calculated in such a way that if a set of measurements were repeated many times and a confidence region calculated in the same way on each set of measurements, then a certain percentage of the time (e.g. 95%) the confidence region would include the point representing the "true" values of the set of variables being estimated. However, unless certain assumptions about prior probabilities are made, it does not mean, when one confidence region has been calculated, that there is a 95% probability that the "true" values lie inside the region, since we do not assume any particular probability distribution of the "true" values and we may or may not have other information about where they are likely to lie. The case of independent, identically normally-distributed errors. Suppose we have found a solution formula_0 to the following overdetermined problem: formula_1 where Y is an "n"-dimensional column vector containing observed values of the dependent variable, X is an "n"-by-"p" matrix of observed values of independent variables (which can represent a physical model) which is assumed to be known exactly, formula_0 is a column vector containing the "p" parameters which are to be estimated, and formula_2 is an "n"-dimensional column vector of errors which are assumed to be independently distributed with normal distributions with zero mean and each having the same unknown variance formula_3. A joint 100(1 − "α") % confidence region for the elements of formula_0 is represented by the set of values of the vector b which satisfy the following inequality: formula_4 where the variable b represents any point in the confidence region, "p" is the number of parameters, i.e. number of elements of the vector formula_5 formula_6 is the vector of estimated parameters, and "s"2 is the reduced chi-squared, an unbiased estimate of formula_3 equal to formula_7 Further, "F" is the quantile function of the F-distribution, with "p" and formula_8 degrees of freedom, formula_9 is the statistical significance level, and the symbol formula_10 means the transpose of formula_11. The expression can be rewritten as: formula_12 where formula_13 is the least-squares scaled covariance matrix of formula_6. The above inequality defines an ellipsoidal region in the "p"-dimensional Cartesian parameter space R"p". The centre of the ellipsoid is at the estimate formula_6. According to Press et al., it is easier to plot the ellipsoid after doing singular value decomposition. The lengths of the axes of the ellipsoid are proportional to the reciprocals of the values on the diagonals of the diagonal matrix, and the directions of these axes are given by the rows of the 3rd matrix of the decomposition. Weighted and generalised least squares. Now consider the more general case where some distinct elements of formula_2 have known nonzero covariance (in other words, the errors in the observations are not independently distributed), and/or the standard deviations of the errors are not all equal. 
Suppose the covariance matrix of formula_2 is formula_14, where V is an "n"-by-"n" nonsingular matrix which was equal to formula_15 in the more specific case handled in the previous section, (where I is the identity matrix,) but here is allowed to have nonzero off-diagonal elements representing the covariance of pairs of individual observations, as well as not necessarily having all the diagonal elements equal. It is possible to find a nonsingular symmetric matrix P such that formula_16 In effect, P is a square root of the covariance matrix V. The least-squares problem formula_1 can then be transformed by left-multiplying each term by the inverse of P, forming the new problem formulation formula_17 where formula_18 formula_19 and formula_20 A joint confidence region for the parameters, i.e. for the elements of formula_0, is then bounded by the ellipsoid given by: formula_21 Here "F" represents the percentage point of the "F"-distribution and the quantities "p" and "n-p" are the degrees of freedom which are the parameters of this distribution. Nonlinear problems. Confidence regions can be defined for any probability distribution. The experimenter can choose the significance level and the shape of the region, and then the size of the region is determined by the probability distribution. A natural choice is to use as a boundary a set of points with constant formula_22 (chi-squared) values. One approach is to use a linear approximation to the nonlinear model, which may be a close approximation in the vicinity of the solution, and then apply the analysis for a linear problem to find an approximate confidence region. This may be a reasonable approach if the confidence region is not very large and the second derivatives of the model are also not very large. Bootstrapping approaches can also be used. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
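As an illustrative sketch of how the ellipsoidal region from the section on independent, identically normally-distributed errors can be checked numerically, the following Python code tests whether a candidate parameter vector lies inside the joint confidence region. It assumes NumPy and SciPy are available; the function name, the significance level, and the simulated data are arbitrary choices made for the example.

```python
import numpy as np
from scipy.stats import f as f_dist

def in_joint_confidence_region(X, Y, b, alpha=0.05):
    """Return True if the candidate parameter vector b lies inside the joint
    100*(1 - alpha)% confidence ellipsoid
    (beta_hat - b)' X'X (beta_hat - b) <= p * s^2 * F_{1-alpha}(p, n - p)."""
    n, p = X.shape
    beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta_hat
    s2 = resid @ resid / (n - p)                 # unbiased estimate of the error variance
    d = beta_hat - b
    lhs = d @ (X.T @ X) @ d
    rhs = p * s2 * f_dist.ppf(1 - alpha, p, n - p)
    return lhs <= rhs

# Simulated straight-line data with intercept 1 and slope 2:
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
X = np.column_stack([np.ones_like(x), x])
Y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(x.size)
# The true parameters lie inside the 95% region in about 95% of random repetitions:
print(in_joint_confidence_region(X, Y, b=np.array([1.0, 2.0])))
```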
[ { "math_id": 0, "text": "\\boldsymbol{\\beta}" }, { "math_id": 1, "text": "\\mathbf{Y} = \\mathbf{X}\\boldsymbol{\\beta} + \\boldsymbol{\\varepsilon}" }, { "math_id": 2, "text": "\\boldsymbol{\\varepsilon}" }, { "math_id": 3, "text": "\\sigma^2" }, { "math_id": 4, "text": " (\\boldsymbol{\\hat{\\beta}} - \\mathbf{b})^\\operatorname{T} \\mathbf{X}^\\operatorname{T} \\mathbf{X}(\\boldsymbol{\\hat{\\beta}} - \\mathbf{b}) \\le ps^2 F_{1 - \\alpha}(p,\\nu) ," }, { "math_id": 5, "text": "\\boldsymbol{\\beta}," }, { "math_id": 6, "text": "\\boldsymbol{\\hat{\\beta}}" }, { "math_id": 7, "text": "s^2=\\frac{\\varepsilon^\\operatorname{T} \\varepsilon}{n - p}." }, { "math_id": 8, "text": " \\nu = n - p" }, { "math_id": 9, "text": "\\alpha" }, { "math_id": 10, "text": "X^\\operatorname{T} " }, { "math_id": 11, "text": "X" }, { "math_id": 12, "text": " (\\boldsymbol{\\hat{\\beta}} - \\mathbf{b})^\\operatorname{T} \\mathbf{C}_\\mathbf{\\beta}^{-1} (\\boldsymbol{\\hat{\\beta}} - \\mathbf{b}) \\le p F_{1 - \\alpha}(p,\\nu) ," }, { "math_id": 13, "text": "\\mathbf{C}_\\mathbf{\\beta} = s^2 \\left( \\mathbf{X}^\\operatorname{T} \\mathbf{X} \\right)^{-1}" }, { "math_id": 14, "text": "\\mathbf{V}\\sigma^2" }, { "math_id": 15, "text": "\\mathbf{I}" }, { "math_id": 16, "text": "\\mathbf{P}^\\prime\\mathbf{P} = \\mathbf{P}\\mathbf{P} = \\mathbf{V}" }, { "math_id": 17, "text": "\\mathbf{Z} = \\mathbf{Q}\\boldsymbol{\\beta} + \\mathbf{f} ," }, { "math_id": 18, "text": "\\mathbf{Z} = \\mathbf{P}^{-1}\\mathbf{Y}" }, { "math_id": 19, "text": "\\mathbf{Q} = \\mathbf{P}^{-1}\\mathbf{X}" }, { "math_id": 20, "text": "\\mathbf{f} = \\mathbf{P}^{-1}\\boldsymbol{\\varepsilon}" }, { "math_id": 21, "text": " (\\mathbf{b} - \\boldsymbol{\\hat{\\beta}})^\\prime \\mathbf{Q}^\\prime\\mathbf{Q}(\\mathbf{b} - \\boldsymbol{\\hat{\\beta}}) = {\\frac{p}{n - p}} (\\mathbf{Z}^\\prime\\mathbf{Z}\n- \\mathbf{b}^\\prime\\mathbf{Q}^\\prime\\mathbf{Z})F_{1 - \\alpha}(p,n-p).\n" }, { "math_id": 22, "text": "\\chi^2" } ]
https://en.wikipedia.org/wiki?curid=1040970
10409979
Paradoxes of set theory
This article contains a discussion of paradoxes of set theory. As with most mathematical paradoxes, they generally reveal surprising and counter-intuitive mathematical results, rather than actual logical contradictions within modern axiomatic set theory. Basics. Cardinal numbers. Set theory as conceived by Georg Cantor assumes the existence of infinite sets. As this assumption cannot be proved from first principles it has been introduced into axiomatic set theory by the axiom of infinity, which asserts the existence of the set N of natural numbers. Every infinite set which can be enumerated by natural numbers is the same size (cardinality) as N, and is said to be countable. Examples of countably infinite sets are the natural numbers, the even numbers, the prime numbers, and also all the rational numbers, i.e., the fractions. These sets have in common the cardinal number |N| = formula_0 (aleph-nought), a number greater than every natural number. Cardinal numbers can be defined as follows. Define two sets to "have the same size" by: there exists a bijection between the two sets (a one-to-one correspondence between the elements). Then a cardinal number is, by definition, a class consisting of "all" sets of the same size. To have the same size is an equivalence relation, and the cardinal numbers are the equivalence classes. Ordinal numbers. Besides the cardinality, which describes the size of a set, ordered sets also form a subject of set theory. The axiom of choice guarantees that every set can be well-ordered, which means that a total order can be imposed on its elements such that every nonempty subset has a first element with respect to that order. The order of a well-ordered set is described by an ordinal number. For instance, 3 is the ordinal number of the set {0, 1, 2} with the usual order 0 &lt; 1 &lt; 2; and ω is the ordinal number of the set of all natural numbers ordered the usual way. Neglecting the order, we are left with the cardinal number |N| = |ω| = formula_1. Ordinal numbers can be defined with the same method used for cardinal numbers. Define two well-ordered sets to "have the same order type" by: there exists a bijection between the two sets respecting the order: smaller elements are mapped to smaller elements. Then an ordinal number is, by definition, a class consisting of "all" well-ordered sets of the same order type. To have the same order type is an equivalence relation on the class of well-ordered sets, and the ordinal numbers are the equivalence classes. Two sets of the same order type have the same cardinality. The converse is not true in general for infinite sets: it is possible to impose different well-orderings on the set of natural numbers that give rise to different ordinal numbers. There is a natural ordering on the ordinals, which is itself a well-ordering. Given any ordinal α, one can consider the set of all ordinals less than α. This set turns out to have ordinal number α. This observation is used for a different way of introducing the ordinals, in which an ordinal is "equated" with the set of all smaller ordinals. This form of ordinal number is thus a canonical representative of the earlier form of equivalence class. Power sets. By forming all subsets of a set "S" (all possible choices of its elements), we obtain the power set "P"("S"). Georg Cantor proved that the power set is always larger than the set, i.e., |"P"("S")| &gt; |"S"|. A special case of Cantor's theorem proves that the set of all real numbers R cannot be enumerated by natural numbers. 
R is uncountable: |R| &gt; |N|. Paradoxes of the infinite sets. Instead of relying on ambiguous descriptions such as "that which cannot be enlarged" or "increasing without bound", set theory provides definitions for the term infinite set to give an unambiguous meaning to phrases such as "the set of all natural numbers is infinite". Just as for finite sets, the theory makes further definitions which allow us to consistently compare two infinite sets as regards whether one set is "larger than", "smaller than", or "the same size as" the other. But not every intuition regarding the size of finite sets applies to the size of infinite sets, leading to various apparently paradoxical results regarding enumeration, size, measure and order. Paradoxes of enumeration. Before set theory was introduced, the notion of the "size" of a set had been problematic. It had been discussed by Galileo Galilei and Bernard Bolzano, among others. Are there as many natural numbers as squares of natural numbers when measured by the method of enumeration? By defining the notion of the size of a set in terms of its "cardinality", the issue can be settled. Since there is a bijection between the two sets involved, this follows in fact directly from the definition of the cardinality of a set. See Hilbert's paradox of the Grand Hotel for more on paradoxes of enumeration. "Je le vois, mais je ne crois pas". "I see it but I don't believe," Cantor wrote to Richard Dedekind after proving that the set of points of a square has the same cardinality as that of the points on just an edge of the square: the cardinality of the continuum. This demonstrates that the "size" of sets as defined by cardinality alone is not the only useful way of comparing sets. Measure theory provides a more nuanced theory of size that conforms to our intuition that length and area are incompatible measures of size. The evidence strongly suggests that Cantor was quite confident in the result itself and that his comment to Dedekind refers instead to his then-still-lingering concerns about the validity of his proof of it. Nevertheless, Cantor's remark would also serve nicely to express the surprise that so many mathematicians after him have experienced on first encountering a result that is so counter-intuitive. Paradoxes of well-ordering. In 1904 Ernst Zermelo proved by means of the axiom of choice (which was introduced for this reason) that every set can be well-ordered. In 1963 Paul J. Cohen showed that in Zermelo–Fraenkel set theory without the axiom of choice it is not possible to prove the existence of a well-ordering of the real numbers. However, the ability to well order any set allows certain constructions to be performed that have been called paradoxical. One example is the Banach–Tarski paradox, a theorem widely considered to be nonintuitive. It states that it is possible to decompose a ball of a fixed radius into a finite number of pieces and then move and reassemble those pieces by ordinary translations and rotations (with no scaling) to obtain two copies from the one original copy. The construction of these pieces requires the axiom of choice; the pieces are not simple regions of the ball, but complicated subsets. Paradoxes of the Supertask. In set theory, an infinite set is not considered to be created by some mathematical process such as "adding one element" that is then carried out "an infinite number of times". 
Instead, a particular infinite set (such as the set of all natural numbers) is said to already exist, "by fiat", as an assumption or an axiom. Given this infinite set, other infinite sets are then proven to exist as well, as a logical consequence. But it is still a natural philosophical question to contemplate some physical action that actually completes after an infinite number of discrete steps; and the interpretation of this question using set theory gives rise to the paradoxes of the supertask. The diary of Tristram Shandy. Tristram Shandy, the hero of a novel by Laurence Sterne, writes his autobiography so conscientiously that it takes him one year to lay down the events of one day. If he is mortal he can never terminate; but if he lived forever then no part of his diary would remain unwritten, for to each day of his life a year devoted to that day's description would correspond. The Ross-Littlewood paradox. An increased version of this type of paradox shifts the infinitely remote finish to a finite time. Fill a huge reservoir with balls enumerated by numbers 1 to 10 and take off ball number 1. Then add the balls enumerated by numbers 11 to 20 and take off number 2. Continue to add balls enumerated by numbers 10"n" - 9 to 10"n" and to remove ball number "n" for all natural numbers "n" = 3, 4, 5, ... Let the first transaction last half an hour, let the second transaction last quarter an hour, and so on, so that all transactions are finished after one hour. Obviously the set of balls in the reservoir increases without bound. Nevertheless, after one hour the reservoir is empty because for every ball the time of removal is known. The paradox is further increased by the significance of the removal sequence. If the balls are not removed in the sequence 1, 2, 3, ... but in the sequence 1, 11, 21, ... after one hour infinitely many balls populate the reservoir, although the same amount of material as before has been moved. Paradoxes of proof and definability. For all its usefulness in resolving questions regarding infinite sets, naive set theory has some fatal flaws. In particular, it is prey to logical paradoxes such as those exposed by Russell's paradox. The discovery of these paradoxes revealed that not all sets which can be described in the language of naive set theory can actually be said to exist without creating a contradiction. The 20th century saw a resolution to these paradoxes in the development of the various axiomatizations of set theories such as ZFC and NBG in common use today. However, the gap between the very formalized and symbolic language of these theories and our typical informal use of mathematical language results in various paradoxical situations, as well as the philosophical question of exactly what it is that such formal systems actually propose to be talking about. Early paradoxes: the set of all sets. In 1897 the Italian mathematician Cesare Burali-Forti discovered that there is no set containing all ordinal numbers. As every ordinal number is defined by a set of smaller ordinal numbers, the well-ordered set Ω of all ordinal numbers (if it exists) fits the definition and is itself an ordinal. On the other hand, no ordinal number can contain itself, so Ω cannot be an ordinal. Therefore, the set of all ordinal numbers cannot exist. By the end of the 19th century Cantor was aware of the non-existence of the set of all cardinal numbers and the set of all ordinal numbers. 
In letters to David Hilbert and Richard Dedekind he wrote about inconsistent sets, the elements of which cannot be thought of as being all together, and he used this result to prove that every consistent set has a cardinal number. After all this, the version of the "set of all sets" paradox conceived by Bertrand Russell in 1903 led to a serious crisis in set theory. Russell recognized that the statement "x" = "x" is true for every set, and thus the set of all sets is defined by {"x" | "x" = "x"}. In 1906 he constructed several paradox sets, the most famous of which is the set of all sets which do not contain themselves. Russell himself explained this abstract idea by means of some very concrete pictures. One example, known as the Barber paradox, states: The male barber who shaves all and only men who do not shave themselves has to shave himself only if he does not shave himself. There are close similarities between Russell's paradox in set theory and the Grelling–Nelson paradox, which demonstrates a paradox in natural language. Paradoxes by change of language. König's paradox. In 1905, the Hungarian mathematician Julius König published a paradox based on the fact that there are only countably many finite definitions. If we imagine the real numbers as a well-ordered set, those real numbers which can be finitely defined form a subset. Hence in this well-order there should be a first real number that is not finitely definable. This is paradoxical, because this real number has just been finitely defined by the last sentence. This leads to a contradiction in naive set theory. This paradox is avoided in axiomatic set theory. Although it is possible to represent a proposition about a set as a set, by a system of codes known as Gödel numbers, there is no formula formula_2 in the language of set theory which holds exactly when formula_3 is a code for a finite proposition about a set, formula_4 is a set, and formula_3 holds for formula_4. This result is known as Tarski's indefinability theorem; it applies to a wide class of formal systems including all commonly studied axiomatizations of set theory. Richard's paradox. In the same year the French mathematician Jules Richard used a variant of Cantor's diagonal method to obtain another contradiction in naive set theory. Consider the set "A" of all finite agglomerations of words. The set "E" of all finite definitions of real numbers is a subset of "A". As "A" is countable, so is "E". Let "p" be the "n"th decimal of the "n"th real number defined by the set "E"; we form a number "N" having zero for the integral part and "p" + 1 for the "n"th decimal if "p" is not equal either to 8 or 9, and unity if "p" is equal to 8 or 9. This number "N" is not defined by the set "E" because it differs from any finitely defined real number, namely from the "n"th number by the "n"th digit. But "N" has been defined by a finite number of words in this paragraph. It should therefore be in the set "E". That is a contradiction. As with König's paradox, this paradox cannot be formalized in axiomatic set theory because it requires the ability to tell whether a description applies to a particular set (or, equivalently, to tell whether a formula is actually the definition of a single set). Paradox of Löwenheim and Skolem. Based upon work of the German mathematician Leopold Löwenheim (1915) the Norwegian logician Thoralf Skolem showed in 1922 that every consistent theory of first-order predicate calculus, such as set theory, has an at most countable model. 
However, Cantor's theorem proves that there are uncountable sets. The root of this seeming paradox is that the countability or noncountability of a set is not always absolute, but can depend on the model in which the cardinality is measured. It is possible for a set to be uncountable in one model of set theory but countable in a larger model (because the bijections that establish countability are in the larger model but not the smaller one). Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\aleph_0" }, { "math_id": 1, "text": " \\aleph_0" }, { "math_id": 2, "text": "\\varphi(a,x)" }, { "math_id": 3, "text": "a" }, { "math_id": 4, "text": "x" } ]
https://en.wikipedia.org/wiki?curid=10409979
10410127
Light scattering by particles
Process by which dust, particulates, etc. scatter light Light scattering by particles is the process by which small particles (e.g. ice crystals, dust, atmospheric particulates, cosmic dust, and blood cells) scatter light causing optical phenomena such as the blue color of the sky, and halos. Maxwell's equations are the basis of theoretical and computational methods describing light scattering, but since exact solutions to Maxwell's equations are only known for selected particle geometries (such as spherical), light scattering by particles is a branch of computational electromagnetics dealing with electromagnetic radiation scattering and absorption by particles. In case of geometries for which analytical solutions are known (such as spheres, cluster of spheres, infinite cylinders), the solutions are typically calculated in terms of infinite series. In case of more complex geometries and for inhomogeneous particles the original Maxwell's equations are discretized and solved. Multiple-scattering effects of light scattering by particles are treated by radiative transfer techniques (see, e.g. atmospheric radiative transfer codes). The relative size of a scattering particle is defined by its size parameter x, which is the ratio of its characteristic dimension to its wavelength: formula_0 Exact computational methods. Finite-difference time-domain method. The FDTD method belongs in the general class of grid-based differential time-domain numerical modeling methods. The time-dependent Maxwell's equations (in partial differential form) are discretized using central-difference approximations to the space and time partial derivatives. The resulting finite-difference equations are solved in either software or hardware in a leapfrog manner: the electric field vector components in a volume of space are solved at a given instant in time; then the magnetic field vector components in the same spatial volume are solved at the next instant in time; and the process is repeated over and over again until the desired transient or steady-state electromagnetic field behavior is fully evolved. T-matrix. The technique is also known as null field method and extended boundary technique method (EBCM). Matrix elements are obtained by matching boundary conditions for solutions of Maxwell equations. The incident, transmitted, and scattered field are expanded into spherical vector wave functions. Computational approximations. Mie approximation. Scattering from any spherical particles with arbitrary size parameter is explained by the Mie theory. Mie theory, also called Lorenz-Mie theory or Lorenz-Mie-Debye theory, is a complete analytical solution of Maxwell's equations for the scattering of electromagnetic radiation by spherical particles (Bohren and Huffman, 1998). For more complex shapes such as coated spheres, multispheres, spheroids, and infinite cylinders there are extensions which express the solution in terms of infinite series. There are codes available to study light scattering in Mie approximation for spheres, layered spheres, and multiple spheres and cylinders. Discrete dipole approximation. There are several techniques for computing scattering of radiation by particles of arbitrary shape. The discrete dipole approximation is an approximation of the continuum target by a finite array of polarizable points. The points acquire dipole moments in response to the local electric field. The dipoles of these points interact with one another via their electric fields. 
There are DDA codes available to calculate light scattering properties in DDA approximation. Approximate methods. Rayleigh scattering. Rayleigh scattering regime is the scattering of light, or other electromagnetic radiation, by particles much smaller than the wavelength of the light. Rayleigh scattering can be defined as scattering in small size parameter regime formula_1. Geometric optics (ray-tracing). Ray tracing techniques can approximate light scattering by not only spherical particles but ones of any specified shape (and orientation) so long as the size and critical dimensions of a particle are much larger than the wavelength of light. The light can be considered as a collection of rays whose widths are much larger than the wavelength but small compared to the particle itself. Each ray hitting the particle may undergo (partial) reflection and/or refraction. These rays exit in directions thereby computed with their full power or (when partial reflection is involved) with the incident power divided among two (or more) exiting rays. Just as with lenses and other optical components, ray tracing determines the light emanating from a single scatterer, and combining that result statistically for a large number of randomly oriented and positioned scatterers, one can describe atmospheric optical phenomena such as rainbows due to water droplets and halos due to ice crystals. There are atmospheric optics ray-tracing codes available.
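As a rough numerical illustration of the size parameter defined above and of how it selects between these computational approaches, the following C sketch classifies a particle by x = 2πr/λ. It is not taken from any published scattering code, and the cross-over thresholds (0.1 and 100) are common rules of thumb rather than sharp physical boundaries.

#include <stdio.h>

static const double PI = 3.14159265358979323846;

/* Size parameter x = 2*pi*r / lambda; r and lambda in the same length unit. */
static double size_parameter(double radius, double wavelength)
{
    return 2.0 * PI * radius / wavelength;
}

int main(void)
{
    double r = 0.05e-6;       /* illustrative aerosol particle radius, 0.05 micron */
    double lambda = 0.55e-6;  /* green light, 550 nm */
    double x = size_parameter(r, lambda);

    if (x < 0.1)              /* x << 1: Rayleigh regime */
        printf("x = %.3f: Rayleigh scattering is a good approximation\n", x);
    else if (x < 100.0)       /* x on the order of 1: exact methods needed */
        printf("x = %.3f: use Mie theory or another exact method\n", x);
    else                      /* x >> 1: particle much larger than the wavelength */
        printf("x = %.3f: geometric optics (ray tracing) is adequate\n", x);
    return 0;
}

For the example values above, x is about 0.57, so an exact method such as Mie theory would be the appropriate choice.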
[ { "math_id": 0, "text": "x = \\frac{2 \\pi r} {\\lambda}." }, { "math_id": 1, "text": " x \\ll 1 " } ]
https://en.wikipedia.org/wiki?curid=10410127
10410181
Strain scanning
In physics, strain scanning is the general name for various techniques that aim to measure the strain in a crystalline material through its effect on the diffraction of X-rays and neutrons. In these methods the material itself is used as a form of strain gauge. The various methods are derived from powder diffraction but look for the small shifts in the diffraction spectrum that indicate a change in a lattice parameter instead of trying to derive unknown structural information. By comparing the lattice parameter to a known reference value it is possible to determine the strain. If sufficient measurements are made in different directions it is possible to derive the strain tensor. If the elastic properties of the material are known, one can then compute the stress tensor. Principles. At its most basic level strain scanning uses shifts in Bragg diffraction peaks to determine the strain. Strain is defined as the change in length (shift in lattice parameter, d) divided by the original length (unstrained lattice parameter, d0). In diffraction based strain scanning this becomes the change in peak position divided by the original position. The precise equation is presented in terms of diffraction angle, energy, or - for relatively slow moving neutrons - time of flight: formula_0 Methods. The details of the technique are heavily influenced by the type of radiation used since lab X-rays, synchrotron X-rays and neutrons have very different properties. Nevertheless, there is considerable overlap between the various methods. References. <templatestyles src="Reflist/styles.css" />
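As a minimal numerical sketch of the peak-shift relation above for the angle-dispersive case (the wavelength and angles below are made-up example values; the conversion from Bragg angle to lattice spacing uses Bragg's law, λ = 2d sin θ, which the article takes as background):

#include <stdio.h>
#include <math.h>

static const double PI = 3.14159265358979323846;

/* Lattice spacing from Bragg's law: lambda = 2 d sin(theta). */
static double d_spacing(double lambda, double theta_deg)
{
    return lambda / (2.0 * sin(theta_deg * PI / 180.0));
}

int main(void)
{
    double lambda     = 1.54e-10; /* illustrative X-ray wavelength, ~1.54 angstrom */
    double theta_ref  = 22.000;   /* Bragg angle of the stress-free reference, degrees */
    double theta_meas = 21.985;   /* measured Bragg angle in the strained sample */

    double d0 = d_spacing(lambda, theta_ref);
    double d  = d_spacing(lambda, theta_meas);
    double strain = (d - d0) / d0;  /* epsilon = delta d / d0 */

    printf("strain = %.2e (positive = tensile)\n", strain);
    return 0;
}

A shift of the peak to a lower angle corresponds to a larger lattice spacing and hence, as here, a small tensile strain of a few times 10^-4.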
[ { "math_id": 0, "text": "\\epsilon = \\frac{\\Delta d}{d_0} = \\frac{\\Delta \\theta}{\\theta_0} = \\frac{\\Delta E}{E_0} = \\frac{\\Delta t}{t_0} \\, " } ]
https://en.wikipedia.org/wiki?curid=10410181
1041063
Kármán vortex street
Repeating pattern of swirling vortices In fluid dynamics, a Kármán vortex street (or a von Kármán vortex street) is a repeating pattern of swirling vortices, caused by a process known as vortex shedding, which is responsible for the unsteady separation of flow of a fluid around blunt bodies. It is named after the engineer and fluid dynamicist Theodore von Kármán, and is responsible for such phenomena as the "singing" of suspended telephone or power lines and the vibration of a car antenna at certain speeds. Mathematical modeling of von Kármán vortex street can be performed using different techniques including but not limited to solving the full Navier-Stokes equations with k-epsilon, SST, k-omega and Reynolds stress, and large eddy simulation (LES) turbulence models, by numerically solving some dynamic equations such as the Ginzburg–Landau equation, or by use of a bicomplex variable. Analysis. A vortex street forms only at a certain range of flow velocities, specified by a range of Reynolds numbers ("Re"), typically above a limiting "Re" value of about 90. The ("global") Reynolds number for a flow is a measure of the ratio of inertial to viscous forces in the flow of a fluid around a body or in a channel, and may be defined as a nondimensional parameter of the global speed of the whole fluid flow: formula_0 where: formula_5 between: For common flows (the ones which can usually be considered as incompressible or isothermal), the kinematic viscosity is everywhere uniform over all the flow field and constant in time, so there is no choice on the viscosity parameter, which becomes naturally the kinematic viscosity of the fluid being considered at the temperature being considered. On the other hand, the reference length is always an arbitrary parameter, so particular attention should be put when comparing flows around different obstacles or in channels of different shapes: the global Reynolds numbers should be referred to the same reference length. This is actually the reason for which the most precise sources for airfoil and channel flow data specify the reference length at the Reynolds number. The reference length can vary depending on the analysis to be performed: for a body with circle sections such as circular cylinders or spheres, one usually chooses the diameter; for an airfoil, a generic non-circular cylinder or a bluff body or a revolution body like a fuselage or a submarine, it is usually the profile chord or the profile thickness, or some other given widths that are in fact stable design inputs; for flow channels usually the hydraulic diameter about which the fluid is flowing. For an aerodynamic profile the reference length depends on the analysis. In fact, the profile chord is usually chosen as the reference length also for aerodynamic coefficient for wing sections and thin profiles in which the primary target is to maximize the lift coefficient or the lift/drag ratio (i.e. as usual in thin airfoil theory, one would employ the "chord Reynolds" as the flow speed parameter for comparing different profiles). On the other hand, for fairings and struts the given parameter is usually the dimension of internal structure to be streamlined (let us think for simplicity it is a beam with circular section), and the main target is to minimize the drag coefficient or the drag/lift ratio. The main design parameter which becomes naturally also a reference length is therefore the profile thickness (the profile dimension or area perpendicular to the flow direction), rather than the profile chord. 
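As a minimal numerical illustration of the global Reynolds number defined above (the speed, diameter and air viscosity below are arbitrary example values, not data from the article):

#include <stdio.h>

/* Global Reynolds number Re_L = U * L / nu0. */
static double reynolds(double speed, double ref_length, double kin_viscosity)
{
    return speed * ref_length / kin_viscosity;
}

int main(void)
{
    double U  = 10.0;    /* free-stream speed, m/s */
    double d  = 0.05;    /* cylinder diameter taken as the reference length, m */
    double nu = 1.5e-5;  /* kinematic viscosity of air near room temperature, m^2/s */

    printf("Re_d = %.0f\n", reynolds(U, d, nu)); /* about 33000 for these values */
    return 0;
}

For these values the diameter-based Reynolds number is well above the limiting value of about 90 mentioned above, so a vortex street would be expected.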
The range of "Re" values varies with the size and shape of the body from which the eddies are shed, as well as with the kinematic viscosity of the fluid. For the wake of a circular cylinder, for which the reference length is conventionally the diameter "d" of the circular cylinder, the lower limit of this range is "Re" ≈ 47. Eddies are shed continuously from each side of the circle boundary, forming rows of vortices in its wake. The alternation leads to the core of a vortex in one row being opposite the point midway between two vortex cores in the other row, giving rise to the distinctive pattern shown in the picture. Ultimately, the energy of the vortices is consumed by viscosity as they move further down stream, and the regular pattern disappears. Above the "Re " value of 188.5, the flow becomes three-dimensional, with periodic variation along the cylinder. Above "Re" on the order of 105 at the drag crisis, vortex shedding becomes irregular and turbulence sets in. When a single vortex is shed, an asymmetrical flow pattern forms around the body and changes the pressure distribution. This means that the alternate shedding of vortices can create periodic lateral (sideways) forces on the body in question, causing it to vibrate. If the vortex shedding frequency is similar to the natural frequency of a body or structure, it causes resonance. It is this forced vibration that, at the correct frequency, causes suspended telephone or power lines to "sing" and the antenna on a car to vibrate more strongly at certain speeds. In meteorology. The flow of atmospheric air over obstacles such as islands or isolated mountains sometimes gives birth to von Kármán vortex streets. When a cloud layer is present at the relevant altitude, the streets become visible. Such cloud layer vortex streets have been photographed from satellites. The vortex street can reach over from the obstacle and the diameter of the vortices are normally . Engineering problems. In low turbulence, tall buildings can produce a Kármán street, so long as the structure is uniform along its height. In urban areas where there are many other tall structures nearby, the turbulence produced by these can prevent the formation of coherent vortices. Periodic crosswind forces set up by vortices along object's sides can be highly undesirable, due to the vortex-induced vibrations caused, which can damage the structure, hence it is important for engineers to account for the possible effects of vortex shedding when designing a wide range of structures, from submarine periscopes to industrial chimneys and skyscrapers. For monitoring such engineering structures, the efficient measurements of von Kármán streets can be performed using smart sensing algorithms such as compressive sensing. Even more serious instability can be created in concrete cooling towers, especially when built together in clusters. Vortex shedding caused the collapse of three towers at Ferrybridge Power Station C in 1965 during high winds. The failure of the original Tacoma Narrows Bridge was originally attributed to excessive vibration due to vortex shedding, but was actually caused by aeroelastic flutter. Kármán turbulence is also a problem for airplanes, especially when landing. Solutions. To prevent vortex shedding and mitigate the unwanted vibration of cylindrical bodies is the use of a tuned mass damper (TMD). A tuned mass damper is a device consisting of a mass-spring system that is specifically designed and tuned to counteract the vibrations induced by vortex shedding. 
When a tuned mass damper is installed on a cylindrical structure, such as a tall chimney or mast, it helps to reduce the vibration amplitudes caused by vortex shedding. The tuned mass damper consists of a mass that is attached to the structure through springs or dampers. In many cases, the spring is replaced by suspending the mass on cables such that it forms a pendulum system with the same resonance frequency. The mass is carefully tuned to have a natural frequency that matches the dominant frequency of the vortex shedding. As the structure is subjected to vortex shedding-induced vibrations, the tuned mass damper oscillates in an out-of-phase motion with the structure. This counteracts the vibrations, reducing their amplitudes and minimizing the potential for resonance and structural damage. The effectiveness of a tuned mass damper in mitigating vortex shedding-induced vibrations depends on factors such as the mass of the damper, its placement on the structure, and the tuning of the system. Engineers carefully analyze the structural dynamics and characteristics of the vortex shedding phenomenon to determine the optimal parameters for the tuned mass damper. Another solution to prevent the unwanted vibration of such cylindrical bodies is a longitudinal fin that can be fitted on the downstream side, which, provided it is longer than the diameter of the cylinder, prevents the eddies from interacting, and consequently they remain attached. Obviously, for a tall building or mast, the relative wind could come from any direction. For this reason, helical projections resembling large screw threads are sometimes placed at the top, which effectively create asymmetric three-dimensional flow, thereby discouraging the alternate shedding of vortices; this is also found in some car antennas. Another countermeasure with tall buildings is using variation in the diameter with height, such as tapering - that prevents the entire building from being driven at the same frequency. Formula. This formula generally holds true for the range 250 &lt; Re"d" &lt; 200000: formula_8 where: formula_9 This dimensionless parameter St is known as the Strouhal number and is named after the Czech physicist, Vincenc Strouhal (1850–1922) who first investigated the steady humming or singing of telegraph wires in 1878. History. Although named after Theodore von Kármán, he acknowledged that the vortex street had been studied earlier by Arnulph Mallock and Henri Bénard. Kármán tells the story in his book "Aerodynamics": &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;[...] Prandtl had a doctoral candidate, Karl Hiemenz, to whom he gave the task of constructing a water channel in which he could observe the separation of the flow behind a cylinder. The object was to check experimentally the separation point calculated by means of the boundary-layer theory. For this purpose, it was first necessary to know the pressure distribution around the cylinder in a steady flow. Much to his surprise, Hiemenz found that the flow in his channel oscillated violently. When he reported this to Prandtl, the latter told him: 'Obviously your cylinder is not circular.' However, even after very careful machining of the cylinder, the flow continued to oscillate. Then Hiemenz was told that possibly the channel was not symmetric, and he started to adjust it. I was not concerned with this problem, but every morning when I came in the laboratory I asked him, 'Herr Hiemenz, is the flow steady now?' He answered very sadly, 'It always oscillates.' 
In his autobiography, von Kármán described how his discovery was inspired by an Italian painting of St Christopher carrying the child Jesus whilst wading through water. Vortices could be seen in the water, and von Kármán noted that "The problem for historians may have been why Christopher was carrying Jesus through the water. For me it was why the vortices". It has been suggested by researchers that the painting is one from the 14th century that can be found in the museum of the San Domenico church in Bologna. References. <templatestyles src="Reflist/styles.css" />
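As a numerical sketch of the empirical Strouhal-number relation given in the Formula section above (the wind speed and cylinder diameter are arbitrary example values, and the air viscosity is an assumed typical figure):

#include <stdio.h>

/* Empirical Strouhal number for a circular cylinder,
   quoted above as valid roughly for 250 < Re_d < 200000. */
static double strouhal(double re_d)
{
    return 0.198 * (1.0 - 19.7 / re_d);
}

int main(void)
{
    double U  = 10.0;    /* wind speed, m/s */
    double d  = 0.2;     /* cylinder diameter, m */
    double nu = 1.5e-5;  /* kinematic viscosity of air, m^2/s (assumed) */

    double re = U * d / nu;  /* diameter-based Reynolds number, ~1.3e5 here */
    double st = strouhal(re);
    double f  = st * U / d;  /* shedding frequency from St = f d / U */

    printf("Re_d = %.0f, St = %.3f, f = %.1f Hz\n", re, st, f);
    return 0;
}

For these values the shedding frequency comes out near 10 Hz; if that were close to a natural frequency of the structure, the resonance problems described earlier could follow.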
[ { "math_id": 0, "text": "\\mathrm{Re}_L=\\frac{U L}{\\nu_0}" }, { "math_id": 1, "text": "U" }, { "math_id": 2, "text": "U_\\infty" }, { "math_id": 3, "text": "L" }, { "math_id": 4, "text": "\\nu_0" }, { "math_id": 5, "text": "\\nu_0 =\\frac{\\mu_0} {\\rho_0} " }, { "math_id": 6, "text": "\\rho_0" }, { "math_id": 7, "text": "\\mu_0" }, { "math_id": 8, "text": "\\text{St} = 0.198\\left (1-\\frac{19.7}{\\text{Re}_d}\\right )\\ " }, { "math_id": 9, "text": "\\text{St}=\\frac {f d}{U} " } ]
https://en.wikipedia.org/wiki?curid=1041063
104113
G.711
ITU-T recommendation G.711 is a narrowband audio codec originally designed for use in telephony that provides toll-quality audio at 64 kbit/s. It is an ITU-T standard (Recommendation) for audio encoding, titled Pulse code modulation (PCM) of voice frequencies released for use in 1972. G.711 passes audio signals in the frequency band of 300–3400 Hz and samples them at the rate of 8000 Hz, with the tolerance on that rate of 50 parts per million (ppm). It uses one of two different logarithmic companding algorithms: μ-law, which is used primarily in North America and Japan, and A-law, which is in use in most other countries outside North America. Each companded sample is quantized as 8 bits, resulting in a 64 kbit/s bit rate. G.711 is a required standard in many technologies, such as in the H.320 and H.323 standards. It can also be used for fax communication over IP networks (as defined in T.38 specification). Two enhancements to G.711 have been published: G.711.0 utilizes lossless data compression to reduce the bandwidth usage and G.711.1 increases audio quality by increasing bandwidth. Types. G.711 defines two main companding algorithms, the μ-law algorithm and A-law algorithm. Both are logarithmic, but A-law was specifically designed to be simpler for a computer to process. The standard also defines a sequence of repeating code values which defines the power level of 0 dB. The μ-law and A-law algorithms encode 14-bit and 13-bit signed linear PCM samples (respectively) to logarithmic 8-bit samples. Thus, the G.711 encoder will create a 64 kbit/s bitstream for a signal sampled at 8 kHz. G.711 μ-law tends to give more resolution to higher range signals while G.711 A-law provides more quantization levels at lower signal levels. The terms "PCMU", "G711u" and "G711MU" are also used for G.711 μ-law, and "PCMA" and "G711A" for G.711 A-law. A-law. A-law encoding thus takes a 13-bit signed linear audio sample as input and converts it to an 8 bit value as follows: Where is the sign bit, codice_0 is its inverse (i.e. positive values are encoded with MSB =  = 1), and bits marked are discarded. Note that the first column of the table uses different representation of negative values than the third column. So for example, input decimal value −21 is represented in binary after bit inversion as 1000000010100, which maps to 00001010 (according to the first row of the table). When decoding, this maps back to 1000000010101, which is interpreted as output value −21 in decimal. Input value +52 (0000000110100 in binary) maps to 10011010 (according to the second row), which maps back to 0000000110101 (+53 in decimal). This can be seen as a floating-point number with 4 bits of mantissa m (equivalent to a 5-bit precision), 3 bits of exponent e and 1 sign bit s, formatted as codice_1 with the decoded linear value y given by formula formula_0 which is a 13-bit signed integer in the range ±1 to ±(212 − 26). Note that no compressed code decodes to zero due to the addition of 0.5 (half of a quantization step). In addition, the standard specifies that all resulting even bits (LSB is even) are inverted before the octet is transmitted. This is to provide plenty of 0/1 transitions to facilitate the clock recovery process in the PCM receivers. Thus, a silent A-law encoded PCM channel has the 8 bit samples coded 0xD5 instead of 0x80 in the octets. When data is sent over E0 (G.703), MSB (sign) is sent first and LSB is sent last. 
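A per-sample compressor can be sketched directly from the mapping just described. The routine below is an illustration only, not the ITU-T reference code, and the function name is made up; note that the worked examples above (00001010, 10011010) are the code values before the final toggling of the even bits with 0x55.

/* Illustrative A-law compressor (not the ITU-T reference implementation).
   Input: 13-bit signed linear sample, -4096..4095.
   Output: the octet as transmitted (even bits already toggled with 0x55). */
unsigned char alaw_compress_sample(short sample)
{
    unsigned char sign = (sample >= 0) ? 0x80 : 0x00; /* positive values get MSB = 1 */
    int mag = (sample >= 0) ? sample : -sample;
    int iexp, mant;

    if (mag > 4095)
        mag = 4095;                /* clamp -4096 to the largest coded magnitude */

    iexp = 7;
    while (iexp > 0 && mag < (16 << iexp))
        iexp--;                    /* find the chord: 16 * 2^iexp <= mag, or iexp = 0 */

    if (iexp == 0)
        mant = mag >> 1;           /* smallest chord: step size 2, no leading '1' */
    else
        mant = (mag >> iexp) - 16; /* strip the implied leading '1' */

    return (unsigned char)((sign | (iexp << 4) | mant) ^ 0x55);
}

With this sketch an input of 0 gives 0xD5, matching the silent-channel value quoted above, and inputs of −21 and +52 produce the worked example codes prior to the final XOR with 0x55.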
ITU-T STL defines the algorithm for decoding as follows (it puts the decoded values in the 13 most significant bits of the 16-bit output data type).

void alaw_expand(lseg, logbuf, linbuf)
  long lseg;
  short *linbuf;
  short *logbuf;
{
  short ix, mant, iexp;
  long n;

  for (n = 0; n < lseg; n++)
  {
    ix = logbuf[n] ^ (0x0055);     /* re-toggle toggled bits */

    ix &= (0x007F);                /* remove sign bit */
    iexp = ix >> 4;                /* extract exponent */
    mant = ix & (0x000F);          /* now get mantissa */
    if (iexp > 0)
      mant = mant + 16;            /* add leading '1', if exponent > 0 */

    mant = (mant << 4) + (0x0008); /* now mantissa left justified and */
                                   /* 1/2 quantization step added */
    if (iexp > 1)                  /* now left shift according exponent */
      mant = mant << (iexp - 1);

    linbuf[n] = logbuf[n] > 127    /* invert, if negative sample */
                ? mant : -mant;
  }
}

See also "ITU-T Software Tool Library 2009 User's manual" that can be found at. μ-law. The μ-law (sometimes referred to as ulaw, G.711Mu, or G.711μ) encoding takes a 14-bit signed linear audio sample in two's complement representation as input, inverts all bits after the sign bit if the value is negative, adds 33 (binary 100001) and converts it to an 8 bit value as follows: Where is the sign bit, and bits marked are discarded. In addition, the standard specifies that the encoded bits are inverted before the octet is transmitted. Thus, a silent μ-law encoded PCM channel has the 8 bit samples transmitted 0xFF instead of 0x00 in the octets. Adding 33 is necessary so that all values fall into a compression group and it is subtracted back when decoding. Breaking the encoded value formatted as codice_1 into 4 bits of mantissa m, 3 bits of exponent e and 1 sign bit s, the decoded linear value y is given by formula formula_1 which is a 14-bit signed integer in the range ±0 to ±8031. Note that 0 is transmitted as 0xFF, and −1 is transmitted as 0x7F, but when received the result is 0 in both cases. G.711.0. G.711.0, also known as G.711 LLC, utilizes lossless data compression to reduce the bandwidth usage by as much as 50 percent. The "Lossless compression of G.711 pulse code modulation" standard was approved by ITU-T in September 2009. G.711.1. G.711.1 "Wideband embedded extension for G.711 pulse code modulation" is a higher-fidelity extension to G.711, ratified in 2008 and further extended in 2012. G.711.1 allows a series of enhancement layers on top of a raw G.711 core stream (Layer 0): Layer 1 codes 16-bit audio in the same 4kHz narrowband, and Layer 2 allows 8kHz wideband using MDCT; each uses a fixed 16 kbps in addition to the 64 kbps core. They may be used together or singly, and each encodes the differences from the previous layer. Ratified in 2012, Layer 3 extends Layer 2 to 16kHz "superwideband," allowing another 16 kbps for the highest frequencies, while retaining layer independence. Peak bitrate becomes 96 kbps in original G.711.1, or 112 kbps with superwideband. No internal method of identifying or separating the layers is defined, leaving it to the implementation to packetize or signal them. A decoder that doesn't understand any set of fidelity layers may ignore or drop non-core packets without affecting it, enabling graceful degradation across any G.711 (or original G.711.1) telephony system with no changes. Also ratified in 2012 was G.711.0 lossless extended to the new fidelity layers. Like G.711.0, full G.711 backward compatibility is sacrificed for efficiency, though a G.711.0 aware node may still ignore or drop layer packets it doesn't understand. Licensing.
The patents for G.711, released in 1972, have expired, so it may be used without the need for a licence.
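For comparison with the A-law routine quoted earlier, a per-sample μ-law expander can be written directly from the decoding formula given in the μ-law section above. This is an illustrative sketch, not the ITU-T reference code, and the function name is made up.

/* Illustrative mu-law expander (not the ITU-T reference implementation).
   Input: the received octet (bits are inverted on the wire, as the standard requires).
   Output: 14-bit signed linear value in the range -8031..8031. */
short ulaw_expand_sample(unsigned char octet)
{
    unsigned char code = (unsigned char)~octet; /* undo the inversion applied before transmission */
    int s = (code >> 7) & 0x01;                 /* sign bit (1 = negative) */
    int e = (code >> 4) & 0x07;                 /* 3-bit exponent */
    int m = code & 0x0F;                        /* 4-bit mantissa */
    int y = ((33 + 2 * m) << e) - 33;           /* y = (33 + 2m) * 2^e - 33 */

    return (short)(s ? -y : y);
}

Both 0xFF and 0x7F decode to 0 here, matching the note above, and the extreme octets decode to ±8031.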
[ { "math_id": 0, "text": "y = (-1)^s \\cdot (16 \\cdot \\min \\{ e, 1 \\} + m + 0.5) \\cdot 2^{\\max \\{ e, 1 \\} }," }, { "math_id": 1, "text": "y = (-1)^s \\cdot [(33 + 2m) \\cdot 2^e - 33]," } ]
https://en.wikipedia.org/wiki?curid=104113
10412
Elementary function
Mathematical function In mathematics, an elementary function is a function of a single variable (typically real or complex) that is defined as taking sums, products, roots and compositions of finitely many polynomial, rational, trigonometric, hyperbolic, and exponential functions, and their inverses (e.g., arcsin, log, or "x"1/"n"). All elementary functions are continuous on their domains. Elementary functions were introduced by Joseph Liouville in a series of papers from 1833 to 1841. An algebraic treatment of elementary functions was started by Joseph Fels Ritt in the 1930s. Many textbooks and dictionaries do not give a precise definition of the elementary functions, and mathematicians differ on it. Examples. Basic examples. Elementary functions of a single variable x include: Certain elementary functions of a single complex variable z, such as formula_8 and formula_9, may be multivalued. Additionally, certain classes of functions may be obtained by others using the final two rules. For example, the exponential function formula_10 composed with addition, subtraction, and division provides the hyperbolic functions, while initial composition with formula_11 instead provides the trigonometric functions. Composite examples. Examples of elementary functions include: The last function is equal to formula_14, the inverse cosine, in the entire complex plane. All monomials, polynomials, rational functions and algebraic functions are elementary. The absolute value function, for real formula_15, is also elementary as it can be expressed as the composition of a power and root of formula_15: formula_16. Non-elementary functions. Many mathematicians exclude non-analytic functions such as the absolute value function or discontinuous functions such as the step function, but others allow them. Some have proposed extending the set to include, for example, the Lambert W function. Some examples of functions that are "not" elementary: Closure. It follows directly from the definition that the set of elementary functions is closed under arithmetic operations, root extraction and composition. The elementary functions are closed under differentiation. They are not closed under limits and infinite sums. Importantly, the elementary functions are not closed under integration, as shown by Liouville's theorem, see nonelementary integral. The Liouvillian functions are defined as the elementary functions and, recursively, the integrals of the Liouvillian functions. Differential algebra. The mathematical definition of an elementary function, or a function in elementary form, is considered in the context of differential algebra. A differential algebra is an algebra with the extra operation of derivation (algebraic version of differentiation). Using the derivation operation new equations can be written and their solutions used in extensions of the algebra. By starting with the field of rational functions, two special types of transcendental extensions (the logarithm and the exponential) can be added to the field building a tower containing elementary functions. A differential field "F" is a field "F"0 (rational functions over the rationals Q for example) together with a derivation map "u" → ∂"u". (Here ∂"u" is a new function. Sometimes the notation "u"′ is used.) The derivation captures the properties of differentiation, so that for any two elements of the base field, the derivation is linear formula_18 and satisfies the Leibniz product rule formula_19 An element "h" is a constant if "∂h = 0". 
If the base field is over the rationals, care must be taken when extending the field to add the needed transcendental constants. A function "u" of a differential extension "F"["u"] of a differential field "F" is an elementary function over "F" if the function "u" is algebraic over "F", or is an exponential, that is, ∂"u" = "u" ∂"a" for some "a" in "F", or is a logarithm, that is, ∂"u" = ∂"a" / "a" for some "a" in "F". Notes. <templatestyles src="Reflist/styles.css" />
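As a concrete illustration (not taken from the article), the following LaTeX fragment spells out a tower of such extensions showing that the function x^x = exp(x log x) is elementary:

% A two-step tower over F_0 = Q(x), with the derivation fixed by \partial x = 1:
%   u = \log x is a logarithm over F_0, and v = e^{x \log x} is an exponential over F_1.
\begin{align}
  F_0 &= \mathbb{Q}(x), & \partial x &= 1,\\
  F_1 &= F_0(u), \quad u = \log x, & \partial u &= \frac{\partial x}{x},\\
  F_2 &= F_1(v), \quad v = e^{x \log x}, & \partial v &= v\,\partial(x \log x) = v\,(\log x + 1)\,\partial x.
\end{align}
% Hence x^x = e^{x \log x} lies in F_2 and is therefore elementary.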
[ { "math_id": 0, "text": "2,\\ \\pi,\\ e," }, { "math_id": 1, "text": "x,\\ x^2,\\ \\sqrt{x}\\ (x^\\frac{1}{2}),\\ x^\\frac{2}{3}," }, { "math_id": 2, "text": "e^x, \\ a^x" }, { "math_id": 3, "text": "\\log x, \\ \\log_a x" }, { "math_id": 4, "text": "\\sin x,\\ \\cos x,\\ \\tan x," }, { "math_id": 5, "text": "\\arcsin x,\\ \\arccos x," }, { "math_id": 6, "text": "\\sinh x,\\ \\cosh x," }, { "math_id": 7, "text": "\\operatorname{arsinh} x,\\ \\operatorname{arcosh} x," }, { "math_id": 8, "text": "\\sqrt{z}" }, { "math_id": 9, "text": "\\log z" }, { "math_id": 10, "text": "e^{z}" }, { "math_id": 11, "text": "iz" }, { "math_id": 12, "text": "\\frac{e^{\\tan x}}{1+x^2}\\sin\\left(\\sqrt{1+(\\log x)^2}\\right)" }, { "math_id": 13, "text": "-i\\log\\left(x+i\\sqrt{1-x^2}\\right) " }, { "math_id": 14, "text": "\\arccos x" }, { "math_id": 15, "text": "x" }, { "math_id": 16, "text": "|x|=\\sqrt{x^2}" }, { "math_id": 17, "text": "\\mathrm{erf}(x)=\\frac{2}{\\sqrt{\\pi}}\\int_0^x e^{-t^2}\\,dt," }, { "math_id": 18, "text": "\\partial (u + v) = \\partial u + \\partial v " }, { "math_id": 19, "text": "\\partial(u\\cdot v)=\\partial u\\cdot v+u\\cdot\\partial v\\,." } ]
https://en.wikipedia.org/wiki?curid=10412
1041204
Granular computing
Computing paradigm based on information entities ("granules") Granular computing is an emerging computing paradigm of information processing that concerns the processing of complex information entities called "information granules", which arise in the process of data abstraction and derivation of knowledge from information or data. Generally speaking, information granules are collections of entities that usually originate at the numeric level and are arranged together due to their similarity, functional or physical adjacency, indistinguishability, coherency, or the like. At present, granular computing is more a "theoretical perspective" than a coherent set of methods or principles. As a theoretical perspective, it encourages an approach to data that recognizes and exploits the knowledge present in data at various levels of resolution or scales. In this sense, it encompasses all methods which provide flexibility and adaptability in the resolution at which knowledge or information is extracted and represented. Types of granulation. As mentioned above, "granular computing" is not an algorithm or process; there is no particular method that is called "granular computing". It is rather an approach to looking at data that recognizes how different and interesting regularities in the data can appear at different levels of granularity, much as different features become salient in satellite images of greater or lesser resolution. On a low-resolution satellite image, for example, one might notice interesting cloud patterns representing cyclones or other large-scale weather phenomena, while in a higher-resolution image, one misses these large-scale atmospheric phenomena but instead notices smaller-scale phenomena, such as the interesting pattern that is the streets of Manhattan. The same is generally true of all data: At different resolutions or granularities, different features and relationships emerge. The aim of granular computing is to try to take advantage of this fact in designing more effective machine-learning and reasoning systems. There are several types of granularity that are often encountered in data mining and machine learning, and we review them below: Value granulation (discretization/quantization). One type of granulation is the quantization of variables. It is very common that in data mining or machine-learning applications the resolution of variables needs to be "decreased" in order to extract meaningful regularities. An example of this would be a variable such as "outside temperature" (temp), which in a given application might be recorded to several decimal places of precision (depending on the sensing apparatus). However, for purposes of extracting relationships between "outside temperature" and, say, "number of health-club applications" (club), it will generally be advantageous to quantize "outside temperature" into a smaller number of intervals. Motivations. There are several interrelated reasons for granulating variables in this fashion: For example, a simple learner or pattern recognition system may seek to extract regularities satisfying a conditional probability threshold such as formula_0 In the special case where formula_1 this recognition system is essentially detecting "logical implication" of the form formula_2 or, in words, "if formula_3 then formula_4". The system's ability to recognize such implications (or, in general, conditional probabilities exceeding threshold) is partially contingent on the resolution with which the system analyzes the variables. 
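As a minimal sketch of the simplest form of value granulation, equal-width binning, consider the following C routine (the data, range, and bin count are hypothetical; practical discretization is usually guided by an entropy analysis, as discussed later in this section).

#include <stdio.h>

/* Equal-width quantization of a continuous value into n_bins intervals
   covering [lo, hi]; returns a bin index in 0 .. n_bins - 1. */
static int equal_width_bin(double value, double lo, double hi, int n_bins)
{
    int bin;
    if (value <= lo) return 0;
    if (value >= hi) return n_bins - 1;
    bin = (int)((value - lo) / (hi - lo) * n_bins);
    return (bin < n_bins) ? bin : n_bins - 1;
}

int main(void)
{
    /* Hypothetical "outside temperature" readings, granulated into 4 intervals. */
    double temps[] = { -3.2, 7.9, 12.5, 18.4, 24.1, 31.7 };
    int i;
    for (i = 0; i < 6; i++)
        printf("%6.1f -> bin %d\n", temps[i], equal_width_bin(temps[i], -10.0, 40.0, 4));
    return 0;
}

Whether an interesting regularity, such as a conditional probability exceeding the threshold, is then detectable depends on the chosen number of bins, that is, on the resolution at which the variable is viewed.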
As an example of this last point, consider the feature space shown to the right. The variables may each be regarded at two different resolutions. Variable formula_5 may be regarded at a high (quaternary) resolution wherein it takes on the four values formula_6 or at a lower (binary) resolution wherein it takes on the two values formula_7 Similarly, variable formula_8 may be regarded at a high (quaternary) resolution or at a lower (binary) resolution, where it takes on the values formula_9 or formula_10 respectively. At the high resolution, there are no detectable implications of the form formula_11 since every formula_12 is associated with more than one formula_13 and thus, for all formula_14 formula_15 However, at the low (binary) variable resolution, two bilateral implications become detectable: formula_16 and formula_17, since every formula_18 occurs "iff" formula_19 and formula_20 occurs "iff" formula_21 Thus, a pattern recognition system scanning for implications of this kind would find them at the binary variable resolution, but would fail to find them at the higher quaternary variable resolution. Issues and methods. It is not feasible to exhaustively test all possible discretization resolutions on all variables in order to see which combination of resolutions yields interesting or significant results. Instead, the feature space must be preprocessed (often by an entropy analysis of some kind) so that some guidance can be given as to how the discretization process should proceed. Moreover, one cannot generally achieve good results by naively analyzing and discretizing each variable independently, since this may obliterate the very interactions that we had hoped to discover. A sample of papers that address the problem of variable discretization in general, and multiple-variable discretization in particular, is as follows: , , , , , , , , , , , , , , , , Variable granulation (clustering/aggregation/transformation). Variable granulation is a term that could describe a variety of techniques, most of which are aimed at reducing dimensionality, redundancy, and storage requirements. We briefly describe some of the ideas here, and present pointers to the literature. Variable transformation. A number of classical methods, such as principal component analysis, multidimensional scaling, factor analysis, and structural equation modeling, and their relatives, fall under the genus of "variable transformation." Also in this category are more modern areas of study such as dimensionality reduction, projection pursuit, and independent component analysis. The common goal of these methods in general is to find a representation of the data in terms of new variables, which are a linear or nonlinear transformation of the original variables, and in which important statistical relationships emerge. The resulting variable sets are almost always smaller than the original variable set, and hence these methods can be loosely said to impose a granulation on the feature space. These dimensionality reduction methods are all reviewed in the standard texts, such as , , and . Variable aggregation. A different class of variable granulation methods derive more from data clustering methodologies than from the linear systems theory informing the above methods. It was noted fairly early that one may consider "clustering" related variables in just the same way that one considers clustering related data. 
In data clustering, one identifies a group of similar entities (using a "measure of similarity" suitable to the domain — ), and then in some sense "replaces" those entities with a prototype of some kind. The prototype may be the simple average of the data in the identified cluster, or some other representative measure. But the key idea is that in subsequent operations, we may be able to use the single prototype for the data cluster (along with perhaps a statistical model describing how exemplars are derived from the prototype) to "stand in" for the much larger set of exemplars. These prototypes are generally such as to capture most of the information of interest concerning the entities. Similarly, it is reasonable to ask whether a large set of variables might be aggregated into a smaller set of "prototype" variables that capture the most salient relationships between the variables. Although variable clustering methods based on linear correlation have been proposed (;), more powerful methods of variable clustering are based on the mutual information between variables. Watanabe has shown (;) that for any set of variables one can construct a "polytomic" (i.e., n-ary) tree representing a series of variable agglomerations in which the ultimate "total" correlation among the complete variable set is the sum of the "partial" correlations exhibited by each agglomerating subset (see figure). Watanabe suggests that an observer might seek to thus partition a system in such a way as to minimize the interdependence between the parts "... as if they were looking for a natural division or a hidden crack." One practical approach to building such a tree is to successively choose for agglomeration the two variables (either atomic variables or previously agglomerated variables) which have the highest pairwise mutual information . The product of each agglomeration is a new (constructed) variable that reflects the local joint distribution of the two agglomerating variables, and thus possesses an entropy equal to their joint entropy. (From a procedural standpoint, this agglomeration step involves replacing two columns in the attribute-value table—representing the two agglomerating variables—with a single column that has a unique value for every unique combination of values in the replaced columns . No information is lost by such an operation; however, if one is exploring the data for inter-variable relationships, it would generally "not" be desirable to merge redundant variables in this way, since in such a context it is likely to be precisely the redundancy or "dependency" between variables that is of interest; and once redundant variables are merged, their relationship to one another can no longer be studied. System granulation (aggregation). In database systems, aggregations (see e.g. OLAP aggregation and Business intelligence systems) result in transforming original data tables (often called information systems) into the tables with different semantics of rows and columns, wherein the rows correspond to the groups (granules) of original tuples and the columns express aggregated information about original values within each of the groups. Such aggregations are usually based on SQL and its extensions. The resulting granules usually correspond to the groups of original tuples with the same values (or ranges) over some pre-selected original columns. There are also other approaches wherein the groups are defined basing on, e.g., physical adjacency of rows. 
For example, Infobright implemented a database engine wherein data was partitioned onto "rough rows", each consisting of 64K of physically consecutive (or almost consecutive) rows. Rough rows were automatically labeled with compact information about their values on data columns, often involving multi-column and multi-table relationships. It resulted in a higher layer of granulated information where objects corresponded to rough rows and attributes - to various aspects of rough information. Database operations could be efficiently supported within such a new framework, with an access to the original data pieces still available . Concept granulation (component analysis). The origins of the "granular computing" ideology are to be found in the rough sets and fuzzy sets literatures. One of the key insights of rough set research—although by no means unique to it—is that, in general, the selection of different sets of features or variables will yield different "concept" granulations. Here, as in elementary rough set theory, by "concept" we mean a set of entities that are "indistinguishable" or "indiscernible" to the observer (i.e., a simple concept), or a set of entities that is composed from such simple concepts (i.e., a complex concept). To put it in other words, by projecting a data set (value-attribute system) onto different sets of variables, we recognize alternative sets of equivalence-class "concepts" in the data, and these different sets of concepts will in general be conducive to the extraction of different relationships and regularities. Equivalence class granulation. We illustrate with an example. Consider the attribute-value system below: When the full set of attributes formula_23 is considered, we see that we have the following seven equivalence classes or primitive (simple) concepts: formula_24 Thus, the two objects within the first equivalence class, formula_25 cannot be distinguished from one another based on the available attributes, and the three objects within the second equivalence class, formula_26 cannot be distinguished from one another based on the available attributes. The remaining five objects are each discernible from all other objects. Now, let us imagine a projection of the attribute value system onto attribute formula_22 alone, which would represent, for example, the view from an observer which is only capable of detecting this single attribute. Then we obtain the following much coarser equivalence class structure. formula_27 This is in a certain regard the same structure as before, but at a lower degree of resolution (larger grain size). Just as in the case of value granulation (discretization/quantization), it is possible that relationships (dependencies) may emerge at one level of granularity that are not present at another. As an example of this, we can consider the effect of concept granulation on the measure known as "attribute dependency" (a simpler relative of the mutual information). To establish this notion of dependency (see also rough sets), let formula_28 represent a particular concept granulation, where each formula_29 is an equivalence class from the concept structure induced by attribute set Q. 
For example, if the attribute set Q consists of attribute formula_22 alone, as above, then the concept structure formula_30 will be composed of formula_31 The dependency of attribute set Q on another attribute set P, formula_32 is given by formula_33 That is, for each equivalence class formula_29 in formula_34 we add up the size of its "lower approximation" (see rough sets) by the attributes in P, i.e., formula_35 More simply, this approximation is the number of objects which on attribute set P can be positively identified as belonging to target set formula_36 Added across all equivalence classes in formula_34 the numerator above represents the total number of objects which—based on attribute set P—can be positively categorized according to the classification induced by attributes Q. The dependency ratio therefore expresses the proportion (within the entire universe) of such classifiable objects, in a sense capturing the "synchronization" of the two concept structures formula_30 and formula_37 The dependency formula_38 "can be interpreted as a proportion of such objects in the information system for which it suffices to know the values of attributes in P to determine the values of attributes in Q" (Ziarko &amp; Shan 1995). Having gotten definitions now out of the way, we can make the simple observation that the choice of concept granularity (i.e., choice of attributes) will influence the detected dependencies among attributes. Consider again the attribute value table from above: Consider the dependency of attribute set formula_39 on attribute set formula_40 That is, we wish to know what proportion of objects can be correctly classified into classes of formula_30 based on knowledge of formula_37 The equivalence classes of formula_30 and of formula_41 are shown below. The objects that can be "definitively" categorized according to concept structure formula_30 based on formula_41 are those in the set formula_42 and since there are six of these, the dependency of Q on P, formula_43 This might be considered an interesting dependency in its own right, but perhaps in a particular data mining application only stronger dependencies are desired. We might then consider the dependency of the smaller attribute set formula_44 on the attribute set formula_40 The move from formula_39 to formula_44 induces a coarsening of the class structure formula_34 as will be seen shortly. We wish again to know what proportion of objects can be correctly classified into the (now larger) classes of formula_30 based on knowledge of formula_37 The equivalence classes of the new formula_30 and of formula_41 are shown below. Clearly, formula_30 has a coarser granularity than it did earlier. The objects that can now be "definitively" categorized according to the concept structure formula_30 based on formula_41 constitute the complete universe formula_45, and thus the dependency of Q on P, formula_46 That is, knowledge of membership according to category set formula_41 is adequate to determine category membership in formula_30 with complete certainty; In this case we might say that formula_47 Thus, by coarsening the concept structure, we were able to find a stronger (deterministic) dependency. 
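The dependency just defined is straightforward to compute directly from an attribute-value table. The C sketch below uses a small made-up table (not the one discussed in this section) and counts the objects whose P-equivalence class is contained in a single Q-equivalence class:

#include <stdio.h>

#define N_OBJ 6
#define N_ATT 4

/* 1 if objects i and j agree on every attribute listed in att[0 .. n_att-1]. */
static int indiscernible(const int t[N_OBJ][N_ATT], int i, int j,
                         const int *att, int n_att)
{
    int k;
    for (k = 0; k < n_att; k++)
        if (t[i][att[k]] != t[j][att[k]])
            return 0;
    return 1;
}

/* gamma_P(Q): the fraction of objects whose P-class lies wholly inside one
   Q-class, i.e. the size of the union of the P-lower approximations of the
   Q-classes divided by the size of the universe. */
static double dependency(const int t[N_OBJ][N_ATT],
                         const int *p, int np, const int *q, int nq)
{
    int i, j, positive = 0;
    for (i = 0; i < N_OBJ; i++) {
        int consistent = 1;
        for (j = 0; j < N_OBJ; j++)
            if (indiscernible(t, i, j, p, np) && !indiscernible(t, i, j, q, nq))
                consistent = 0; /* P cannot tell i from j, but Q can */
        positive += consistent;
    }
    return (double)positive / N_OBJ;
}

int main(void)
{
    /* Hypothetical table: rows are objects, columns are attributes P1..P4. */
    const int table[N_OBJ][N_ATT] = {
        {1, 2, 0, 0},
        {1, 2, 0, 0},
        {2, 0, 0, 1},
        {2, 0, 1, 2},
        {0, 1, 2, 2},
        {0, 1, 2, 2},
    };
    const int p[] = {0, 1}; /* P = {P1, P2} */
    const int q[] = {3};    /* Q = {P4}     */

    printf("gamma_P(Q) = %.2f\n", dependency(table, p, 2, q, 1)); /* 0.67 here */
    return 0;
}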
However, we also note that the classes induced in formula_30 from the reduction in resolution necessary to obtain this deterministic dependency are now themselves large and few in number; as a result, the dependency we found, while strong, may be less valuable to us than the weaker dependency found earlier under the higher resolution view of formula_48 In general it is not possible to test all sets of attributes to see which induced concept structures yield the strongest dependencies, and this search must be therefore be guided with some intelligence. Papers which discuss this issue, and others relating to intelligent use of granulation, are those by Y.Y. Yao and Lotfi Zadeh listed in the #References below. Component granulation. Another perspective on concept granulation may be obtained from work on parametric models of categories. In mixture model learning, for example, a set of data is explained as a mixture of distinct Gaussian (or other) distributions. Thus, a large amount of data is "replaced" by a small number of distributions. The choice of the number of these distributions, and their size, can again be viewed as a problem of "concept granulation". In general, a better fit to the data is obtained by a larger number of distributions or parameters, but in order to extract meaningful patterns, it is necessary to constrain the number of distributions, thus deliberately "coarsening" the concept resolution. Finding the "right" concept resolution is a tricky problem for which many methods have been proposed (e.g., AIC, BIC, MDL, etc.), and these are frequently considered under the rubric of "model regularization". Different interpretations of granular computing. Granular computing can be conceived as a framework of theories, methodologies, techniques, and tools that make use of information granules in the process of problem solving. In this sense, granular computing is used as an umbrella term to cover topics that have been studied in various fields in isolation. By examining all of these existing studies in light of the unified framework of granular computing and extracting their commonalities, it may be possible to develop a general theory for problem solving. In a more philosophical sense, granular computing can describe a way of thinking that relies on the human ability to perceive the real world under various levels of granularity (i.e., abstraction) in order to abstract and consider only those things that serve a specific interest and to switch among different granularities. By focusing on different levels of granularity, one can obtain different levels of knowledge, as well as a greater understanding of the inherent knowledge structure. Granular computing is thus essential in human problem solving and hence has a very significant impact on the design and implementation of intelligent systems. References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "p(Y=y_j|X=x_i) \\ge \\alpha ." }, { "math_id": 1, "text": "\\alpha = 1," }, { "math_id": 2, "text": "X=x_i \\rightarrow Y=y_j " }, { "math_id": 3, "text": "X=x_i," }, { "math_id": 4, "text": "Y=y_j " }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": "\\{x_1, x_2, x_3, x_4\\}" }, { "math_id": 7, "text": "\\{X_1, X_2\\}." }, { "math_id": 8, "text": "Y" }, { "math_id": 9, "text": "\\{y_1, y_2, y_3, y_4\\}" }, { "math_id": 10, "text": "\\{Y_1, Y_2\\}," }, { "math_id": 11, "text": "X=x_i \\rightarrow Y=y_j," }, { "math_id": 12, "text": "x_i" }, { "math_id": 13, "text": "y_j," }, { "math_id": 14, "text": "x_i," }, { "math_id": 15, "text": "p(Y=y_j|X=x_i) < 1." }, { "math_id": 16, "text": "X=X_1 \\leftrightarrow Y=Y_1 " }, { "math_id": 17, "text": "X=X_2 \\leftrightarrow Y=Y_2 " }, { "math_id": 18, "text": "X_1" }, { "math_id": 19, "text": "Y_1" }, { "math_id": 20, "text": "X_2" }, { "math_id": 21, "text": "Y_2." }, { "math_id": 22, "text": "P_1" }, { "math_id": 23, "text": "P = \\{P_1,P_2,P_3,P_4,P_5\\}" }, { "math_id": 24, "text": "\n\\begin{cases} \n\\{O_1,O_2\\} \\\\ \n\\{O_3,O_7,O_{10}\\} \\\\ \n\\{O_4\\} \\\\ \n\\{O_5\\} \\\\\n\\{O_6\\} \\\\\n\\{O_8\\} \\\\\n\\{O_9\\} \\end{cases}\n" }, { "math_id": 25, "text": "\\{O_1,O_2\\}," }, { "math_id": 26, "text": "\\{O_3,O_7,O_{10}\\}," }, { "math_id": 27, "text": "\n\\begin{cases} \n\\{O_1,O_2\\} \\\\ \n\\{O_3,O_5,O_7,O_9,O_{10}\\} \\\\ \n\\{O_4,O_6,O_8\\} \\end{cases}\n" }, { "math_id": 28, "text": "[x]_Q = \\{Q_1, Q_2, Q_3, \\dots, Q_N \\}" }, { "math_id": 29, "text": "Q_i" }, { "math_id": 30, "text": "[x]_Q" }, { "math_id": 31, "text": "\\begin{align}\nQ_1 &= \\{O_1,O_2\\}, \\\\\nQ_2 &= \\{O_3,O_5,O_7,O_9,O_{10}\\}, \\\\\nQ_3 &= \\{O_4,O_6,O_8\\}.\n\\end{align}" }, { "math_id": 32, "text": "\\gamma_P(Q)," }, { "math_id": 33, "text": "\n\\gamma_{P}(Q) = \\frac{\\left | \\sum_{i=1}^N {\\underline P}Q_i \\right |} {\\left | \\mathbb{U} \\right |} \\leq 1\n" }, { "math_id": 34, "text": "[x]_Q," }, { "math_id": 35, "text": "{\\underline P}Q_i." }, { "math_id": 36, "text": "Q_i." }, { "math_id": 37, "text": "[x]_P." }, { "math_id": 38, "text": "\\gamma_{P}(Q)" }, { "math_id": 39, "text": "Q = \\{P_4, P_5\\}" }, { "math_id": 40, "text": "P = \\{P_2, P_3\\}." }, { "math_id": 41, "text": "[x]_P" }, { "math_id": 42, "text": "\\{O_1,O_2,O_3,O_7,O_8,O_{10}\\}," }, { "math_id": 43, "text": "\\gamma_{P}(Q) = 6/10." }, { "math_id": 44, "text": "Q = \\{P_4\\}" }, { "math_id": 45, "text": "\\{O_1,O_2,\\ldots,O_{10}\\}" }, { "math_id": 46, "text": "\\gamma_{P}(Q) = 1." }, { "math_id": 47, "text": "P \\rightarrow Q." }, { "math_id": 48, "text": "[x]_Q." } ]
https://en.wikipedia.org/wiki?curid=1041204
1041214
Proportional counter
Gaseous ionization detector The proportional counter is a type of gaseous ionization detector device used to measure particles of ionizing radiation. The key feature is its ability to measure the energy of incident radiation, by producing a detector output pulse that is "proportional" to the radiation energy absorbed by the detector due to an ionizing event; hence the detector's name. It is widely used where energy levels of incident radiation must be known, such as in the discrimination between alpha and beta particles, or accurate measurement of X-ray radiation dose. A proportional counter uses a combination of the mechanisms of a Geiger–Müller tube and an ionization chamber, and operates in an intermediate voltage region between these. The accompanying plot shows the proportional counter operating voltage region for a co-axial cylinder arrangement. Operation. In a proportional counter the fill gas of the chamber is an inert gas which is ionized by incident radiation, and a quench gas to ensure each pulse discharge terminates; a common mixture is 90% argon, 10% methane, known as P-10. An ionizing particle entering the gas collides with an atom of the inert gas and ionizes it to produce an electron and a positively charged ion, commonly known as an "ion pair". As the ionizing particle travels through the chamber it leaves a trail of ion pairs along its trajectory, the number of which is proportional to the energy of the particle if it is fully stopped within the gas. Typically a 1 MeV stopped particle will create about 30,000 ion pairs. The chamber geometry and the applied voltage is such that in most of the chamber the electric field strength is low and the chamber acts as an ion chamber. However, the field is strong enough to prevent re-combination of the ion pairs and causes positive ions to drift towards the cathode and electrons towards the anode. This is the "ion drift" region. In the immediate vicinity of the anode wire, the field strength becomes large enough to produce Townsend avalanches. This avalanche region occurs only fractions of a millimeter from the anode wire, which itself is of a very small diameter. The purpose of this is to use the multiplication effect of the avalanche produced by each ion pair. This is the "avalanche" region. A key design goal is that each original ionizing event due to incident radiation produces only one avalanche. This is to ensure proportionality between the number of original events and the total ion current. For this reason, the applied voltage, the geometry of the chamber and the diameter of the anode wire are critical to ensure proportional operation. If avalanches start to self-multiply due to UV photons as they do in a Geiger–Muller tube, then the counter enters a region of "limited proportionality" until at a higher applied voltage the Geiger discharge mechanism occurs with complete ionization of the gas enveloping the anode wire and consequent loss of particle energy information. Therefore, it can be said that the proportional counter has the key design feature of two distinct ionization regions: The process of charge amplification greatly improves the signal-to-noise ratio of the detector and reduces the subsequent electronic amplification required. In summary, the proportional counter is an ingenious combination of two ionization mechanisms in one chamber which finds wide practical use. Gas mixtures. Usually the detector is filled with a noble gas; they have the lowest ionization voltages and do not degrade chemically. 
Typically neon, argon, krypton or xenon are used. Low-energy x-rays are best detected with lighter nuclei (neon), which are less sensitive to higher-energy photons. Krypton or xenon are chosen for higher-energy x-rays or for higher desired efficiency. Often the main gas is mixed with a quenching additive. A popular mixture is P10 (10% methane, 90% argon). Typical working pressure is 1 atmosphere (about 100 kPa). Signal amplification by multiplication. In the case of a cylindrical proportional counter, the multiplication, M, of the signal caused by an avalanche can be modeled as follows: formula_0 Where a is the anode wire radius, b is the radius of the counter, p is the pressure of the gas, and V is the operating voltage. K is a property of the gas used and relates the energy needed to cause an avalanche to the pressure of the gas. The final term formula_1 gives the change in voltage caused by an avalanche. (A small numerical sketch evaluating this expression is given at the end of the article.) Applications. Spectroscopy. The proportionality between the energy of the charged particle traveling through the chamber and the total charge created makes proportional counters useful for charged particle spectroscopy. By measuring the total charge (time integral of the electric current) between the electrodes, we can determine the particle's kinetic energy because the number of ion pairs created by the incident ionizing charged particle is proportional to its energy. The energy resolution of a proportional counter, however, is limited because both the initial ionization event and the subsequent 'multiplication' event are subject to statistical fluctuations characterized by a standard deviation equal to the square root of the average number formed. However, in practice these are not as great as would be predicted due to the effect of the empirical Fano factor which reduces these fluctuations. In the case of argon, this is experimentally about 0.2. Photon detection. Proportional counters are also useful for detection of high energy photons, such as gamma-rays, provided these can penetrate the entrance window. They are also used for the detection of X-rays to below 1 keV energy levels, using thin-walled tubes operating at or around atmospheric pressure. Radioactive contamination detection. Proportional counters in the form of large area planar detectors are used extensively to check for radioactive contamination on personnel, flat surfaces, tools, and items of clothing. This is normally in the form of installed instrumentation because of the difficulties of providing portable gas supplies for hand-held devices. They are constructed with a large area detection window made from a material such as metalized mylar which forms one wall of the detection chamber and is part of the cathode. The anode wire is routed in a convoluted manner within the detector chamber to optimize the detection efficiency. They are normally used to detect alpha and beta particles, and can enable discrimination between them by providing a pulse output proportional to the energy deposited in the chamber by each particle. They have a high efficiency for beta, but lower for alpha. The efficiency reduction for alpha is due to the attenuation effect of the entry window, though distance from the surface being checked also has a significant effect, and ideally a source of alpha radiation should be less than 10 mm from the detector due to attenuation in air. These chambers operate at very slight positive pressure above ambient atmospheric pressure. 
The gas can be sealed in the chamber, or can be changed continuously, in which case they are known as "gas-flow proportional counters". Gas flow types have the advantage that they will tolerate small holes in the mylar screen which can occur in use, but they do require a continuous gas supply. Guidance on application use. In the United Kingdom the Health and Safety Executive (HSE) has issued a user guidance note on selecting the correct radiation measurement instrument for the application concerned. This covers all radiation instrument technologies and is a useful comparative guide to the use of proportional counters. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
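As a numerical complement to the gas-multiplication relation in the "Signal amplification by multiplication" section above, here is a minimal sketch that simply evaluates that expression. All numerical values below (wire and tube radii, pressure, voltage, and the gas parameters K and the per-avalanche potential) are assumed, order-of-magnitude placeholders chosen for illustration, not recommended operating parameters.

```python
# Evaluate the cylindrical-counter gas-gain expression quoted in the article:
#   ln M = [V / ln(b/a)] * [ln 2 / dV] * [ ln( V / (p * a * ln(b/a)) ) - ln K ]
# Units must be used consistently; here lengths are in cm and pressure in atm,
# and every parameter value is an assumed placeholder.
import math

a  = 1.25e-3    # anode wire radius, cm (assumed)
b  = 1.25       # counter radius, cm (assumed)
p  = 1.0        # gas pressure, atm (assumed)
V  = 1500.0     # operating voltage, V (assumed)
K  = 4.8e4      # gas parameter, V cm^-1 atm^-1 (assumed value for an argon-methane fill)
dV = 23.6       # potential difference per ionizing event, V (assumed)

ln_ba = math.log(b / a)
ln_M = (V / ln_ba) * (math.log(2.0) / dV) * (math.log(V / (p * a * ln_ba)) - math.log(K))
print(f"ln M = {ln_M:.2f}, gas gain M = {math.exp(ln_M):.3g}")
```

With these placeholder numbers the script returns a gain of a few thousand; changing the voltage or the wire radius moves the result strongly, reflecting the sensitivity of proportional operation to geometry and applied voltage noted in the Operation section.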
[ { "math_id": 0, "text": "\\ln M=\\frac{V}{\\ln(b/a)}\\frac{\\ln 2}{\\Delta V_{\\lambda}}\\left[\\ln\\left(\\frac{V}{pa\\ln(b/a)}\\right)-\\ln K\\right] " }, { "math_id": 1, "text": "\\Delta V_{\\lambda} " } ]
https://en.wikipedia.org/wiki?curid=1041214
10412371
Hirsch conjecture
On lengths of shortest paths in convex polytopes In mathematical programming and polyhedral combinatorics, the Hirsch conjecture is the statement that the edge-vertex graph of an "n"-facet polytope in "d"-dimensional Euclidean space has diameter no more than "n" − "d". That is, any two vertices of the polytope must be connected to each other by a path of length at most "n" − "d". The conjecture was first put forth in a letter by Warren M. Hirsch to George B. Dantzig in 1957 and was motivated by the analysis of the simplex method in linear programming, as the diameter of a polytope provides a lower bound on the number of steps needed by the simplex method. The conjecture is now known to be false in general. The Hirsch conjecture was proven for "d" < 4 and for various special cases, while the best known upper bounds on the diameter are only sub-exponential in "n" and "d". After more than fifty years, a counterexample was announced in May 2010 by Francisco Santos Leal, from the University of Cantabria. The result was presented at the conference "100 Years in Seattle: the mathematics of Klee and Grünbaum" and appeared in "Annals of Mathematics". Specifically, the paper presented a 43-dimensional polytope of 86 facets with a diameter of more than 43. The counterexample has no direct consequences for the analysis of the simplex method, as it does not rule out the possibility of a larger but still linear or polynomial number of steps. Various equivalent formulations of the problem had been given, such as the "d"-step conjecture, which states that the diameter of any 2"d"-facet polytope in "d"-dimensional Euclidean space is no more than "d"; Santos Leal's counterexample also disproves this conjecture. Statement of the conjecture. The "graph" of a convex polytope formula_0 is any graph whose vertices are in bijection with the vertices of formula_0 in such a way that any two vertices of the graph are joined by an edge if and only if the two corresponding vertices of formula_0 are joined by an edge of the polytope. The diameter of formula_0, denoted formula_1, is the diameter of any one of its graphs. These definitions are well-defined since any two graphs of the same polytope must be isomorphic as graphs. We may then state the Hirsch conjecture as follows: Conjecture Let formula_0 be a "d"-dimensional convex polytope with "n" facets. Then formula_2. For example, a cube in three dimensions has six facets. The Hirsch conjecture then indicates that the diameter of this cube cannot be greater than three. Accepting the conjecture would imply that any two vertices of the cube may be connected by a path from vertex to vertex using, at most, three steps. For every dimension formula_3 the bound is known to be tight: there exist "d"-dimensional polytopes with "n" facets whose diameter is exactly "n" − "d", so the bound "n" − "d" cannot be replaced by anything smaller. In other words, in these higher dimensions a path of the full length "n" − "d" can genuinely be required to join some pair of vertices along the edges of the polytope. Since the simplex method essentially operates by constructing a path from some vertex of the feasible region to an optimal point, the diameter of the polytope gives a lower bound on the number of steps the method can be forced to take in the worst case; the Hirsch conjecture would have guaranteed that this geometric lower bound never exceeds "n" − "d". The Hirsch conjecture is a special case of the "polynomial Hirsch conjecture", which claims that there exists some positive integer "k" such that, for all polytopes formula_0, formula_4, where "n" is the number of facets of "P".
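As a small computational illustration of the cube example above (an added sketch, not part of the original article), the following script builds the edge graph of the "d"-dimensional hypercube, which has 2"d" facets, and checks by breadth-first search that its graph diameter equals "d", i.e. that the cube meets the Hirsch bound "n" − "d" with equality.

```python
# Check the Hirsch bound delta(P) <= n - d on the d-cube: its edge graph is the
# Hamming cube (vertices are bit strings, edges are single bit flips).
from collections import deque
from itertools import product

def cube_diameter(d):
    verts = list(product((0, 1), repeat=d))
    def neighbors(v):
        return [tuple(b ^ (1 if i == j else 0) for j, b in enumerate(v)) for i in range(d)]
    diam = 0
    for s in verts:                      # BFS from every vertex
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in neighbors(u):
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        diam = max(diam, max(dist.values()))
    return diam

for d in range(2, 7):
    n = 2 * d                            # number of facets of the d-cube
    print(f"d={d}: diameter={cube_diameter(d)}, Hirsch bound n-d={n - d}")
```

The brute-force all-pairs BFS is fine at this scale; for a general polytope the hard part is producing the vertex-edge graph from the facet description in the first place, which this sketch sidesteps by using the known combinatorics of the cube.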
Progress and intermediate results. The Hirsch conjecture has been proven true for a number of cases. For example, any polytope with dimension 3 or lower satisfies the conjecture. Any "d"-dimensional polytope with "n" facets such that formula_5 satisfies the conjecture as well. Other attempts to solve the conjecture manifested out of a desire to formulate a different problem whose solution would imply the Hirsch conjecture. One example of particular importance is the "d-step conjecture", a relaxation of the Hirsch conjecture that has actually been shown to be equivalent to it. Theorem The following statements are equivalent: formula_6 for every "d"-dimensional polytope formula_0 with "n" facets, and formula_7 for every "d"-dimensional polytope formula_0 with 2"d" facets. In other words, in order to prove or disprove the Hirsch conjecture, one only needs to consider polytopes with exactly twice as many facets as their dimension. Another significant relaxation is that the Hirsch conjecture holds for all polytopes if and only if it holds for all simple polytopes. Counterexample. Unfortunately, the Hirsch conjecture is not true in all cases, as shown by Francisco Santos in 2011. Santos' explicit construction of a counterexample comes both from the fact that the conjecture may be relaxed to only consider simple polytopes, and from the equivalence between the Hirsch and "d"-step conjectures. In particular, Santos produces his counterexample by examining a particular class of polytopes called "spindles". Definition A "d"-spindle is a "d"-dimensional polytope formula_0 for which there exists a pair of distinct vertices such that every facet of formula_0 contains exactly one of these two vertices. The length of the shortest path between these two vertices is called the "length" of the spindle. The disproof of the Hirsch conjecture relies on the following theorem, referred to as the "strong d-step theorem for spindles". Theorem (Santos) Let formula_0 be a "d"-spindle. Let "n" be the number of its facets, and let "l" be its length. Then there exists an formula_8-spindle, formula_9, with formula_10 facets and a length bounded below by formula_11. In particular, if formula_12, then formula_9 violates the "d"-step conjecture. Santos then proceeds to construct a 5-dimensional spindle with length 6, hence proving that there exists another spindle that serves as a counterexample to the Hirsch conjecture. The first of these two spindles has 48 facets and 322 vertices, while the spindle that actually disproves the conjecture has 86 facets and is 43-dimensional. This counterexample does not disprove the polynomial Hirsch conjecture, which remains an open problem. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "\\delta(P)" }, { "math_id": 2, "text": "\\delta(P)\\leq n-d" }, { "math_id": 3, "text": "d\\geq 8" }, { "math_id": 4, "text": "\\delta(P)=O(n^k)" }, { "math_id": 5, "text": " n-d\\leq 6 " }, { "math_id": 6, "text": "\\delta(P)\\leq n-d " }, { "math_id": 7, "text": "\\delta(P)\\leq d " }, { "math_id": 8, "text": "(n-d)" }, { "math_id": 9, "text": "P'" }, { "math_id": 10, "text": "2n-2d" }, { "math_id": 11, "text": "l+n-2d" }, { "math_id": 12, "text": "l>d" } ]
https://en.wikipedia.org/wiki?curid=10412371
10414062
Moisture advection
Moisture advection is the horizontal transport of water vapor by the wind. Measurement and knowledge of atmospheric water vapor, or "moisture", is crucial in the prediction of all weather elements, especially clouds, fog, temperature, humidity, thermal comfort indices and precipitation. Regions of moisture advection are often co-located with regions of warm advection. Definition. Using the classical definition of advection, moisture advection is defined as: formula_0 in which V is the horizontal wind vector, and formula_1 is the density of water vapor. However, water vapor content is usually measured in terms of mixing ratio (mass fraction) in reanalyses or dew point (temperature to partial vapor pressure saturation, i.e. relative humidity to 100%) in operational forecasting. The advection of dew point itself can be thought of as moisture advection: formula_2 Moisture flux. In terms of mixing ratio, horizontal transport/advection can be represented in terms of moisture flux: formula_3 in which q is the mixing ratio. The value can be integrated through the depth of the atmosphere to give the total vertically integrated transport of moisture: formula_4 where formula_5 is the density of air, and P is the pressure at the ground surface. The far-right expression uses the hydrostatic equilibrium approximation. The divergence (convergence) of this flux implies net evapotranspiration (precipitation) adding (removing) moisture to (from) the column: formula_6 where P, E, and the time-derivative term are precipitation, evapotranspiration, and the time rate of change of precipitable water, respectively, all expressed in units of mass/(unit area * unit time). One can convert these to the more typical depth units, such as mm, by dividing by the density of liquid water and applying the appropriate length unit conversion factor.
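To complement the flux definitions above, here is a minimal NumPy sketch (an added illustration; the pressure levels, mixing ratios and winds are invented sounding values, and the column is truncated at 300 hPa, above which the remaining moisture is negligible) that evaluates the vertically integrated zonal moisture flux from a handful of levels.

```python
# Column-integrated zonal moisture flux, F_u = -(1/g) * integral of (q * u) dp,
# evaluated by the trapezoidal rule on an illustrative sounding.
import numpy as np

g = 9.81                                                  # m s^-2
p = np.array([1000, 925, 850, 700, 500, 300]) * 100.0     # pressure levels, Pa (assumed)
q = np.array([14, 12, 10, 6, 2, 0.3]) / 1000.0            # mixing ratio, kg/kg (assumed)
u = np.array([5, 7, 9, 12, 18, 25], dtype=float)          # zonal wind, m/s (assumed)

qu = q * u
# Trapezoidal integration over p; dp is negative going upward, and the leading
# minus sign in the definition makes a positive q*u give a positive transport.
F_u = -np.sum(0.5 * (qu[1:] + qu[:-1]) * np.diff(p)) / g
print(f"Vertically integrated zonal moisture flux: {F_u:.0f} kg m^-1 s^-1")
```

The same trapezoidal treatment of the meridional wind gives the other component of the transport vector whose divergence appears in the moisture budget equation at the end of the section.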
[ { "math_id": 0, "text": "Adv(\\rho_m)=-\\mathbf{V}\\cdot\\nabla \\rho_m \\!" }, { "math_id": 1, "text": "\\rho_m" }, { "math_id": 2, "text": "Adv(T_d)=-\\mathbf{V}\\cdot\\nabla T_d \\!" }, { "math_id": 3, "text": "\\mathbf{f}=q\\mathbf{V}\\!" }, { "math_id": 4, "text": "\\mathbf{F}=\\int_0^\\infty \\! \\rho \\mathbf{f}\\,dz \\,=-\\int_P^0 \\! \\frac{\\mathbf{f}}{g}\\,dp \\," }, { "math_id": 5, "text": "\\rho" }, { "math_id": 6, "text": "P-E-\\frac{\\partial (\\int_0^\\infty \\! \\rho q\\,dz \\,)}{\\partial t}=-\\nabla \\cdot \\mathbf{F}\\!" } ]
https://en.wikipedia.org/wiki?curid=10414062
10415613
Schwinger variational principle
The Schwinger variational principle is a variational principle which expresses the scattering T-matrix as a functional depending on two unknown wave functions. The functional attains its stationary value, equal to the actual scattering T-matrix. The functional is stationary if and only if the two functions satisfy the Lippmann-Schwinger equation. The development of the variational formulation of scattering theory can be traced to the works of L. Hulthén and J. Schwinger in the 1940s. Linear form of the functional. The T-matrix expressed in the form of the stationary value of the functional reads formula_0 where formula_1 and formula_2 are the initial and the final states respectively, formula_3 is the interaction potential and formula_4 is the retarded Green's operator for collision energy formula_5. The condition for the stationary value of the functional is that the functions formula_6 and formula_7 satisfy the Lippmann-Schwinger equation formula_8 and formula_9 Fractional form of the functional. A different form of the stationary principle for the T-matrix reads formula_10 The wave functions formula_6 and formula_7 must satisfy the same Lippmann-Schwinger equations to give the stationary value. Application of the principle. The principle may be used for the calculation of the scattering amplitude in a similar way to the variational principle for bound states, i.e. the form of the wave functions formula_11 is guessed, with some free parameters that are determined from the condition of stationarity of the functional. References. <templatestyles src="Reflist/styles.css" />
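As a brief check of the stationarity claim in the "Linear form of the functional" section (an added sketch, not part of the original article), taking the first variation of the linear functional with respect to the bra wave function recovers the Lippmann-Schwinger equation:

```latex
% First variation of T[\psi',\psi] with respect to \langle\psi'|:
\delta T
  = \langle\delta\psi'|V|\phi\rangle
  - \langle\delta\psi'|\bigl(V - V G_0^{(+)}(E) V\bigr)|\psi\rangle
  = \langle\delta\psi'|\,V\Bigl(|\phi\rangle - |\psi\rangle + G_0^{(+)}(E)V|\psi\rangle\Bigr).
% Requiring \delta T = 0 for arbitrary \delta\psi' gives
% |\psi\rangle = |\phi\rangle + G_0^{(+)}(E)V|\psi\rangle,
% and varying |\psi\rangle instead yields the corresponding equation for |\psi'\rangle.
```

The fractional form quoted above has the additional property that its value is unchanged when either trial function is multiplied by a constant, which is why it is often the more convenient starting point for practical variational calculations.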
[ { "math_id": 0, "text": " \\langle\\phi'|T(E)|\\phi\\rangle = T[\\psi',\\psi] \\equiv\n \\langle\\psi'|V|\\phi\\rangle + \\langle\\phi'|V|\\psi\\rangle - \\langle\\psi'|V-VG_0^{(+)}(E)V|\\psi\\rangle ," }, { "math_id": 1, "text": "\\phi" }, { "math_id": 2, "text": "\\phi'" }, { "math_id": 3, "text": "V" }, { "math_id": 4, "text": "G_0^{(+)}(E)" }, { "math_id": 5, "text": "E" }, { "math_id": 6, "text": "\\psi" }, { "math_id": 7, "text": "\\psi'" }, { "math_id": 8, "text": " |\\psi\\rangle = |\\phi\\rangle + G_0^{(+)}(E)V|\\psi\\rangle" }, { "math_id": 9, "text": " |\\psi'\\rangle = |\\phi'\\rangle + G_0^{(-)}(E)V|\\psi'\\rangle ." }, { "math_id": 10, "text": " \\langle\\phi'|T(E)|\\phi\\rangle = T[\\psi',\\psi] \\equiv\n \\frac{\\langle\\psi'|V|\\phi\\rangle\\langle\\phi'|V|\\psi\\rangle}{\\langle\\psi'|V-VG_0^{(+)}(E)V|\\psi\\rangle}." }, { "math_id": 11, "text": "\\psi, \\psi'" } ]
https://en.wikipedia.org/wiki?curid=10415613
10415943
Spherical mean
In mathematics, the spherical mean of a function around a point is the average of all values of that function on a sphere of given radius centered at that point. Definition. Consider an open set "U" in the Euclidean space R"n" and a continuous function "u" defined on "U" with real or complex values. Let "x" be a point in "U" and "r" > 0 be such that the closed ball "B"("x", "r") of center "x" and radius "r" is contained in "U". The spherical mean over the sphere of radius "r" centered at "x" is defined as formula_1 where ∂"B"("x", "r") is the ("n" − 1)-sphere forming the boundary of "B"("x", "r"), d"S" denotes integration with respect to spherical measure and "ω""n"−1("r") is the "surface area" of this ("n" − 1)-sphere. Equivalently, the spherical mean is given by formula_2 where "ω""n"−1 is the area of the ("n" − 1)-sphere of radius 1. The spherical mean is often denoted as formula_3 The spherical mean is also defined for Riemannian manifolds in a natural manner. Properties and uses. From the continuity of formula_0 it follows that the function formula_4 is continuous, and that its limit as formula_5 is formula_6 Spherical means are used in finding the solution of the wave equation formula_7 posed in formula_8, where formula_9 denotes the number of space dimensions and the time variable ranges over formula_10. If formula_11 is an open set in formula_12 and formula_0 is a C2 function defined on formula_11, then formula_0 is harmonic if and only if for every formula_13 in formula_11 and every formula_14 such that the closed ball formula_15 is contained in formula_11, one has formula_16 This is the mean value property, which characterizes harmonic functions.
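As a small numerical illustration of the mean value property just mentioned (an added sketch, not part of the original article), the following snippet estimates a circular mean in the plane by averaging over equally spaced points of a circle and compares it with the value at the centre for the harmonic function u(x, y) = x^2 - y^2.

```python
# Circular (n = 2 spherical) mean of the harmonic function u(x, y) = x^2 - y^2
# over a circle of radius r centred at (a, b), compared with u(a, b).
# For a harmonic function the two agree; any residual here is floating-point error.
import math

def u(x, y):
    return x * x - y * y        # harmonic: u_xx + u_yy = 2 - 2 = 0

def circular_mean(f, a, b, r, n=10_000):
    total = 0.0
    for k in range(n):          # equally spaced points on the circle
        theta = 2.0 * math.pi * k / n
        total += f(a + r * math.cos(theta), b + r * math.sin(theta))
    return total / n

a, b, r = 1.5, -0.7, 2.0        # centre and radius, arbitrary illustrative values
print("circular mean  :", circular_mean(u, a, b, r))
print("value at centre:", u(a, b))
```

Replacing u with a non-harmonic function, for example x^2 + y^2, makes the two printed numbers differ by r^2, which is a quick way to see that the mean value property really does single out harmonic functions.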
[ { "math_id": 0, "text": "u" }, { "math_id": 1, "text": "\\frac{1}{\\omega_{n-1}(r)}\\int\\limits_{\\partial B(x, r)} \\! u(y) \\, \\mathrm{d} S(y) " }, { "math_id": 2, "text": "\\frac{1}{\\omega_{n-1}}\\int\\limits_{\\|y\\|=1} \\! u(x+ry) \\, \\mathrm{d}S(y) " }, { "math_id": 3, "text": "\\int\\limits_{\\partial B(x, r)}\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\, u(y) \\, \\mathrm{d} S(y). " }, { "math_id": 4, "text": "r\\to \\int\\limits_{\\partial B(x, r)}\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\, u(y) \\,\\mathrm{d}S(y)" }, { "math_id": 5, "text": "r\\to 0" }, { "math_id": 6, "text": "u(x)." }, { "math_id": 7, "text": "\\partial^2_t u=c^2\\,\\Delta u" }, { "math_id": 8, "text": "\\R^n" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "\\R" }, { "math_id": 11, "text": "U" }, { "math_id": 12, "text": "\\mathbb R^n" }, { "math_id": 13, "text": "x" }, { "math_id": 14, "text": "r>0" }, { "math_id": 15, "text": "B(x, r)" }, { "math_id": 16, "text": "u(x)=\\int\\limits_{\\partial B(x, r)}\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\, u(y) \\, \\mathrm{d}S(y)." } ]
https://en.wikipedia.org/wiki?curid=10415943
10416164
6-Phosphogluconate dehydrogenase
Class of enzymes 6-Phosphogluconate dehydrogenase (6PGD) is an enzyme in the pentose phosphate pathway. It forms ribulose 5-phosphate from 6-phosphogluconate: 6-phospho-D-gluconate + NAD(P)+ formula_0 D-Ribulose 5-phosphate + CO2 + NAD(P)H + H+ It is an oxidative carboxylase that catalyses the oxidative decarboxylation of 6-phosphogluconate into ribulose 5-phosphate in the presence of NADP. This reaction is a component of the hexose mono-phosphate shunt and pentose phosphate pathways (PPP). Prokaryotic and eukaryotic 6PGD are proteins of about 470 amino acids whose sequences are highly conserved. The protein is a homodimer in which the monomers act independently: each contains a large, mainly alpha-helical domain and a smaller beta-alpha-beta domain, containing a mixed parallel and anti-parallel 6-stranded beta sheet. NADP is bound in a cleft in the small domain, the substrate binding in an adjacent pocket. Biotechnological significance. Recently, 6PGD was demonstrated to catalyze also the reverse reaction (i.e. reductive carboxylation) "in vivo". Experiments using "Escherichia coli" selection strains revealed that this reaction was efficient enough to support the formation of biomass based solely on CO2 and pentose sugars. In the future, this property could be exploited for synthetic carbon fixation routes. Clinical significance. Mutations within the gene coding this enzyme result in 6-phosphogluconate dehydrogenase deficiency, an autosomal hereditary disease affecting the red blood cells. As a possible drug target. 6PGD is involved in cancer cell metabolism so 6PGD inhibitors have been sought. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=10416164
1041812
Superpotential
In theoretical physics, the superpotential is a function in supersymmetric quantum mechanics. Given a superpotential, two "partner potentials" are derived that can each serve as a potential in the Schrödinger equation. The partner potentials have the same spectrum, apart from a possible eigenvalue of zero, meaning that the physical systems represented by the two potentials have the same characteristic energies, apart from a possible zero-energy ground state. One-dimensional example. Consider a one-dimensional, non-relativistic particle with a two-state internal degree of freedom called "spin". (This is not quite the usual notion of spin encountered in nonrelativistic quantum mechanics, because "real" spin applies only to particles in three-dimensional space.) Let "b" and its Hermitian adjoint "b"† signify operators which transform a "spin up" particle into a "spin down" particle and vice versa, respectively. Furthermore, take "b" and "b"† to be normalized such that the anticommutator {"b","b"†} equals 1, and take "b"2 to equal 0. Let "p" represent the momentum of the particle and "x" represent its position with ["x","p"]=i, where we use natural units so that formula_0. Let "W" (the superpotential) represent an arbitrary differentiable function of "x" and define the supersymmetric operators "Q"1 and "Q"2 as formula_1 formula_2 The operators "Q"1 and "Q"2 are self-adjoint. Let the Hamiltonian be formula_3 where "W"' signifies the derivative of "W". Also note that {"Q"1,"Q"2}=0. Under these circumstances, the above system is a toy model of "N"=2 supersymmetry. The spin down and spin up states are often referred to as the "bosonic" and "fermionic" states, respectively, in an analogy to quantum field theory. With these definitions, "Q"1 and "Q"2 map "bosonic" states into "fermionic" states and vice versa. Restricting to the bosonic or fermionic sectors gives two partner potentials determined by formula_4 In four spacetime dimensions. In supersymmetric quantum field theories with four spacetime dimensions, which might have some connection to nature, it turns out that scalar fields arise as the lowest component of a chiral superfield, which tends to automatically be complex valued. We may identify the complex conjugate of a chiral superfield as an anti-chiral superfield. There are two possible ways to obtain an action from a set of superfields: one can either integrate a superfield over the whole superspace spanned by formula_5 and formula_6, or one can integrate a chiral superfield over the chiral half of a superspace, spanned by formula_5 and formula_7 but not formula_8. The second option tells us that an arbitrary holomorphic function of a set of chiral superfields can show up as a term in a Lagrangian which is invariant under supersymmetry. In this context, holomorphic means that the function can only depend on the chiral superfields, not their complex conjugates. We may call such a function "W", the superpotential. The fact that "W" is holomorphic in the chiral superfields helps explain why supersymmetric theories are relatively tractable, as it allows one to use powerful mathematical tools from complex analysis. Indeed, it is known that "W" receives no perturbative corrections, a result referred to as the perturbative non-renormalization theorem. Note that non-perturbative processes may correct this, for example through contributions to the beta functions due to instantons.
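Returning to the one-dimensional example above, a standard worked example (added here as an illustration, using the conventions of the formulas quoted in this article) is the linear superpotential, which produces a pair of shifted harmonic oscillators:

```latex
% Take W(x) = \omega x with \omega > 0.  The two partner Hamiltonians are
H_{\pm} \;=\; \frac{p^{2}}{2} + \frac{W^{2}}{2} \pm \frac{W'}{2}
        \;=\; \frac{p^{2}}{2} + \frac{\omega^{2}x^{2}}{2} \pm \frac{\omega}{2}.
% Their spectra are \omega(n+1) and \omega n for n = 0, 1, 2, \dots, so the two
% towers of levels coincide except for the zero-energy ground state of H_{-},
% exactly as stated in the introduction.
```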
[ { "math_id": 0, "text": "\\hbar=1" }, { "math_id": 1, "text": "Q_1=\\frac{1}{2}\\left[(p-iW)b+(p+iW)b^\\dagger\\right]" }, { "math_id": 2, "text": "Q_2=\\frac{i}{2}\\left[(p-iW)b-(p+iW)b^\\dagger\\right]" }, { "math_id": 3, "text": "H=\\{Q_1,Q_1\\}=\\{Q_2,Q_2\\}=\\frac{p^2}{2}+\\frac{W^2}{2}+\\frac{W'}{2}(bb^\\dagger-b^\\dagger b)" }, { "math_id": 4, "text": " H = \\frac{p^2}{2}+\\frac{W^2}{2} \\pm \\frac{W'}{2}" }, { "math_id": 5, "text": "x_{0,1,2,3}" }, { "math_id": 6, "text": "\\theta,\\bar\\theta" }, { "math_id": 7, "text": "\\theta" }, { "math_id": 8, "text": "\\bar\\theta" } ]
https://en.wikipedia.org/wiki?curid=1041812
1041856
Porin (protein)
Group of transport proteins Porins are beta barrel proteins that cross a cellular membrane and act as a pore, through which molecules can diffuse. Unlike other membrane transport proteins, porins are large enough to allow passive diffusion, i.e., they act as channels that are specific to different types of molecules. They are present in the outer membrane of gram-negative bacteria and some gram-positive mycobacteria (mycolic acid-containing actinomycetes), the outer membrane of mitochondria, and the outer chloroplast membrane (outer plastid membrane). Structure. Porins are composed of beta sheets (β sheets) made up of beta strands (β strands) which are linked together by beta turns (β turns) on the cytoplasmic side and long loops of amino acids on the other. The β strands lie in an antiparallel fashion and form a cylindrical tube, called a beta barrel (β barrel). The amino acid composition of the porin β strands are unique in that polar and nonpolar residues alternate along them. This means that the nonpolar residues face outward so as to interact with the nonpolar lipids of outer membrane, whereas the polar residues face inwards into the center of the beta barrel to create the aqueous channel. The specific amino acids in the channel determine the specificity of the porin to different molecules. The β barrels that make up a porin are composed of as few as eight β strands to as many as twenty-two β strands. The individual strands are joined together by loops and turns. The majority of porins are monomers; however, some dimeric porins have been discovered, as well as an octameric porin. Depending on the size of the porin, the interior of the protein may either be filled with water, have up to two β strands folded back into the interior, or contain a "stopper" segment composed of β strands. All porins form homotrimers in the outer membrane, meaning that three identical porin subunits associate together to form a porin super-structure with three channels. Hydrogen bonding and dipole-dipole interactions between each monomer in the homotrimer ensure that they do not dissociate, and remain together in the outer membrane. Several parameters have been used to describe the structure of a porin protein. They include the tilting angle (α), shear number (S), strand number (n), and barrel radius (R). The tilting angle refers to the angle relative to the membrane. The shear number (S) is the number of amino acid residues found in each β strands. Strand number (n) is the amount of β strands in the porin, and barrel radius (R) refers to the radius of the opening of the porin. These parameters are related via the following formulas: formula_0 and, formula_1 Using these formulas, the structure of a porin can be determined by knowing only a few of the available parameters. While the structure of many porins have been determined using X-ray crystallography, the alternative method of sequencing protein primary structure may also be used instead. Cellular roles. Porins are water-filled pores and channels found in the membranes of bacteria and eukaryotes. Porin-like channels have also been discovered in archaea. Note that the term "nucleoporin" refers to unrelated proteins that facilitate transport through nuclear pores in the nuclear envelope. Porins are primarily involved in passively transporting hydrophilic molecules of various sizes and charges across the membrane. For survival, certain required nutrients and substrates must be transported into the cells. 
Likewise, toxins and wastes must be transported out to avoid toxic accumulation. Additionally, porins can regulate permeability and prevent lysis by limiting the entry of detergents into the cell. Two types of porins exist to transport different materials: general and selective. General porins have no substrate specificities, though some exhibit slight preferences for anions or cations. Selective porins are smaller than general porins, and have specificities for chemical species. These specificities are determined by the threshold sizes of the porins, and the amino acid residues lining them. In gram-negative bacteria, the inner membrane is the major permeability barrier. The outer membrane is more permeable to hydrophilic substances, due to the presence of porins. Porins have threshold sizes of transportable molecules that depend on the type of bacteria and porin. Generally, only substances less than 600 daltons in size can diffuse through. Diversity. Porins were first discovered in gram-negative bacteria, but gram-positive bacteria with both types of porins have been found. They exhibit similar transport functions but have a more limited variety of porins, compared to the distribution found in gram-negative bacteria. Gram-positive bacteria lack outer membranes, so these porin channels are instead bound to specific lipids within the cell walls. Porins are also found in eukaryotes, specifically in the outer membranes of mitochondria and chloroplasts. The organelles contain general porins that are structurally and functionally similar to bacterial ones. These similarities have supported the Endosymbiotic theory, through which eukaryotic organelles arose from gram-negative bacteria. However, eukaryotic porins exhibit the same limited diversity as gram-positive porins, and also display a greater voltage-dependent role during metabolism. Archaea also contain ion channels that have originated from general porins. The channels are found in the cell envelope and help facilitate solute transfer. They have characteristics similar to bacterial and mitochondrial porins, indicating physiological overlaps across all three domains of life. Antibiotic resistance. Many porins are targets for host immune cells, resulting in signaling pathways that lead to bacterial degradation. Therapeutic treatments, like vaccinations and antibiotics, are used to supplement this immune response. Specific antibiotics have been designed to travel through porins in order to inhibit cellular processes. However, due to selective pressure, bacteria can develop resistance through mutations in the porin gene. The mutations may lead to a loss of porins, resulting in the antibiotics having a lower permeability or being completely excluded from transport. These changes have contributed to the global emergence of antibiotic resistance, and an increase in mortality rates from infections. Discovery. The discovery of porins has been attributed to Hiroshi Nikaido, nicknamed "the porinologist." Classification. According to TCDB, there are five evolutionarily independent superfamilies of porins. Porin superfamily I includes 47 families of porins with a range of numbers of trans-membrane β-strands (β-TMS). These include the GBP, SP and RPP porin families. While PSF I includes 47 families, PSF II-V each contain only 2 families. While PSF I derives its members primarily from gram-negative bacteria, plus one family of eukaryotic mitochondrial porins, PSF II and V porins are derived from Actinomycetota. PSF III and IV are derived from eukaryotic organelles. 
Porin Superfamily I.
1.B.1 - The General bacterial porin family
1.B.2 - The Chlamydial Porin (CP) Family
1.B.3 - The Sugar porin (SP) Family
1.B.4 - The "Brucella-Rhizobium porin" (BRP) Family
1.B.5 - The "Pseudomonas" OprP Porin (POP) Family
1.B.6 - OmpA-OmpF porin (OOP) family
1.B.7 "Rhodobacter" PorCa porin (RPP) family
1.B.8 Mitochondrial and plastid porin (MPP) family
1.B.9 FadL outer membrane protein (FadL) family
1.B.10 Nucleoside-specific channel-forming outer membrane porin (Tsx) family
1.B.11 Outer membrane fimbrial usher porin (FUP) family
1.B.12 Autotransporter-1 (AT-1) family
1.B.13 Alginate export porin (AEP) family
1.B.14 Outer membrane receptor (OMR) family
1.B.15 Raffinose porin (RafY) family
1.B.16 Short chain amide and urea porin (SAP) family
1.B.17 Outer membrane factor (OMF) family
1.B.18 Outer membrane auxiliary (OMA) protein family
1.B.19 Glucose-selective OprB porin (OprB) family
1.B.20 Two-partner secretion (TPS) family
1.B.21 OmpG porin (OmpG) family
1.B.22 Outer bacterial membrane secretin (secretin) family
1.B.23 Cyanobacterial porin (CBP) family
1.B.24 Mycobacterial porin
1.B.25 Outer membrane porin (Opr) family
1.B.26 Cyclodextrin porin (CDP) family
1.B.31 "Campylobacter jejuni" major outer membrane porin (MomP) family
1.B.32 Fusobacterial outer membrane porin (FomP) family
1.B.33 Outer membrane protein insertion porin (Bam complex) (OmpIP) family
1.B.34 Corynebacterial porins
1.B.35 Oligogalacturonate-specific porin (KdgM) family
1.B.39 Bacterial porin, OmpW (OmpW) family
1.B.42 - The Outer Membrane Lipopolysaccharide Export Porin (LPS-EP) Family
1.B.43 - The "Coxiella" Porin P1 (CPP1) Family
1.B.44 - The Probable Protein Translocating "Porphyromonas gingivalis" Porin (PorT) Family
1.B.49 - The "Anaplasma" P44 (A-P44) Porin Family
1.B.54 - Intimin/Invasin (Int/Inv) or Autotransporter-3 family
1.B.55 - The Poly Acetyl Glucosamine Porin (PgaA) Family
1.B.57 - The Legionella Major-Outer Membrane Protein (LM-OMP) Family
1.B.60 - The Omp50 Porin (Omp50 Porin) Family
1.B.61 - The Delta-Proteobacterial Porin (Delta-Porin) Family
1.B.62 - The Putative Bacterial Porin (PBP) Family
1.B.66 - The Putative Beta-Barrel Porin-2 (BBP2) Family
1.B.67 - The Putative Beta Barrel Porin-4 (BBP4) Family
1.B.68 - The Putative Beta Barrel Porin-5 (BBP5) Superfamily
1.B.70 - The Outer Membrane Channel (OMC) Family
1.B.71 - The Proteobacterial/Verrucomicrobial Porin (PVP) Family
1.B.72 - The Protochlamydial Outer Membrane Porin (PomS/T) Family
1.B.73 - The Capsule Biogenesis/Assembly (CBA) Family
1.B.78 - The DUF3374 Electron Transport-associated Porin (ETPorin) Family
Porin Superfamily II (MspA Superfamily).
1.B.24 - Mycobacterial porin
1.B.58 - Nocardial Hetero-oligomeric Cell Wall Channel (NfpA/B) Family
Porin Superfamily III.
1.B.28 - The Plastid Outer Envelope Porin of 24 kDa (OEP24) Family
1.B.47 - The Plastid Outer Envelope Porin of 37 kDa (OEP37) Family
Porin Superfamily IV (Tim17/OEP16/PxMPL (TOP) Superfamily). 
This superfamily includes proteins that comprise pores in multicomponent protein translocases as follows: 3.A.8 - [Tim17 (P39515) Tim22 (Q12328) Tim23 (P32897)]; 1.B.69 - [PXMP4 (Q9Y6I8) PMP24 (A2R8R0)]; 3.D.9 - [NDH 21.3 kDa component (P25710)]
1.B.30 - The Plastid Outer Envelope Porin of 16 kDa (OEP16) Family
1.B.69 - The Peroxysomal Membrane Porin 4 (PxMP4) Family
3.A.8 - The Mitochondrial Protein Translocase (MPT) Family
Porin Superfamily V (Corynebacterial PorA/PorH Superfamily).
1.B.34 - The Corynebacterial Porin A (PorA) Family
1.B.59 - The Outer Membrane Porin, PorH (PorH) Family
References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " 2 \\pi R = \\frac{nb}{2 \\cos( \\alpha )} " }, { "math_id": 1, "text": "\\tan(\\alpha) = \\frac{S a}{nb}" } ]
https://en.wikipedia.org/wiki?curid=1041856
1042053
Neutron scattering
Physical phenomenon Neutron scattering, the irregular dispersal of free neutrons by matter, can refer to either the naturally occurring physical process itself or to the man-made experimental techniques that use the natural process for investigating materials. The natural/physical phenomenon is of elemental importance in nuclear engineering and the nuclear sciences. Regarding the experimental technique, understanding and manipulating neutron scattering is fundamental to the applications used in crystallography, physics, physical chemistry, biophysics, and materials research. Neutron scattering is practiced at research reactors and spallation neutron sources that provide neutron radiation of varying intensities. Neutron diffraction (elastic scattering) techniques are used for analyzing structures; where inelastic neutron scattering is used in studying atomic vibrations and other excitations. Scattering of fast neutrons. "Fast neutrons" (see neutron temperature) have a kinetic energy above 1 MeV. They can be scattered by condensed matter—nuclei having kinetic energies far below 1 eV—as a valid experimental approximation of an elastic collision with a particle at rest. With each collision, the fast neutron transfers a significant part of its kinetic energy to the scattering nucleus (condensed matter), the more so the lighter the nucleus. And with each collision, the "fast" neutron is slowed until it reaches thermal equilibrium with the material in which it is scattered. Neutron moderators are used to produce thermal neutrons, which have kinetic energies below 1 eV (T &lt; 500K). Thermal neutrons are used to maintain a nuclear chain reaction in a nuclear reactor, and as a research tool in neutron scattering experiments and other applications of neutron science (see below). The remainder of this article concentrates on the scattering of thermal neutrons. Neutron-matter interaction. Because neutrons are electrically neutral, they penetrate more deeply into matter than electrically charged particles of comparable kinetic energy, and thus are valuable as probes of bulk properties. Neutrons interact with atomic nuclei and with magnetic fields from unpaired electrons, causing pronounced interference and energy transfer effects in neutron scattering experiments. Unlike an x-ray photon with a similar wavelength, which interacts with the electron cloud surrounding the nucleus, neutrons interact primarily with the nucleus itself, as described by Fermi's pseudopotential. Neutron scattering and absorption cross sections vary widely from isotope to isotope. Neutron scattering can be incoherent or coherent, also depending on isotope. Among all isotopes, hydrogen has the highest scattering cross section. Important elements like carbon and oxygen are quite visible in neutron scattering—this is in marked contrast to X-ray scattering where cross sections systematically increase with atomic number. Thus neutrons can be used to analyze materials with low atomic numbers, including proteins and surfactants. This can be done at synchrotron sources but very high intensities are needed, which may cause the structures to change. The nucleus provides a very short range, as isotropic potential varies randomly from isotope to isotope, which makes it possible to tune the (scattering) contrast to suit the experiment. Scattering almost always presents both elastic and inelastic components. The fraction of elastic scattering is determined by the Debye-Waller factor or the Mössbauer-Lamb factor. 
Depending on the research question, most measurements concentrate on either elastic or inelastic scattering. Achieving a precise velocity, i.e. a precise energy and de Broglie wavelength, of a neutron beam is important. Such single-energy beams are termed 'monochromatic', and monochromaticity is achieved either with a crystal monochromator or with a time of flight (TOF) spectrometer. In the time-of-flight technique, neutrons are sent through a sequence of two rotating slits such that only neutrons of a particular velocity are selected. Spallation sources have been developed that can create a rapid pulse of neutrons. The pulse contains neutrons of many different velocities or de Broglie wavelengths, but separate velocities of the scattered neutrons can be determined "afterwards" by measuring the time of flight of the neutrons between the sample and neutron detector. Magnetic scattering. The neutron has a net electric charge of zero, but has a significant magnetic moment, although only about 0.1% of that of the electron. Nevertheless, it is large enough to scatter from local magnetic fields inside condensed matter, providing a weakly interacting and hence penetrating probe of ordered magnetic structures and electron spin fluctuations. Inelastic neutron scattering. Inelastic neutron scattering is an experimental technique commonly used in condensed matter research to study atomic and molecular motion as well as magnetic and crystal field excitations. It distinguishes itself from other neutron scattering techniques by resolving the change in kinetic energy that occurs when the collision between neutrons and the sample is an inelastic one. Results are generally communicated as the dynamic structure factor (also called inelastic scattering law) formula_0, sometimes also as the dynamic susceptibility formula_1 where the scattering vector formula_2 is the difference between incoming and outgoing wave vector, and "formula_3" is the energy change experienced by the sample (negative that of the scattered neutron). When results are plotted as function of formula_4, they can often be interpreted in the same way as spectra obtained by conventional spectroscopic techniques; insofar as inelastic neutron scattering can be seen as a special spectroscopy. Inelastic scattering experiments normally require a monochromatization of the incident or outgoing beam and an energy analysis of the scattered neutrons. This can be done either through time-of-flight techniques (neutron time-of-flight scattering) or through Bragg reflection from single crystals (neutron triple-axis spectroscopy, neutron backscattering). Monochromatization is not needed in echo techniques (neutron spin echo, neutron resonance spin echo), which use the quantum mechanical phase of the neutrons in addition to their amplitudes. History. The first neutron diffraction experiments were performed in the 1930s. However it was not until around 1945, with the advent of nuclear reactors, that high neutron fluxes became possible, leading to the possibility of in-depth structure investigations. The first neutron-scattering instruments were installed in beam tubes at multi-purpose research reactors. In the 1960s, high-flux reactors were built that were optimized for beam-tube experiments. The development culminated in the high-flux reactor of the Institut Laue-Langevin (in operation since 1972) that achieved the highest neutron flux to this date. 
Besides a few high-flux sources, there were some twenty medium-flux reactor sources at universities and other research institutes. Starting in the 1980s, many of these medium-flux sources were shut down, and research concentrated at a few world-leading high-flux sources. Facilities. Today, most neutron scattering experiments are performed by research scientists who apply for beamtime at neutron sources through a formal proposal procedure. Because of the low count rates involved in neutron scattering experiments, relatively long periods of beam time (on the order of days) are usually required for usable data sets. Proposals are assessed for feasibility and scientific interest. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
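As a small numerical complement to the monochromatization and time-of-flight discussion above (an added sketch, not part of the original article), the following converts a thermal neutron energy into its velocity, de Broglie wavelength, and flight time over an assumed instrument length:

```python
# Thermal-neutron kinematics: velocity, de Broglie wavelength, and time of flight.
# The 25 meV energy and the 10 m flight path are illustrative values only.
import math

m_n = 1.674927e-27        # neutron mass, kg
h   = 6.626070e-34        # Planck constant, J s
eV  = 1.602177e-19        # J per eV

E = 25e-3 * eV            # kinetic energy: 25 meV (typical thermal neutron)
L = 10.0                  # flight path, m (assumed)

v   = math.sqrt(2.0 * E / m_n)          # non-relativistic velocity
lam = h / (m_n * v)                     # de Broglie wavelength
tof = L / v                             # time of flight over L

print(f"v = {v:.0f} m/s")                       # about 2200 m/s
print(f"lambda = {lam * 1e10:.2f} angstrom")    # about 1.8 angstrom
print(f"time of flight over {L} m = {tof * 1e3:.2f} ms")
```

The roughly millisecond-scale flight time is what makes chopper-based velocity selection and pulsed spallation sources practical: timing electronics can resolve such intervals very finely, so the neutron's energy can be inferred from its arrival time.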
[ { "math_id": 0, "text": "S(\\mathbf{Q},\\omega)" }, { "math_id": 1, "text": " \\chi^{\\prime \\prime}(\\mathbf{Q},\\omega)" }, { "math_id": 2, "text": "\\mathbf{Q}" }, { "math_id": 3, "text": "\\hbar \\omega" }, { "math_id": 4, "text": "\\omega" } ]
https://en.wikipedia.org/wiki?curid=1042053
10420935
Well kill
A well kill is the operation of placing a column of special fluids of the required density into a well bore in order to prevent the flow of reservoir fluids without the need for pressure control equipment at the surface. It works on the principle that the hydrostatic head of the "kill fluid" or "kill mud" will be enough to suppress the pressure of the formation fluids. Well kills may be planned in the case of advanced interventions such as workovers, or be contingency operations. The situation calling for a well kill will dictate the method taken. Not all well kills are deliberate. On occasion, the unintended accumulation of fluids, either from injection of chemicals like methanol from the surface, or from liquids produced from the reservoir, can be enough to kill the well, particularly gas wells, which are notoriously easy to kill. Well control in general is an extremely expensive and dangerous operation. Extensive training, testing, proof of competence, and experience are prerequisites for planning and performing a well kill, even a seemingly simple one. Many people have died through incorrectly performed well kills. Principles. The principle of a well kill revolves around the influence of the weight of a fluid column and hence the pressure exerted at the wellbore's bottom. formula_0 Where P is the pressure at a specific depth, h, within the column, g is the acceleration of gravity and ρ is the density of the fluid. It is common in the oil industry to use weight density, which is the product of mass density and the acceleration of gravity. This reduces the equation to: formula_1 Where γ is the weight density. Weight density may also be described as the pressure gradient because it directly determines how much extra pressure will be added by increasing depth of the column of fluid. The objective of a well kill operation is to make the pressure at the bottom of the kill fluid equal (or slightly higher) compared to the pressure of the reservoir fluids. Example. The pressure of the reservoir fluids at the bottom of the hole is 38 MPa. We have a kill fluid with a weight density of 16 kN.m−3. What will need to be the height of the hydrostatic head in order to kill the well? From the equation: formula_2 formula_3 formula_4 Therefore, a column of 2375 m of this fluid is needed. This refers to the true vertical depth of the column, not the measured depth, which is always larger than true vertical depth due to deviations from vertical. Maths in the oil field. In the oil industry, a pure SI system is extremely rare. Weight densities are commonly either given as specific gravity or in pounds per gallon. Simple conversion factors (0.433 for specific gravity and 0.052 for ppg) convert these values to a pressure gradient in psi per foot. Multiplying by the depth in feet gives the pressure at the bottom of the column. Of course, when the well is being drilled in metres as the depth unit, the maths gets more complicated. Since well-kill certification is normally (in the US/UK) done in "oil field units" (feet for length, inches for diameters, oilfield barrels for volume-pumped, psi for pressures), complex workarounds are often performed to keep the planned calculations in line with local regulations and industry "best practice". Methods of well kill. 
During all well kills, careful attention must be paid to not exceeding the formation strength at the weakest point of the wellbore (or casing/liner pipes, as appropriate), the "fracture pressure", otherwise fluid will be lost from the wellbore to the formation. Since this lost volume is unknown, it becomes very hard to tell how the kill is proceeding, especially if gas is involved with its large volume change through different parts of the wellbore. Combining a well kill with such a "lost circulation" situation is a serious problem. Lost circulation situations can, of course, also lead to well kill situations. Reverse circulation. This is often the tidiest way of making a planned well kill. It involves pumping kill fluid down the 'A' annulus of the well, through a point of communication between it and the production tubing just above the production packer and up the tubing, displacing the lighter well bore fluids, which are allowed to flow to production. The point of communication was traditionally a device called a sliding sleeve, or sliding side door, which is a hydraulically operated device, built into the production tubing. During normal operation, it would remain closed sealing off the tubing and the annulus, but for events such as this, it would be opened to allow the free flow of fluids between the two regions. These components have fallen out of favour as they were prone to leaking. Instead, it is now more common to punch a hole in the tubing for circulation kills. Although this permanently damages the tubing, given that most planned well kills are for workovers, this is not an issue, since the tubing is being pulled for replacement anyway. Bullheading. This is the most common method of a contingency well kill. If there is a sudden need to kill a well quickly, without the time for rigging up for circulation, the more blunt instrument of bullheading may be used. This involves simply pumping the kill fluid directly down the well bore, forcing the well bore fluids back into the reservoir. This can be effective at achieving the central aim of a well kill; building up a sufficient hydrostatic head in the well bore. However, it can be limited by the burst-pressure capabilities of the tubing or casing, and can risk damaging the reservoir by forcing undesired materials into it. The principal advantage is that it can be done with little advanced planning. Forward circulation. This is similar to reverse circulation, except the kill fluid is pumped into the production tubing and circulated out through the annulus. Though effective, it is not as desirable since it is preferred that the well bore fluids be displaced out to production, rather than the annulus. Lubricate and bleed. This is the most time-consuming form of well kill. It involves repeatedly pumping in small quantities of kill mud into the well bore and then bleeding off excess pressure. It works on the principle that the heavier kill mud will sink below the lighter well bore fluids and so bleeding off the pressure will remove the latter, leaving an increasing quantity of kill mud in the well bore with successive steps. Well kills during drilling operations. During drilling, pressure control is maintained through the use of precisely concocted drilling fluid, which balances out the pressure at the bottom of the hole. In the event of suddenly encountering a high-pressure pocket, pressure due to drilling fluid may not be able to counter the high formation pressure. Allowing formation fluid to enter into the well-bore. 
This influx of formation fluid is called kick and then it becomes necessary to kill the well. This is done by pumping kill mud down the drill pipe, where it circulates out the bottom and into the well bore. Reversing a well kill. The intention of a well kill (or the reality of an unintentional well kill) is to stop reservoir fluids flowing to surface. This of course creates problems when it is desirable to get the well flowing again. In order to reverse the well kill, the kill fluid must be displaced from the well bore. This involves injecting a gas at high pressure, usually nitrogen since it is inert and relatively cheap. A gas can be put under sufficient pressure to allow it to push heavy kill fluid, but will then expand and become light once pressure is removed. This means that having displaced the kill fluid, it will not itself kill the well. Low-density ("light") liquids such as diesel fuel, or the "base fluid" for a "(synthetic) oil-based mud" can also be used, depending on availability and pressure-management issues for a specific well. The reservoir fluids should be able to flow to surface, displacing the gas. The cheapest way to do it is similar to bullheading, where the light fluid (nitrogen, or low-density liquid) is pumped in under high pressure to force the kill fluid into the reservoir. This, of course, runs a high risk of causing well damage. The most effective way is to use coiled tubing, pumping the gas/diesel down the coil and circulating out the bottom into the well bore, where it will displace the kill mud to production. (Of course, getting a coiled tubing spread to the location may take weeks of work and logistics.) References. &lt;templatestyles src="Reflist/styles.css" /&gt;
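Returning to the worked example in the Principles section above, here is a minimal script (an added sketch; the reservoir pressure and fluid density reuse the numbers in the text, while the true vertical depth is an assumed illustrative value) that computes both the required column height for a given kill fluid and the kill-fluid weight density needed to balance the reservoir pressure over a full well depth.

```python
# Hydrostatic well-kill arithmetic: P = h * gamma, where gamma is the weight density.
# Reservoir pressure and fluid density reuse the worked example in the text;
# the 3000 m true vertical depth is an assumed illustrative value.
P_reservoir = 38e6        # reservoir pressure at bottom hole, Pa
gamma_kill  = 16e3        # kill fluid weight density, N/m^3 (16 kN/m^3)

h_required = P_reservoir / gamma_kill
print(f"Column height needed with this fluid: {h_required:.0f} m")   # 2375 m

tvd = 3000.0              # true vertical depth of the well, m (assumed)
gamma_needed = P_reservoir / tvd
print(f"Weight density needed to kill over {tvd:.0f} m: "
      f"{gamma_needed / 1e3:.1f} kN/m^3 "
      f"(specific gravity about {gamma_needed / 9.81e3:.2f})")
```

As the Principles section notes, the column height here is true vertical depth, so on a deviated well the measured depth of kill fluid pumped must be larger than the figure the script reports.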
[ { "math_id": 0, "text": "P=hg\\rho" }, { "math_id": 1, "text": "P=h\\gamma" }, { "math_id": 2, "text": "h=\\frac{P}{\\gamma}" }, { "math_id": 3, "text": "h=\\frac{38\\,MPa}{16\\,kNm^{-3}}" }, { "math_id": 4, "text": "h=2375\\,m" } ]
https://en.wikipedia.org/wiki?curid=10420935
1042164
Glossary of mathematical jargon
Collection of commonly used phrases found in mathematical fields The language of mathematics has a vast vocabulary of specialist and technical terms. It also has a certain amount of jargon: commonly used phrases which are part of the culture of mathematics, rather than of the subject. Jargon often appears in lectures, and sometimes in print, as informal shorthand for rigorous arguments or precise ideas. Much of this uses common English words, but with a specific non-obvious meaning when used in a mathematical sense. Some phrases, like "in general", appear below in more than one section. Philosophy of mathematics. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;[The paper of Eilenberg and Mac Lane (1942)] introduced the very abstract idea of a 'category' — a subject then called 'general abstract nonsense'! &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;[Grothendieck] raised algebraic geometry to a new level of abstraction...if certain mathematicians could console themselves for a time with the hope that all these complicated structures were 'abstract nonsense'...the later papers of Grothendieck and others showed that classical problems...which had resisted efforts of several generations of talented mathematicians, could be solved in terms of...complicated concepts. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;There are two canonical proofs that are always used to show non-mathematicians what a mathematical proof is like: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The beauty of a mathematical theory is independent of the aesthetic qualities...of the theory's rigorous expositions. Some beautiful theories may never be given a presentation which matches their beauty...Instances can also be found of mediocre theories of questionable beauty which are given brilliant, exciting expositions...[Category theory] is rich in beautiful and insightful definitions and poor in elegant proofs...[The theorems] remain clumsy and dull...[Expositions of projective geometry] vied for one another in elegance of presentation and in cleverness of proof...In retrospect, one wonders what all the fuss was about.Mathematicians may say that a theorem is beautiful when they really mean to say that it is enlightening. We acknowledge a theorem's beauty when we see how the theorem 'fits' in its place...We say that a proof is beautiful when such a proof finally gives away the secret of the theorem... &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Many of the results mentioned in this paper should be considered "folklore" in that they merely formally state ideas that are well-known to researchers in the area, but may not be obvious to beginners and to the best of my knowledge do not appear elsewhere in print. A term regarding statements. If a statement holds false, then it is said to exhibit "chicanery". "What do you mean a subset of formula_0 is compact if and only if it is bounded? This is chicanery!" &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Since half a century we have seen arise a crowd of bizarre functions which seem to try to resemble as little as possible the honest functions which serve some purpose...Nay more, from the logical point of view, it is these strange functions which are the most general...to-day they are invented expressly to put at fault the reasonings of our fathers... 
&lt;templatestyles src="Template:Blockquote/styles.css" /&gt;[The Dirichlet function] took on an enormous importance...as giving an incentive for the creation of new types of function whose properties departed completely from what intuitively seemed admissible. A celebrated example of such a so-called 'pathological' function...is the one provided by Weierstrass...This function is continuous but not differentiable. Note for that latter quote that as the differentiable functions are meagre in the space of continuous functions, as Banach found out in 1931, differentiable functions are colloquially speaking a rare exception among the continuous ones. Thus it can hardly be defended any-more to call non-differentiable continuous functions pathological. Descriptive informalities. Although ultimately every mathematical argument must meet a high standard of precision, mathematicians use descriptive but informal statements to discuss recurring themes or concepts with unwieldy formal statements. Note that many of the terms are completely rigorous in context. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Norbert A'Campo of the University of Basel once asked Grothendieck about something related to the Platonic solids. Grothendieck advised caution. The Platonic solids are so beautiful and so exceptional, he said, that one cannot assume such exceptional beauty will hold in more general situations. Proof terminology. The formal language of proof draws repeatedly from a small pool of ideas, many of which are invoked through various lexical shorthands in practice. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Let "V" be a finite-dimensional vector space over "k"...Let ("e""i")1≤ "i" ≤ "n" be a basis for "V"...There is an isomorphism of the polynomial algebra "k"["T""ij"]1≤ "i", "j" ≤ "n" onto the algebra Sym"k"("V" ⊗ "V"*)...It extends to an isomorphism of "k"[GL"n"] to the localized algebra Sym"k"("V" ⊗ "V"*)"D", where "D" = det("e""i" ⊗ "e""j"*)...We write "k"[GL("V")] for this last algebra. By transport of structure, we obtain a linear algebraic group GL("V") isomorphic to GL"n". Proof techniques. Mathematicians have several phrases to describe proofs or proof techniques. These are often used as hints for filling in tedious details. Miscellaneous. This section features terms used across different areas in mathematics, or terms that do not typically appear in more specialized glossaries. For the terms used only in some specific areas of mathematics, see glossaries in . B. &lt;templatestyles src="Glossary/styles.css" /&gt; C. &lt;templatestyles src="Glossary/styles.css" /&gt; D. &lt;templatestyles src="Glossary/styles.css" /&gt; F. &lt;templatestyles src="Glossary/styles.css" /&gt; I. &lt;templatestyles src="Glossary/styles.css" /&gt; M. &lt;templatestyles src="Glossary/styles.css" /&gt; P. &lt;templatestyles src="Glossary/styles.css" /&gt; S. &lt;templatestyles src="Glossary/styles.css" /&gt;
[ { "math_id": 0, "text": "\\R" }, { "math_id": 1, "text": "f \\colon A \\to C" }, { "math_id": 2, "text": "f = h \\circ g" }, { "math_id": 3, "text": "g \\colon A \\to B" }, { "math_id": 4, "text": "h \\colon B \\to C" }, { "math_id": 5, "text": "B" }, { "math_id": 6, "text": "g" }, { "math_id": 7, "text": "h" }, { "math_id": 8, "text": "\\R_{\\geq 0}\\cup\\{\\infty\\}," }, { "math_id": 9, "text": "\\N\\cup\\{\\infty\\}," }, { "math_id": 10, "text": "\\aleph_0" }, { "math_id": 11, "text": "(-1)^n" }, { "math_id": 12, "text": "x = y + 1" }, { "math_id": 13, "text": "x" }, { "math_id": 14, "text": "y + 1" } ]
https://en.wikipedia.org/wiki?curid=1042164
10422205
Classification of Fatou components
In mathematics, Fatou components are components of the Fatou set. They were named after Pierre Fatou. Rational case. If f is a rational function formula_0 defined in the extended complex plane, and if it is a nonlinear function (degree > 1) formula_1, then for a periodic component formula_2 of the Fatou set, exactly one of the following holds: formula_2 contains an attracting periodic point; formula_2 is parabolic; formula_2 is a Siegel disc; or formula_2 is a Herman ring. Attracting periodic point. The components of the map formula_3 contain the attracting points that are the solutions to formula_4. This is because this map is the one used to find solutions to the equation formula_4 by the Newton–Raphson formula. The solutions must naturally be attracting fixed points. Herman ring. The map formula_5 with t = 0.6151732... produces a Herman ring. It was shown by Shishikura that the degree of such a map must be at least 3, as in this example. More than one type of component. If the degree d is greater than 2, then there is more than one critical point, and there can then be more than one type of component. Transcendental case. Baker domain. In the case of transcendental functions there is another type of periodic Fatou component, called a Baker domain: these are "domains on which the iterates tend to an essential singularity (not possible for polynomials and rational functions)"; one example of such a function is: formula_6 Wandering domain. Transcendental maps may have wandering domains: these are Fatou components that are not eventually periodic.
[ { "math_id": 0, "text": "f = \\frac{P(z)}{Q(z)}" }, { "math_id": 1, "text": " d(f) = \\max(\\deg(P),\\, \\deg(Q))\\geq 2," }, { "math_id": 2, "text": "U" }, { "math_id": 3, "text": "f(z) = z - (z^3-1)/3z^2" }, { "math_id": 4, "text": "z^3=1" }, { "math_id": 5, "text": "f(z) = e^{2 \\pi i t} z^2(z - 4)/(1 - 4z)" }, { "math_id": 6, "text": "f(z) = z - 1 + (1 - 2z)e^z" } ]
https://en.wikipedia.org/wiki?curid=10422205
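As an illustration of the attracting-component example above, the short Python sketch below (not part of the article) iterates the Newton map f(z) = z − (z³ − 1)/(3z²) from a few starting points and shows each orbit converging to one of the three cube roots of unity, i.e. to an attracting fixed point inside its Fatou component. The starting points, iteration count and tolerance are arbitrary choices.

```python
# Sketch: iterate the Newton map for z^3 = 1 and watch orbits settle on the
# three cube roots of unity, which are (super)attracting fixed points.
import cmath

def newton_map(z):
    return z - (z**3 - 1) / (3 * z**2)

ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # cube roots of unity

def limit_root(z, iterations=60, tol=1e-9):
    """Return the cube root of unity the orbit of z approaches, or None."""
    for _ in range(iterations):
        if abs(z) < 1e-12:      # the critical point z = 0 has no well-defined image
            return None
        z = newton_map(z)
    for root in ROOTS:
        if abs(z - root) < tol:
            return root
    return None

if __name__ == "__main__":
    for z0 in (1.5 + 0.3j, -1 + 1j, -1 - 1j, 0.2 + 2j):
        print(f"start {z0}: limit {limit_root(z0)}")
```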
1042263
Poynting's theorem
Theorem in physics showing the conservation of energy for the electromagnetic field In electrodynamics, Poynting's theorem is a statement of conservation of energy for electromagnetic fields developed by British physicist John Henry Poynting. It states that in a given volume, the stored energy changes at a rate given by the work done on the charges within the volume, minus the rate at which energy leaves the volume. It is only strictly true in media which is not dispersive, but can be extended for the dispersive case. The theorem is analogous to the work-energy theorem in classical mechanics, and mathematically similar to the continuity equation. Definition. Poynting's theorem states that the rate of energy transfer per unit volume from a region of space equals the rate of work done on the charge distribution in the region, plus the energy flux leaving that region. Mathematically: formula_0 where: Integral form. Using the divergence theorem, Poynting's theorem can also be written in integral form: formula_2 formula_3 formula_4 where Continuity equation analog. In an electrical engineering context the theorem is sometimes written with the energy density term "u" expanded as shown. This form resembles the continuity equation: formula_7, where Derivation. For an individual charge in an electromagnetic field, the rate of work done by the field on the charge is given by the Lorentz Force Law as: formula_11 Extending this to a continuous distribution of charges, moving with current density J, gives: formula_12 By Ampère's circuital law: formula_13 Substituting this into the expression for rate of work gives: formula_14 Using the vector identity formula_15: formula_16 By Faraday's Law: formula_17 giving: formula_18 Continuing the derivation requires the following assumptions: It can be shown that: formula_20 and formula_21 and so: formula_22 Returning to the equation for rate of work, formula_23 Since the volume is arbitrary, this can be cast in differential form as: formula_24 where formula_25 is the Poynting vector. Poynting vector in macroscopic media. In a macroscopic medium, electromagnetic effects are described by spatially averaged (macroscopic) fields. The Poynting vector in a macroscopic medium can be defined self-consistently with microscopic theory, in such a way that the spatially averaged microscopic Poynting vector is exactly predicted by a macroscopic formalism. This result is strictly valid in the limit of low-loss and allows for the unambiguous identification of the Poynting vector form in macroscopic electrodynamics. Alternative forms. It is possible to derive alternative versions of Poynting's theorem. Instead of the flux vector E × H as above, it is possible to follow the same style of derivation, but instead choose E × B, the Minkowski form D × B, or perhaps D × H. Each choice represents the response of the propagation medium in its own way: the E × B form above has the property that the response happens only due to electric currents, while the D × H form uses only (fictitious) magnetic monopole currents. The other two forms (Abraham and Minkowski) use complementary combinations of electric and magnetic currents to represent the polarization and magnetization responses of the medium. Modification. The derivation of the statement is dependent on the assumption that the materials the equation models can be described by a set of susceptibility properties that are linear, isotropic, homogenous and independent of frequency. 
The assumption that the materials have no absorption must also be made. A modification to Poynting's theorem to account for variations includes a term for the rate of non-Ohmic absorption in a material, which can be calculated by a simplified approximation based on the Drude model. formula_26 Complex Poynting vector theorem. This form of the theorem is useful in Antenna theory, where one has often to consider harmonic fields propagating in the space. In this case, using phasor notation, formula_27 and formula_28. Then the following mathematical identity holds: formula_29 where formula_30 is the current density. Note that in free space, formula_31 and formula_32 are real, thus, taking the real part of the above formula, it expresses the fact that the averaged radiated power flowing through formula_33 is equal to the work on the charges. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "-\\frac{\\partial u}{\\partial t} = \\nabla\\cdot\\mathbf{S}+\\mathbf{J}\\cdot\\mathbf{E}" }, { "math_id": 1, "text": "-\\frac{\\partial u}{\\partial t}" }, { "math_id": 2, "text": "-\\frac{d}{dt} \\int_V u ~ \\mathrm{d}V=" }, { "math_id": 3, "text": "\\scriptstyle \\partial V" }, { "math_id": 4, "text": "\\mathbf{S}\\cdot \\mathrm{d}\\mathbf{A} + \\int_V\\mathbf{J}\\cdot\\mathbf{E} ~ \\mathrm{d}V" }, { "math_id": 5, "text": "u" }, { "math_id": 6, "text": "\\partial V \\!" }, { "math_id": 7, "text": "\n\\nabla\\cdot\\mathbf{S} +\n\\epsilon_0 \\mathbf{E}\\cdot\\frac{\\partial \\mathbf{E}}{\\partial t} + \\frac{\\mathbf{B}}{\\mu_0}\\cdot\\frac{\\partial\\mathbf{B}}{\\partial t} +\n\\mathbf{J}\\cdot\\mathbf{E} = 0\n" }, { "math_id": 8, "text": "\\epsilon_0 \\mathbf{E}\\cdot\\frac{\\partial \\mathbf{E}}{\\partial t}" }, { "math_id": 9, "text": "\\frac{\\mathbf{B}}{\\mu_0}\\cdot\\frac{\\partial\\mathbf{B}}{\\partial t}" }, { "math_id": 10, "text": "\\mathbf{J}\\cdot\\mathbf{E}" }, { "math_id": 11, "text": "\\frac{dW}{dt} = q \\mathbf{v} \\cdot \\mathbf{E}" }, { "math_id": 12, "text": "\\frac{dW}{dt} = \\int_V \\mathbf{J} \\cdot \\mathbf{E} ~\\mathrm d^{3}x" }, { "math_id": 13, "text": "\\mathbf{J} = \\nabla \\times \\mathbf{H} - \\frac{\\partial\\mathbf{D}}{\\partial t}" }, { "math_id": 14, "text": "\\int_V \\mathbf{J} \\cdot \\mathbf{E} ~\\mathrm d^{3}x = \\int_V \\left [ \\mathbf{E} \\cdot (\\nabla \\times \\mathbf{H}) - \\mathbf{E} \\cdot \\frac{\\partial\\mathbf{D}}{\\partial t}\\right ] ~ \\mathrm d^{3}x" }, { "math_id": 15, "text": "\\nabla \\cdot (\\mathbf{E} \\times \\mathbf{H}) =\\ (\\nabla {\\times} \\mathbf{E}) \\cdot \\mathbf{H} \\,-\\, \\mathbf{E} \\cdot (\\nabla {\\times} \\mathbf{H})" }, { "math_id": 16, "text": " \\int_V \\mathbf{J} \\cdot \\mathbf{E} ~ \\mathrm d^{3}x = - \\int_V \\left [ \\nabla \\cdot (\\mathbf{E} \\times \\mathbf{H}) - \\mathbf{H} \\cdot (\\nabla \\times \\mathbf{E}) + \\mathbf{E} \\cdot \\frac{\\partial\\mathbf{D}}{\\partial t}\\right ] ~ \\mathrm d^{3}x" }, { "math_id": 17, "text": "\\nabla \\times \\mathbf{E} = -\\frac{\\partial \\mathbf{B}} {\\partial t}" }, { "math_id": 18, "text": " \\int_V \\mathbf{J} \\cdot \\mathbf{E} ~ \\mathrm d^{3}x = - \\int_V \\left [ \\nabla \\cdot (\\mathbf{E} \\times \\mathbf{H}) + \\mathbf{E} \\cdot \\frac{\\partial\\mathbf{D}}{\\partial t} + \\mathbf{H} \\cdot \\frac{\\partial \\mathbf{B}} {\\partial t}\\right ] ~ \\mathrm d^{3}x" }, { "math_id": 19, "text": "u = \\frac{1}{2} (\\mathbf{E} \\cdot \\mathbf{D} + \\mathbf{B} \\cdot \\mathbf{H})" }, { "math_id": 20, "text": "\\frac{\\partial}{\\partial t}(\\mathbf{E} \\cdot \\mathbf{D}) = 2 \\mathbf{E} \\cdot \\frac{\\partial}{\\partial t} \\mathbf{D}" }, { "math_id": 21, "text": "\\frac{\\partial}{\\partial t}(\\mathbf{H} \\cdot \\mathbf{B}) = 2 \\mathbf{H} \\cdot \\frac{\\partial}{\\partial t} \\mathbf{B}" }, { "math_id": 22, "text": "\\frac{\\partial u}{\\partial t} = \\mathbf{E} \\cdot \\frac{\\partial\\mathbf{D}}{\\partial t} + \\mathbf{H} \\cdot \\frac{\\partial \\mathbf{B}} {\\partial t} " }, { "math_id": 23, "text": " \\int_V \\mathbf{J} \\cdot \\mathbf{E} ~ \\mathrm d^{3}x = - \\int_V \\left [ \\frac{\\partial u}{\\partial t} + \\nabla \\cdot (\\mathbf{E} \\times \\mathbf{H})\\right ] ~ \\mathrm d^{3}x" }, { "math_id": 24, "text": "-\\frac{\\partial u}{\\partial t} = \\nabla\\cdot\\mathbf{S}+\\mathbf{J}\\cdot\\mathbf{E}" }, { "math_id": 25, "text": "\\mathbf{S} = \\mathbf{E} \\times \\mathbf{H}" }, { "math_id": 26, "text": 
"\\frac{\\partial}{\\partial t} \\mathcal{U} + \\nabla \\cdot \\mathbf{S} + \\mathbf{E} \\cdot \\mathbf{J}_\\text{free} + \\mathcal{R}_{\\dashv\\int} = 0" }, { "math_id": 27, "text": "E(t) = E e^{j\\omega t}" }, { "math_id": 28, "text": "H(t) = H e^{j\\omega t}" }, { "math_id": 29, "text": "{1\\over 2} \\int_{\\partial \\Omega} E\\times H^* \\cdot d{\\mathbf a} = {j\\omega \\over 2}\\int_\\Omega (\\varepsilon E E^* - \\mu H H^*) dv - {1\\over 2} \\int_\\Omega EJ^* dv," }, { "math_id": 30, "text": "J" }, { "math_id": 31, "text": "\\varepsilon" }, { "math_id": 32, "text": "\\mu" }, { "math_id": 33, "text": "\\partial \\Omega" } ]
https://en.wikipedia.org/wiki?curid=1042263
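As a small self-check of the differential form stated above (a sketch, not from the article), the Python/SymPy snippet below takes a vacuum plane wave, for which J = 0, and verifies symbolically that ∂u/∂t + ∇·S reduces to zero. The field amplitudes and the vacuum dispersion relation ω = ck are the only inputs; everything else follows the definitions of u and S = E × H given above.

```python
# Symbolic check of Poynting's theorem for a vacuum plane wave (J = 0):
# d(u)/dt + d(S_z)/dz should vanish identically.
import sympy as sp

z, t, E0, eps0, mu0, k = sp.symbols("z t E_0 epsilon_0 mu_0 k", positive=True)
c = 1 / sp.sqrt(eps0 * mu0)           # speed of light from eps0 and mu0
w = c * k                             # vacuum dispersion relation

E = E0 * sp.cos(k * z - w * t)        # electric field along x
B = (E0 / c) * sp.cos(k * z - w * t)  # magnetic field along y

u = sp.Rational(1, 2) * (eps0 * E**2 + B**2 / mu0)  # energy density
S_z = E * B / mu0                                   # z-component of S = E x H, with H = B/mu0

print(sp.simplify(sp.diff(u, t) + sp.diff(S_z, z)))  # prints 0
```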
10422831
Pair potential
Potential energy of two interacting objects as a function of their distance In physics, a pair potential is a function that describes the potential energy of two interacting objects solely as a function of the distance between them. Some interactions, like Coulomb's law in electrodynamics or Newton's law of universal gravitation in mechanics naturally have this form for simple spherical objects. For other types of more complex interactions or objects it is useful and common to approximate the interaction by a pair potential, for example interatomic potentials in physics and computational chemistry that use approximations like the Lennard-Jones and Morse potentials. Functional form. The total energy of a system of formula_0 objects at positions formula_1, that interact through pair potential formula_2 is given by formula_3 Equivalently, this can be expressed as formula_4 This expression uses the fact that interaction is symmetric between particles formula_5 and formula_6. It also avoids self-interaction by not including the case where formula_7. Potential range. A fundamental property of a pair potential is its range. It is expected that pair potentials go to zero for infinite distance as particles that are too far apart do not interact. In some cases the potential goes quickly to zero and the interaction for particles that are beyond a certain distance can be assumed to be zero, these are said to be short-range potentials. Other potentials, like the Coulomb or gravitational potential, are long range: they go slowly to zero and the contribution of particles at long distances still contributes to the total energy. Computational cost. The total energy expression for pair potentials is quite simple to use for analytical and computational work. It has some limitations however, as the computational cost is proportional to the square of number of particles. This might be prohibitively expensive when the interaction between large groups of objects needs to be calculated. For short-range potentials the sum can be restricted only to include particles that are close, reducing the cost to linearly proportional to the number of particles. Infinitely periodic systems. In some cases it is necessary to calculate the interaction between an infinite number of particles arranged in a periodic pattern. Beyond pair potentials. Pair potentials are very common in physics and computational chemistry and biology; exceptions are very rare. An example of a potential energy function that is "not" a pair potential is the three-body Axilrod-Teller potential. Another example is the Stillinger-Weber potential for silicon, which includes the angle in a triangle of silicon atoms as an input parameter. Common pair potentials. Some commonly used pair potentials are listed below. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "N" }, { "math_id": 1, "text": "\\vec{R}_i" }, { "math_id": 2, "text": "v" }, { "math_id": 3, "text": "\nE=\\frac12\\sum_{i = 1}^N\\sum_{j\\neq i}^Nv\\left(\\left|\\vec{R}_i - \\vec{R}_j\\right|\\right)\\ .\n" }, { "math_id": 4, "text": "\nE=\\sum_{i = 1}^N\\sum_{j = i + 1}^Nv\\left(\\left|\\vec{R}_i - \\vec{R}_j\\right|\\right)\\ .\n" }, { "math_id": 5, "text": "i" }, { "math_id": 6, "text": "j" }, { "math_id": 7, "text": "i = j" } ]
https://en.wikipedia.org/wiki?curid=10422831
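To make the double-sum definition above concrete, here is a minimal Python sketch (not from the article) that evaluates the total energy for a handful of particles using the Lennard-Jones form as the pair potential; the epsilon and sigma values, the particle positions and the cutoff are illustrative only. The optional cutoff shows the short-range idea of skipping distant pairs.

```python
# Total energy E = sum over pairs i<j of v(|R_i - R_j|), with an optional cutoff.
import itertools
import math

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Illustrative pair potential v(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def total_energy(positions, pair_potential, cutoff=None):
    """Pairwise sum; without a cutoff the cost grows with the square of the particle count."""
    energy = 0.0
    for p, q in itertools.combinations(positions, 2):
        r = math.dist(p, q)
        if cutoff is not None and r > cutoff:
            continue  # short-range approximation: distant pairs contribute nothing
        energy += pair_potential(r)
    return energy

if __name__ == "__main__":
    points = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]  # a unit square, illustrative
    print("full sum:   ", total_energy(points, lennard_jones))
    print("with cutoff:", total_energy(points, lennard_jones, cutoff=1.2))
```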
1042498
Spontaneous parametric down-conversion
Optical process Spontaneous parametric down-conversion (also known as SPDC, parametric fluorescence or parametric scattering) is a nonlinear instant optical process that converts one photon of higher energy (namely, a pump photon) into a pair of photons (namely, a signal photon, and an idler photon) of lower energy, in accordance with the law of conservation of energy and law of conservation of momentum. It is an important process in quantum optics, for the generation of entangled photon pairs, and of single photons. Basic process. A nonlinear crystal is used to produce pairs of photons from a photon beam. In accordance with the law of conservation of energy and law of conservation of momentum, the pairs have combined energies and momenta equal to the energy and momentum of the original photon. Because the index of refraction changes with frequency (dispersion), only certain triplets of frequencies will be phase-matched so that simultaneous energy and momentum conservation can be achieved. Phase-matching is most commonly achieved using birefringent nonlinear materials, whose index of refraction changes with polarization. As a result of this, different types of SPDC are categorized by the polarizations of the input photon (the pump) and the two output photons (signal and idler). If the signal and idler photons share the same polarization with each other and with the destroyed pump photon, it is deemed Type-0 SPDC; if the signal and idler photons share the same polarization with each other, but are orthogonal to the pump polarization, it is Type-I SPDC; and if the signal and idler photons have perpendicular polarizations, it is deemed Type-II SPDC. The conversion efficiency of SPDC is typically very low, with the highest efficiency obtained on the order of 4×10⁻⁶ incoming photons for PPLN in waveguides. However, if one half of the pair is detected at any time then its partner is known to be present. The degenerate portion of the output of a Type I down converter is a squeezed vacuum that contains only even photon number terms. The nondegenerate output of the Type II down converter is a two-mode squeezed vacuum. Example. In a commonly used SPDC apparatus design, a strong laser beam, termed the "pump" beam, is directed at a BBO (beta-barium borate) or lithium niobate crystal. Most of the photons continue straight through the crystal. However, occasionally, some of the photons undergo spontaneous down-conversion with Type II polarization correlation, and the resultant correlated photon pairs have trajectories that are constrained along the sides of two cones whose axes are symmetrically arranged relative to the pump beam. Due to the conservation of momentum, the two photons are always symmetrically located on the sides of the cones, relative to the pump beam. In particular, the trajectories of a small proportion of photon pairs will lie simultaneously on the two lines where the surfaces of the two cones intersect. This results in entanglement of the polarizations of the pairs of photons emerging on those two lines. The photon pairs are in an equal weight quantum superposition of the unentangled states formula_0 and formula_1, corresponding to polarizations of left-hand side photon, right-hand side photon. Another crystal is KDP (potassium dihydrogen phosphate) which is mostly used in Type I down conversion, where both photons have the same polarization. Some of the characteristics of effective parametric down-converting nonlinear crystals include:
SPDC was demonstrated as early as 1967 by S. E. Harris, M. K. Oshman, and R. L. Byer, as well as by D. Magde and H. Mahr. It was first applied to experiments related to coherence by two independent pairs of researchers in the late 1980s: Carroll Alley and Yanhua Shih, and Rupamanjari Ghosh and Leonard Mandel. The duality between incoherent (Van Cittert–Zernike theorem) and biphoton emissions was found. Applications. SPDC allows for the creation of optical fields containing (to a good approximation) a single photon. As of 2005, this is the predominant mechanism for an experimenter to create single photons (also known as Fock states). The single photons as well as the photon pairs are often used in quantum information experiments and applications like quantum cryptography and Bell test experiments. SPDC is widely used to create pairs of entangled photons with a high degree of spatial correlation. Such pairs are used in ghost imaging, in which information is combined from two light detectors: a conventional, multi-pixel detector that does not view the object, and a single-pixel (bucket) detector that does view the object. Alternatives. The newly observed effect of two-photon emission from electrically driven semiconductors has been proposed as a basis for more efficient sources of entangled photon pairs. Other than SPDC-generated photon pairs, the photons of a semiconductor-emitted pair usually are not identical but have different energies. Until recently, within the constraints of quantum uncertainty, the pair of emitted photons were assumed to be co-located: they are born from the same location. However, a new nonlocalized mechanism for the production of correlated photon pairs in SPDC has highlighted that occasionally the individual photons that constitute the pair can be emitted from spatially separated points. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\vert H \\rangle\\vert V \\rangle " }, { "math_id": 1, "text": " \\vert V \\rangle \\vert H\\rangle " } ]
https://en.wikipedia.org/wiki?curid=1042498
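The energy-conservation constraint mentioned above (the pump photon energy equals the sum of the signal and idler photon energies) can be written in vacuum wavelengths as 1/λ_pump = 1/λ_signal + 1/λ_idler. The Python sketch below only illustrates this bookkeeping; the 405 nm pump and the signal wavelengths are example values, not taken from the article.

```python
# Energy conservation in SPDC expressed in wavelengths:
# 1/lambda_pump = 1/lambda_signal + 1/lambda_idler.
def idler_wavelength(pump_nm, signal_nm):
    """Idler wavelength (nm) fixed by energy conservation for a given pump and signal."""
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

PUMP_NM = 405.0   # illustrative blue-violet pump
for signal_nm in (780.0, 810.0, 850.0):
    print(f"signal {signal_nm:6.1f} nm -> idler {idler_wavelength(PUMP_NM, signal_nm):6.1f} nm")
# Degenerate case: signal = idler = 2 * 405 nm = 810 nm.
```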
10425761
Load factor (aeronautics)
Ratio of the lift of an aircraft to its weight In aeronautics, the load factor is the ratio of the lift of an aircraft to its weight and represents a global measure of the stress ("load") to which the structure of the aircraft is subjected: formula_0 where formula_1 is the load factor, formula_2 is the lift formula_3 is the weight. Since the load factor is the ratio of two forces, it is dimensionless. However, its units are traditionally referred to as g, because of the relation between load factor and apparent acceleration of gravity felt on board the aircraft. A load factor of one, or 1 g, represents conditions in straight and level flight, where the lift is equal to the weight. Load factors greater or less than one (or even negative) are the result of maneuvers or wind gusts. Load factor and g. The fact that the load factor is commonly expressed in "g" units does not mean that it is dimensionally the same as the acceleration of gravity, also indicated with "g". The load factor is strictly non-dimensional. The use of "g" units refers to the fact that an observer on board an aircraft will experience an "apparent" acceleration of gravity (i.e. relative to their frame of reference) equal to load factor times the acceleration of gravity. For example, an observer on board an aircraft performing a turn with a load factor of 2 (i.e. a 2 g turn) will see objects falling to the floor at twice the normal acceleration of gravity. In general, whenever the term "load factor" is used, it is formally correct to express it using numbers only, as in "a maximum load factor of 4". If the term "load factor" is omitted then "g" is used instead, as in "pulling a 3 g turn". A load factor greater than 1 will cause the stall speed to increase by a factor equal to the square root of the load factor. For example, if the load factor is 2, the stall speed will increase by a ratio of formula_4, or about 140%. Positive and negative load factors. The load factor, and in particular its sign, depends not only on the forces acting on the aircraft, but also on the orientation of its vertical axis. During straight and level flight, the load factor is +1 if the aircraft is flown "the right way up", whereas it becomes −1 if the aircraft is flown "upside-down" (inverted). In both cases the lift vector is the same (as seen by an observer on the ground), but in the latter the vertical axis of the aircraft points downwards, making the lift vector's sign negative. In turning flight the load factor is normally greater than +1. For example, in a turn with a 60° angle of bank the load factor is +2. Again, if the same turn is performed with the aircraft inverted, the load factor becomes −2. In general, in a balanced turn in which the angle of bank is "θ", the load factor "n" is related to the cosine of "θ" as formula_5 Another way to achieve load factors significantly higher than +1 is to pull on the elevator control at the bottom of a dive, whereas strongly pushing the stick forward during straight and level flight is likely to produce negative load factors, by causing the lift to act in the opposite direction to normal, i.e. downwards. Load factor and lift. In the definition of load factor, the lift is not simply that one generated by the aircraft's wing, instead it is the vector sum of the lift generated by the wing, the fuselage and the tailplane, or in other words it is the component perpendicular to the airflow of the sum of all aerodynamic forces acting on the aircraft. 
The lift in the load factor is also intended as having a sign, which is positive if the lift vector points in, or near the same direction as the aircraft's vertical axis, or negative if it points in, or near the opposite direction. Design standards. Excessive load factors must be avoided because of the possibility of exceeding the structural strength of the aircraft. Civil aviation authorities specify the load factor limits within which different categories of aircraft are required to operate without damage. For example, the US Federal Aviation Regulations prescribe the following limits (for the most restrictive case): However, many aircraft types, in particular aerobatic airplanes, are designed so that they can tolerate load factors much higher than the minimum required. For example, the Sukhoi Su-26 family has load factor limits of −10 to +12. The maximum load factors, both positive and negative, applicable to an aircraft are usually specified in the aircraft flight manual. Human perception of load factor. When the load factor is +1, all occupants of the aircraft feel that their weight is normal. When the load factor is greater than +1 all occupants feel heavier than usual. For example, in a 2 g maneuver all occupants feel that their weight is twice normal. When the load factor is zero, or very small, all occupants feel weightless. When the load factor is negative, all occupants feel that they are upside down. Humans have limited ability to withstand a load factor significantly greater than 1, both positive and negative. Unmanned aerial vehicles can be designed for much greater load factors, both positive and negative, than conventional aircraft, allowing these vehicles to be used in maneuvers that would be incapacitating for a human pilot. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "n = \\frac{L}{W}," }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "L" }, { "math_id": 3, "text": "W" }, { "math_id": 4, "text": "\\sqrt{2}" }, { "math_id": 5, "text": "n = \\frac{1}{\\cos\\theta}." } ]
https://en.wikipedia.org/wiki?curid=10425761
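The two relations quoted above, n = 1/cos θ for a balanced level turn and the stall-speed increase by the square root of the load factor, are easy to tabulate. The Python sketch below does so; the 1 g stall speed of 50 knots is an arbitrary example value.

```python
# Load factor in a level banked turn and the resulting stall-speed increase.
import math

def load_factor(bank_angle_deg):
    """n = 1 / cos(theta) for a balanced, level turn."""
    return 1.0 / math.cos(math.radians(bank_angle_deg))

def stall_speed(vs_1g, n):
    """Stall speed scales with the square root of the load factor."""
    return vs_1g * math.sqrt(n)

VS_1G = 50.0  # knots, illustrative 1 g stall speed
for bank in (0, 30, 45, 60):
    n = load_factor(bank)
    print(f"bank {bank:2d} deg: n = {n:4.2f} g, stall speed = {stall_speed(VS_1G, n):5.1f} kt")
# At 60 degrees of bank, n = 2 and the stall speed rises by a factor of sqrt(2), about 1.41.
```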
10426365
Transaldolase
Enzyme family &lt;templatestyles src="Stack/styles.css"/&gt; Transaldolase is an enzyme (EC 2.2.1.2) of the non-oxidative phase of the pentose phosphate pathway. In humans, transaldolase is encoded by the "TALDO1" gene. The following chemical reaction is catalyzed by transaldolase: sedoheptulose 7-phosphate + glyceraldehyde 3-phosphate formula_0 erythrose 4-phosphate + fructose 6-phosphate Clinical significance. The pentose phosphate pathway has two metabolic functions: (1) generation of nicotinamide adenine dinucleotide phosphate (reduced NADPH), for reductive biosynthesis, and (2) formation of ribose, which is an essential component of ATP, DNA, and RNA. Transaldolase links the pentose phosphate pathway to glycolysis. In patients with deficiency of transaldolase, there's an accumulation of erythritol (from erythrose 4-phosphate), D-arabitol, and ribitol. The deletion in 3 base pairs in the "TALDO1" gene results in the absence of serine at position 171 of the transaldolase protein, which is part of a highly conserved region, suggesting that the mutation causes the transaldolase deficiency that is found in erythrocytes and lymphoblasts. The deletion of this amino acid can lead to liver cirrhosis and hepatosplenomegaly (enlarged spleen and liver) during early infancy. Transaldolase is also a target of autoimmunity in patients with multiple sclerosis. Structure. Transaldolase is a single domain composed of 337 amino acids. The core structure is an α/β barrel, similar to other class I aldolases, made up of eight parallel β-sheets and seven α-helices. There are also seven additional α-helices that are not part of the barrel. Hydrophobic amino acids are located between the β-sheets in the barrel and the surrounding α-helices to contribute to packing, such as the area containing Leu-168, Phe-170, Phe-189, Gly-311, and Phe-315. In the crystal, human transaldolase forms a dimer, with the two subunits connected by 18 residues in each subunit. See mechanism to the left for details. The active site, located in the center of the barrel, contains three key residues: lysine-142, glutamate-106, and aspartate-27. The lysine holds the sugar in place while the glutamate and aspartate act as proton donors and acceptors. Mechanism of catalysis. The residue of lysine-142 in the active site of transaldolase forms a Schiff base with the keto group in sedoheptulose-7-phosphate after deprotonation by another active site residue, glutamate-106. The reaction mechanism is similar to the reverse reaction catalyzed by aldolase: The bond joining carbons 3 and 4 is broken, leaving dihydroxyacetone joined to the enzyme via a Schiff base. This cleavage reaction generates the unusual aldose sugar erythrose-4-phosphate. Then transaldolase catalyzes the condensation of glyceraldehyde-3-phosphate with the Schiff base of dihydroxyacetone, yielding enzyme-bound fructose 6-phosphate. Hydrolysis of the Schiff base liberates free fructose 6-phosphate, one of the products of the pentose phosphate pathway. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=10426365
1042859
131 (number)
Natural number 131 (one hundred [and] thirty-one) is the natural number following 130 and preceding 132. In mathematics. 131 is a Sophie Germain prime, an irregular prime, the second 3-digit palindromic prime, and also a permutable prime with 113 and 311. It can be expressed as the sum of three consecutive primes, 131 = 41 + 43 + 47. 131 is an Eisenstein prime with no imaginary part and real part of the form formula_0. Because the next odd number, 133, is a semiprime, 131 is a Chen prime. 131 is an Ulam number. 131 is a full reptend prime in base 10 (and also in base 2). The decimal expansion of 1/131 repeats the digits 0076335877862595419847328244274809160305343511450381679389312977099236641221374045801526717557251908396946564885496183206106870229 indefinitely. 131 is the fifth discriminant of imaginary quadratic fields with class number 5, where the 131st prime number 739 is the fifteenth such discriminant. Meanwhile, there are conjectured to be a total of 131 discriminants of class number 8 (only one more discriminant could exist). In other fields. 131 is also: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "3n - 1" } ]
https://en.wikipedia.org/wiki?curid=1042859
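Several of the properties listed above are easy to confirm computationally. The Python sketch below (using SymPy as a convenience, not anything referenced in the article) checks that 131 is prime, that it is a Sophie Germain prime, that 113, 131 and 311 are all prime, that 131 is the sum of the three consecutive primes 41, 43 and 47, and that 10 and 2 both have multiplicative order 130 modulo 131, which is what "full reptend prime in base 10 (and also in base 2)" amounts to.

```python
# Quick checks of the number-theoretic claims about 131.
from sympy.ntheory import isprime, n_order

N = 131
assert isprime(N)
assert isprime(2 * N + 1)                        # Sophie Germain: 2*131 + 1 = 263 is prime
assert all(isprime(m) for m in (113, 131, 311))  # permutable prime family
assert 41 + 43 + 47 == N                         # sum of three consecutive primes
assert n_order(10, N) == N - 1                   # full reptend in base 10 (period 130)
assert n_order(2, N) == N - 1                    # full reptend in base 2
print("all checks passed")
```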
1043036
Bertrand's ballot theorem
Theorem that gives the probability that an election winner will lead the loser throughout the count In combinatorics, Bertrand's ballot problem is the question: "In an election where candidate A receives "p" votes and candidate B receives "q" votes with "p" &gt; "q", what is the probability that A will be strictly ahead of B throughout the count?" The answer is formula_0 The result was first published by W. A. Whitworth in 1878, but is named after Joseph Louis François Bertrand who rediscovered it in 1887. In Bertrand's original paper, he sketches a proof based on a general formula for the number of favourable sequences using a recursion relation. He remarks that it seems probable that such a simple result could be proved by a more direct method. Such a proof was given by Désiré André, based on the observation that the unfavourable sequences can be divided into two equally probable cases, one of which (the case where B receives the first vote) is easily computed; he proves the equality by an explicit bijection. A variation of his method is popularly known as André's reflection method, although André did not use any reflections. Bertrand's ballot theorem is related to the cycle lemma. They give similar formulas, but the cycle lemma considers circular shifts of a given ballot counting order rather than all permutations. Example. Suppose there are 5 voters, of whom 3 vote for candidate "A" and 2 vote for candidate "B" (so "p" = 3 and "q" = 2). There are ten equally likely orders in which the votes could be counted: For the order "AABAB", the tally of the votes as the election progresses is: For each column the tally for "A" is always larger than the tally for "B", so "A" is always strictly ahead of "B". For the order "AABBA" the tally of the votes as the election progresses is: For this order, "B" is tied with "A" after the fourth vote, so "A" is not always strictly ahead of "B". Of the 10 possible orders, "A" is always ahead of "B" only for "AAABB" and "AABAB". So the probability that "A" will always be strictly ahead is formula_1 and this is indeed equal to formula_2 as the theorem predicts. Equivalent problems. Favourable orders. Rather than computing the probability that a random vote counting order has the desired property, one can instead compute the number of favourable counting orders, then divide by the total number of ways in which the votes could have been counted. (This is the method used by Bertrand.) The total number of ways is the binomial coefficient formula_3; Bertrand's proof shows that the number of favourable orders in which to count the votes is formula_4 (though he does not give this number explicitly). And indeed after division this gives formula_5. Random walks. Another equivalent problem is to calculate the number of random walks on the integers that consist of "n" steps of unit length, beginning at the origin and ending at the point "m", that never become negative. As "n" and "m" have the same parity and formula_6, this number is formula_7 When formula_8 and formula_9 is even, this gives the Catalan number formula_10. Thus the probability that a random walk is never negative and returns to origin at time formula_9 is formula_11. By Stirling's formula, when formula_12, this probability is formula_13. [Note that formula_14 have the same parity as follows: let formula_15 be the number of "positive" moves, i.e., to the right, and let formula_16 be the number of "negative" moves, i.e., to the left. Since formula_17 and formula_18, we have formula_19 and formula_20. 
Since formula_15 and formula_16 are integers,formula_14 have the same parity] Proof by reflection. For A to be strictly ahead of B throughout the counting of the votes, there can be no ties. Separate the counting sequences according to the first vote. Any sequence that begins with a vote for B must reach a tie at some point, because A eventually wins. For any sequence that begins with A and reaches a tie, reflect the votes up to the point of the first tie (so any A becomes a B, and vice versa) to obtain a sequence that begins with B. Hence every sequence that begins with A and reaches a tie is in one-to-one correspondence with a sequence that begins with B, and the probability that a sequence begins with B is formula_21, so the probability that A always leads the vote is formula_22 the probability of sequences that tie at some point formula_22 the probability of sequences that tie at some point and begin with A or B formula_23 the probability of sequences that tie at some point and begin with B formula_24 formula_23 the probability that a sequence begins with B formula_24 formula_25 Proof by induction. Another method of proof is by mathematical induction: formula_30 Proof by the cycle lemma. A simple proof is based on the cycle lemma of Dvoretzky and Motzkin. Call a ballot sequence "dominating" if A is strictly ahead of B throughout the counting of the votes. The cycle lemma asserts that any sequence of formula_31 A's and formula_32 B's, where formula_33, has precisely formula_34 dominating cyclic permutations. To see this, just arrange the given sequence of formula_35 A's and B's in a circle and repeatedly remove adjacent pairs AB until only formula_34 A's remain. Each of these A's was the start of a dominating cyclic permutation before anything was removed. So formula_34 out of the formula_35 cyclic permutations of any arrangement of formula_31 A votes and formula_32 B votes are dominating. Proof by martingales. Let formula_36. Define the "backwards counting" stochastic process formula_37 where formula_38 is the lead of candidate A over B, after formula_39 votes have come in. Claim: formula_40 is a martingale process. Given formula_40, we know that formula_41, so of the first formula_39 votes, formula_42 were for candidate A, and formula_43 were for candidate B. So, with probability formula_44, we have formula_45, and formula_46. Similarly for the other one. Then compute to find formula_47. Define the stopping time formula_48 as either the minimum formula_49 such that formula_50, or formula_51 if there's no such formula_49. Then the probability that candidate A leads all the time is just formula_52, which by the optional stopping theorem is formula_53 Bertrand's and André's proofs. Bertrand expressed the solution as formula_54 where formula_55 is the total number of voters and formula_56 is the number of voters for the first candidate. He states that the result follows from the formula formula_57 where formula_58 is the number of favourable sequences, but "it seems probable that such a simple result could be shown in a more direct way". Indeed, a more direct proof was soon produced by Désiré André. His approach is often mistakenly labelled "the reflection principle" by modern authors but in fact uses a permutation. He shows that the "unfavourable" sequences (those that reach an intermediate tie) consist of an equal number of sequences that begin with A as those that begin with B. 
Every sequence that begins with B is unfavourable, and there are formula_59 such sequences with a B followed by an arbitrary sequence of ("q"-1) B's and "p" A's. Each unfavourable sequence that begins with A can be transformed to an arbitrary sequence of ("q"-1) B's and "p" A's by finding the first B that violates the rule (by causing the vote counts to tie) and deleting it, and interchanging the order of the remaining parts. To reverse the process, take any sequence of ("q"-1) B's and "p" A's and search from the end to find where the number of A's first exceeds the number of B's, and then interchange the order of the parts and place a B in between. For example, the unfavourable sequence AABBABAA corresponds uniquely to the arbitrary sequence ABAAAAB. From this, it follows that the number of favourable sequences of "p" A's and "q" B's is formula_60 and thus the required probability is formula_61 as expected. Variant: ties allowed. The original problem is to find the probability that the first candidate is always strictly ahead in the vote count. One may instead consider the problem of finding the probability that the second candidate is never ahead (that is, with ties are allowed). In this case, the answer is formula_62 The variant problem can be solved by the reflection method in a similar way to the original problem. The number of possible vote sequences is formula_63. Call a sequence "bad" if the second candidate is ever ahead, and if the number of bad sequences can be enumerated then the number of "good" sequences can be found by subtraction and the probability can be computed. Represent a voting sequence as a lattice path on the Cartesian plane as follows: Each such path corresponds to a unique sequence of votes and will end at ("p", "q"). A sequence is 'good' exactly when the corresponding path never goes above the diagonal line "y" = "x"; equivalently, a sequence is 'bad' exactly when the corresponding path touches the line "y" = "x" + 1. For each 'bad' path "P", define a new path "P"′ by reflecting the part of "P" up to the first point it touches the line across it. "P"′ is a path from (−1, 1) to ("p", "q"). The same operation applied again restores the original "P". This produces a one-to-one correspondence between the 'bad' paths and the paths from (−1, 1) to ("p", "q"). The number of these paths is formula_64 and so that is the number of 'bad' sequences. This leaves the number of 'good' sequences as formula_65 Since there are formula_63 altogether, the probability of a sequence being good is formula_66. In fact, the solutions to the original problem and the variant problem are easily related. For candidate A to be strictly ahead throughout the vote count, they must receive the first vote and for the remaining votes (ignoring the first) they must be either strictly ahead or tied throughout the count. Hence the solution to the original problem is formula_67 as required. Conversely, the tie case can be derived from the non-tie case. Note that the "number" of non-tie sequences with p+1 votes for A is equal to the number of tie sequences with p votes for A. The number of non-tie votes with p + 1 votes for A votes is formula_68, which by algebraic manipulation is formula_69, so the "fraction" of sequences with p votes for A votes is formula_70. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{p-q}{p+q}." }, { "math_id": 1, "text": "\\frac{2}{10}=\\frac{1}{5}," }, { "math_id": 2, "text": "\\frac{3-2}{3+2}" }, { "math_id": 3, "text": "\\tbinom{p+q}{p}" }, { "math_id": 4, "text": "\\tbinom{p+q-1}{p-1}-\\tbinom{p+q-1}{p}" }, { "math_id": 5, "text": "\\tfrac{p}{p+q}-\\tfrac{q}{p+q}=\\tfrac{p-q}{p+q}" }, { "math_id": 6, "text": "n\\ge m\\ge 0" }, { "math_id": 7, "text": "\\binom{n}{\\tfrac{n+m}2}-\\binom{n}{\\tfrac{n+m}2+1} = \\frac{m+1}{\\tfrac{n+m}2+1}\\binom{n}{\\tfrac{n+m}2}." }, { "math_id": 8, "text": "m=0" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "\\frac1{\\tfrac{n}2+1}\\binom{n}{\\tfrac{n}2}" }, { "math_id": 11, "text": "2^{-n}\\frac1{\\tfrac{n}2+1}\\binom{n}{\\tfrac{n}2}" }, { "math_id": 12, "text": "n\\to \\infty" }, { "math_id": 13, "text": "\\sim \\frac{\\sqrt 2}{n^{3/2}}" }, { "math_id": 14, "text": "m,n" }, { "math_id": 15, "text": "P" }, { "math_id": 16, "text": "N" }, { "math_id": 17, "text": "P+N=n" }, { "math_id": 18, "text": "P-N=m" }, { "math_id": 19, "text": "P=\\frac{n+m}{2}" }, { "math_id": 20, "text": "N=\\frac{n-m}{2}" }, { "math_id": 21, "text": "q/(p+q)" }, { "math_id": 22, "text": " = 1- " }, { "math_id": 23, "text": " = 1- 2 \\times (" }, { "math_id": 24, "text": ")" }, { "math_id": 25, "text": " = 1-2\\frac{q}{p+q}=\\frac{p-q}{p+q}" }, { "math_id": 26, "text": "p > q" }, { "math_id": 27, "text": "p \\geq q" }, { "math_id": 28, "text": "p = q" }, { "math_id": 29, "text": "a = b" }, { "math_id": 30, "text": "{a \\over (a+b)}{(a-1)-b \\over (a+b-1)}+{b \\over (a+b)}{a-(b-1) \\over (a+b-1)}={a-b \\over a+b}." }, { "math_id": 31, "text": "p" }, { "math_id": 32, "text": "q" }, { "math_id": 33, "text": "p> q" }, { "math_id": 34, "text": "p-q" }, { "math_id": 35, "text": "p+q" }, { "math_id": 36, "text": "n= p+q" }, { "math_id": 37, "text": "X_k = \\frac{S_{n-k}}{n-k}; \\quad k = 0, 1, ..., n-1" }, { "math_id": 38, "text": "S_{n-k}" }, { "math_id": 39, "text": "n-k" }, { "math_id": 40, "text": "X_k" }, { "math_id": 41, "text": "S_{n-k} = (n-k)X_k" }, { "math_id": 42, "text": "\\frac{X_k + 1}2 (n-k)" }, { "math_id": 43, "text": "\\frac{-X_k + 1}2 (n-k)" }, { "math_id": 44, "text": "\\frac{X_k + 1}2" }, { "math_id": 45, "text": "S_{n-k-1} = S_{n-k} -1" }, { "math_id": 46, "text": "X_{k+1} = \\frac{n-k}{n-k-1}X_k - \\frac{1}{n-k-1}" }, { "math_id": 47, "text": "E[X_{k+1}|X_k] = X_k" }, { "math_id": 48, "text": "T" }, { "math_id": 49, "text": "k" }, { "math_id": 50, "text": "X_k =0" }, { "math_id": 51, "text": "n-1" }, { "math_id": 52, "text": "E[X_T]" }, { "math_id": 53, "text": "E[X_T] = E[X_0] = \\frac{p-q}{p+q}" }, { "math_id": 54, "text": "\\frac{2m-\\mu}{\\mu}" }, { "math_id": 55, "text": "\\mu=p+q" }, { "math_id": 56, "text": "m=p" }, { "math_id": 57, "text": "P_{m+1,\\mu+1}=P_{m,\\mu}+P_{m+1,\\mu}," }, { "math_id": 58, "text": "P_{m,\\mu}" }, { "math_id": 59, "text": "\\tbinom{p+q-1}{q-1}" }, { "math_id": 60, "text": "\\binom{p+q}{q}-2\\binom{p+q-1}{q-1}=\\binom{p+q}{q}\\frac{p-q}{p+q}" }, { "math_id": 61, "text": "\\frac{p-q}{p+q}" }, { "math_id": 62, "text": "\\frac{p+1-q}{p+1}." }, { "math_id": 63, "text": "\\tbinom{p+q}{q}" }, { "math_id": 64, "text": "\\tbinom{p+q}{q-1}" }, { "math_id": 65, "text": "\\binom{p+q}{q} - \\binom{p+q}{q-1} = \\binom{p+q}{q}\\frac{p+1-q}{p+1}." 
}, { "math_id": 66, "text": "\\tfrac{p+1-q}{p+1}" }, { "math_id": 67, "text": "\\frac{p}{p+q}\\frac{p-1+1-q}{p-1+1}=\\frac{p-q}{p+q}" }, { "math_id": 68, "text": "\\tfrac{p + 1 - q}{p + 1 + q} \\tbinom{p + 1 + q}{q} " }, { "math_id": 69, "text": "\\tfrac{p + 1 - q}{p + 1} \\tbinom{p + q}{q} " }, { "math_id": 70, "text": "\\tfrac{p + 1 - q}{p + 1}" } ]
https://en.wikipedia.org/wiki?curid=1043036
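For small vote counts the theorem, and the ties-allowed variant discussed above, can be checked by brute force. The Python sketch below (an illustration, not part of the article) enumerates every distinct counting order, counts those in which A stays strictly ahead (or, for the variant, is never behind), and compares the resulting fractions with (p − q)/(p + q) and (p + 1 − q)/(p + 1).

```python
# Brute-force check of Bertrand's ballot theorem and its ties-allowed variant.
from fractions import Fraction
from itertools import permutations

def lead_probability(p, q, strict=True):
    """Fraction of counting orders in which A is strictly ahead (or never behind)."""
    orders = set(permutations("A" * p + "B" * q))   # distinct counting orders
    good = 0
    for order in orders:
        lead = 0
        ok = True
        for vote in order:
            lead += 1 if vote == "A" else -1
            if (strict and lead <= 0) or (not strict and lead < 0):
                ok = False
                break
        good += ok
    return Fraction(good, len(orders))

for p, q in [(3, 2), (4, 2), (5, 3), (6, 1)]:
    assert lead_probability(p, q) == Fraction(p - q, p + q)
    assert lead_probability(p, q, strict=False) == Fraction(p + 1 - q, p + 1)
print("agrees with (p-q)/(p+q) and (p+1-q)/(p+1) for the cases tested")
```

In the p = 3, q = 2 example worked through above this reproduces the probability 1/5.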
1043159
Professor's Cube
5x5x5 version of the Rubik's Cube The Professor's Cube (also known as the 5×5×5 Rubik's Cube and many other names, depending on manufacturer) is a 5×5×5 version of the original Rubik's Cube. It has qualities in common with both the 3×3×3 Rubik's Cube and the 4×4×4 Rubik's Revenge, and solution strategies for both can be applied. History. The Professor's Cube was invented by Udo Krell in 1981. Out of the many designs that were proposed, Udo Krell's design was the first 5×5×5 design that was manufactured and sold. Uwe Mèffert manufactured the cube and sold it in Hong Kong in 1983. Ideal Toys, who first popularized the original 3x3x3 Rubik's cube, marketed the puzzle in Germany as the "Rubik's Wahn" (German: "Rubik's Craze"). When the cube was marketed in Japan, it was marketed under the name "Professor's Cube". Mèffert reissued the cube under the name "Professor's Cube" in the 1990s. The early versions of the 5×5×5 cube sold at Barnes &amp; Noble were marketed under the name "Professor's Cube" but currently, Barnes and Noble sells cubes that are simply called "5×5 Cube." Mefferts.com used to sell a limited edition version of the 5×5×5 cube called the Professor's Cube. This version had colored tiles rather than stickers. Verdes Innovations sells a version called the V-Cube 5. Workings. The original Professor's Cube design by Udo Krell works by using an expanded 3×3×3 cube as a mantle with the center edge pieces and corners sticking out from the spherical center of identical mechanism to the 3×3×3 cube. All non-central pieces have extensions that fit into slots on the outer pieces of the 3×3×3, which keeps them from falling out of the cube while making a turn. The fixed centers have two sections (one visible, one hidden) which can turn independently. This feature is unique to the original design. The Eastsheen version of the puzzle uses a different mechanism. The fixed centers hold the centers next to the central edges in place, which in turn hold the outer edges. The non-central edges hold the corners in place, and the internal sections of the corner pieces do not reach the center of the cube. The V-Cube 5 mechanism, designed by Panagiotis Verdes, has elements in common with both. The corners reach to the center of the puzzle (like the original mechanism) and the center pieces hold the central edges in place (like the Eastsheen mechanism). The middle edges and center pieces adjacent to them make up the supporting frame and these have extensions which hold the rest of the pieces together. This allows smooth and fast rotation and created what was arguably the fastest and most durable version of the puzzle available at that time. Unlike the original 5×5×5 design, the V-Cube 5 mechanism was designed to allow speedcubing. Most current production 5×5×5 speed cubes have mechanisms based on Verdes' patent. Stability and durability. The original Professor's Cube is inherently more delicate than the 3×3×3 Rubik's Cube because of the much greater number of moving parts and pieces. Because of its fragile design, the Rubik's brand Professor's Cube is not suitable for Speedcubing. Applying excessive force to the cube when twisting it may result in broken pieces. Both the Eastsheen 5×5×5 and the V-Cube 5 are designed with different mechanisms in an attempt to remedy the fragility of the original design. Permutations. There are 98 pieces on the exterior of the cube: 8 corners, 36 edges, and 54 centers (48 movable, 6 fixed). Any permutation of the corners is possible, including odd permutations, giving 8! 
possible arrangements. Seven of the corners can be independently rotated, and the orientation of the eighth corner depends on the other seven, giving 3⁷ (or 2,187) combinations. There are 54 centers. Six of these (the center square of each face) are fixed in position. The rest consist of two sets of 24 centers. Within each set there are four centers of each color. Each set can be arranged in 24! different ways. Assuming that the four centers of each color in each set are indistinguishable, the number of permutations of each set is reduced to 24!/(24⁶) arrangements, all of which are possible. The reducing factor comes about because there are 24 (4!) ways to arrange the four pieces of a given color. This is raised to the sixth power because there are six colors. The total number of permutations of all movable centers is the product of the permutations of the two sets, 24!²/(24¹²). The 24 outer edges cannot be flipped due to the interior shape of those pieces. Corresponding outer edges are distinguishable, since the pieces are mirror images of each other. Any permutation of the outer edges is possible, including odd permutations, giving 24! arrangements. The 12 central edges can be flipped. Eleven can be flipped and arranged independently, giving 12!/2 × 2¹¹ or 12! × 2¹⁰ possibilities (an odd permutation of the corners implies an odd permutation of the central edges, and vice versa, thus the division by 2). There are 24! × 12! × 2¹⁰ possibilities for the inner and outer edges together. This gives a total number of permutations of formula_0 The full number is precisely 282 870 942 277 741 856 536 180 333 107 150 328 293 127 731 985 672 134 721 536 000 000 000 000 000 possible permutations (about 283 duodecillion on the long scale or 283 trevigintillion on the short scale). Some variations of the cube have one of the center pieces marked with a logo, which can be put into four different orientations. This increases the number of permutations by a factor of four to 1.13×10⁷⁵, although any orientation of this piece could be regarded as correct. By comparison, the number of atoms in the observable universe is estimated at 10⁸⁰. Other variations increase the difficulty by making the orientation of all center pieces visible. An example of this is shown below. Solutions. Speedcubers usually favor the Reduction method, which groups the centers into one-colored blocks and groups similar edge pieces into solid strips. This allows the cube to be quickly solved with the same methods one would use for a 3×3×3 cube, just a stretched out version. As illustrated to the right, the fixed centers, middle edges and corners can be treated as equivalent to a 3×3×3 cube. As a result, once reduction is complete the parity errors sometimes seen on the 4×4×4 cannot occur on the 5×5×5, or any cube with an odd number of edges for that matter. The Yau5 method is named after its proposer, Robert Yau. The method starts by solving the opposite centers (preferably white and yellow), then solving three cross edges (preferably white). Next, the remaining centers and last cross edge are solved. The last cross edge and the remaining unsolved edges are solved, and then it can be solved like a 3×3×3. Another frequently used strategy is to solve the edges and corners of the cube first, and the centers last. This method is referred to as the Cage method, so called because the centers appear to be in a cage after the solving of edges and corners.
The corners can be placed just as they are in any previous order of cube puzzle, and the centers are manipulated with an algorithm similar to the one used in the 4×4×4 cube. A less frequently used strategy is to solve one side and one layer first, then the 2nd, 3rd and 4th layer, and finally the last side and layer. This method is referred to as Layer-by-Layer. This resembles CFOP, a well known technique used for the 3x3 Rubik's Cube, with 2 added layers and a couple of centers. ABCube Method is a direct solve method originated by Sandra Workman in 2020. It is geared to complete beginners and non-cubers. It is similar in order of operation to the Cage Method, but differs functionally in that it is mostly visual and eliminates the standardized notation. It works on all complexity of cubes, from 2x2x2 through big cubes (nxnxn) and only utilizes two easy to remember algorithms; one four twists, the other eight twists, and it eliminates long parity algorithms. World records. The world record for fastest 5×5×5 solve is 32.52 seconds, set by Max Park of the United States on March 16, 2024, At DFW Megacomp 2024, in Grapevine, Texas . The world record for fastest average of five solves (excluding fastest and slowest solves) is 34.76 seconds, also set by Max Park of the United States on July 18, 2024, at NAC 2024, in Minneapolis, Minnesota, with the times of (39.71) 35.10 (33.55) 35.44, and 33.75 The record fastest time for solving a 5×5×5 cube blindfolded is 2 minutes, 4.41 seconds (including inspection), set by Stanley Chapel of the United States on November 10, 2023, at Virginia Championship 2023 in Richmond, Virginia. The record for mean of three solves solving a 5x5x5 cube blindfolded is 2 minutes, 27.63 seconds (including inspection), set by Stanley Chapel of the United States on December 15, 2019, with the times of 2:32.48, 2:28.80, and 2:21.62. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\frac{8! \\times 3^7 \\times 12! \\times 2^{10} \\times 24!^3}{24^{12}} \\approx 2.83 \\times 10^{74}" } ]
https://en.wikipedia.org/wiki?curid=1043159
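The permutation count derived above can be reproduced in a few lines of Python (a sketch for checking the arithmetic, not part of the article): multiply the corner, edge and center contributions, divide by the indistinguishability factor, and compare with the full figure quoted in the text.

```python
# Reproduce the 5x5x5 permutation count 8! * 3^7 * 12! * 2^10 * 24!^3 / 24^12.
from math import factorial

total = (factorial(8) * 3**7        # corners: positions and orientations
         * factorial(12) * 2**10    # central edges: positions, 11 free flips
         * factorial(24)            # outer edges
         * factorial(24)**2         # two sets of 24 movable centers
        ) // 24**12                 # (4!)**6 indistinguishable centers per set, for both sets

quoted = 282870942277741856536180333107150328293127731985672134721536 * 10**15
assert total == quoted
print(f"{total:.6e}")   # roughly 2.83e74
```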
10433995
Sanskrit verbs
Aspect of Sanskrit grammar Sanskrit has inherited from its parent, the Proto-Indo-European language, an elaborate system of verbal morphology, much of which has been preserved in Sanskrit as a whole, unlike in other kindred languages, such as Ancient Greek or Latin. Sanskrit verbs thus have an inflection system for different combinations of tense, aspect, mood, voice, number, and person. Non-finite forms such as participles are also extensively used. Some of the features of the verbal system, however, have been lost in the classical language, compared to the older Vedic Sanskrit, and in other cases, distinctions that have existed between different tenses have been blurred in the later language. Classical Sanskrit thus does not have the subjunctive or the injunctive mood, has dropped a variety of infinitive forms, and the distinctions in meaning between the imperfect, perfect and aorist forms are barely maintained and ultimately lost. Basics. Verb conjugation in Sanskrit involves the interplay of five 'dimensions', number, person, voice, mood and tense, with the following variables: Further, participles are considered part of the verbal systems although they are not verbs themselves, and as with other Sanskrit nouns, they can be declined across seven or eight cases, for three genders and three numbers. Classical Sanskrit has only one infinitive, of accusative case-form. Building blocks. Roots. The starting point for the morphological analysis of the Sanskrit verb is the root. It is conventionally indicated using the mathematical symbol √; for instance, "√bhū-" means the root "bhū-". There are about 2000 roots enumerated by the ancient grammarians, of which less than half are attested in actual use. Allowing for sorting reduplication and other anomalies, there remain somewhat over 800 roots that form the practical basis of the verbal system, as well as the larger part of the inherited nominal stems of the language. Compared to kindred Indo-European languages, Sanskrit is more readily analysable in its morphological structure, and its roots are more easily separable from accretionary elements. Stems and stem formation. Before the final endings — to denote number, person etc can be applied, additional elements may be added to the root. Whether such elements are affixed or not, the resulting component here is the stem, to which these final endings can then be added. formula_0 The following types of treatment are possible on the root to form the stem: No Treatment. The personal endings are directly affixed to the root with no prior modification, subject to any internal sandhi rules in the process. With a few exceptions, the root keeps the accent and guṇa grade in the three persons of the active, while elsewhere the termination takes on the accent and the root grade is weakened. There are around 130 roots in Sanskrit that come under this class. Sanskrit is unique among the ancient Indo-European languages to have largely preserved this system, which has largely died out in the others. Since adding endings to the root is complicated by phonological changes, the tendency right from the Proto-Indo-European stage has been to use thematic processes instead. Suffixion. A theme vowel is suffixed before any personal endings are added. In Sanskrit, this is "-a-", inherited from Proto-Indo-European "*-o-" and "*-e-". The addition of the theme vowel serves to avoid complications due to internal sandhi; the large majority of the verbs in the language are thematic. 
Sanskrit also inherits other suffixes from Proto-Indo-European: "-ya-", "-ó- / -nó-", "-nā-", and "-aya-". Of these the first and the last include the thematic vowel while the others are athematic. Infixion. Another treatment also from Proto-Indo-European is inserting an exponent within the root itself. All roots undergoing this treatment end in consonants. In weak forms, the infix is simply a nasal ("n", "ñ", "ṇ", "ṅ"), while in strong forms this expands to "-ná-" and bears the accent. Accent and gradation. During conjugation, the accent might fall either on the root vowel or on the ending. Among thematic verbs, some roots always get the accent, accompanied by a strengthening of the grade to guṇa or vṛddhi, while in others it always falls on the ending. In non-thematic cases, the position of the accent varies. The general rule for variable-accent verbs is that in the indicative the stem has the accent and the guṇa grade in the three persons of the singular active, and that in the dual and plural of the active and the whole of the middle, the accent falls on the ending and the stem is in its weak form. Reduplication. The root might be subject to reduplication, wherein a part of it is prefixed to itself in the process of forming the stem. For roots beginning in a consonant, that initial consonant, or a modified form of it, is taken, while for those beginning in a vowel, it's the very vowel. The potential modifications that might be made to the prefix consonant can be seen in some typical examples below: Augment. Roots are prefixed with an "á-" (from PIE "é-") in preterite formations (imperfect, aorist, pluperfect, conditional). The augment without exception bears the accent in these forms. When the root starts with any of the vowels "i-", "u-" or "ṛ", the vowel is subject to vṛddhi and not guṇa. Voice. Sanskrit has in the present inherited two sets of personal endings from its parent Proto-Indo-European, one for the active voice and another for the middle voice. Verbs can be conjugated in either voice, although some verbs only show one or the other. Originally the active voice suggested action carried out for someone else and the middle voice meant action carried out for oneself. By the time of Classical Sanskrit, and especially in later literature, this distinction blurred and in many cases eventually disappeared. Personal endings. Conjugational endings in Sanskrit convey person, number, and voice. Different forms of the endings are used depending on what tense stem and mood they are attached to. Verb stems or the endings themselves may be changed or obscured by sandhi. The theoretical forms of the endings are as follow: Primary endings are used with present indicative and future forms. Secondary endings are used with the imperfect, conditional, aorist, and optative. Perfect and imperative endings are used with the perfect and imperative respectively. Verb classes. Based on the treatment they undergo to form the stem, the roots of the Sanskrit language are arranged by the ancient grammarians in ten classes or "gaṇa"s, based on how they form the present stem, and named after a verb typical to each class. No discoverable grammatical principle has been found for the ordering of these classes. This can be rearranged for greater clarity into non-thematic and thematic groups as summarized below: "Seṭ" and "aniṭ" roots. Sanskrit roots may also be classified, independent of their ', into three groups, depending on whether they take the vowel ' before certain tense markers. 
Since the term used for this vowel by Sanskrit grammarians is ', these two groups are called ' (with '), ' (optional '), and ' (without "") respectively. The "i" sound in question is a phoneme "i" that appears in certain morphological circumstances for certain, lexically defined roots, regularly continuing Proto-Indo-European (PIE) laryngeals, as in "*bʰéuH·tu·m" &gt; "bháv·i·tum". Note that the PIE laryngeal (represented by an "*H" here) was a part of the PIE root; it occurs in all of its allomorphs, for example "*bʰuH·tó·s" &gt; "bhū·tá·s" ("*bʰeuH-" is reduced to "*bʰuH-" in PIE due to ablaut; the laryngeal disappears in this context, leaving its trace in the length of "ū" in Sanskrit). In Classical Sanskrit, the scope of this "i" was broadened by analogous change. In the Aṣṭādhyāyī the synchronic analysis of the phenomenon is somewhat different: the "i" sound is treated as an augment of the suffix that follows the root. Rule 7.2.35 states that "i" should be prepended to "ārdhadhātuka" suffixes beginning with a consonant other than "y"; an example of such suffix is "-tum" (the Classical Sanskrit infinitive). An example of differences between the two classes is the aorist-marker. While some of the aniṭ-roots form aorist with the "-s" suffix, seṭ-roots are suffixed by "-iṣ". Following this terminology, PIE roots ending in laryngeals are also called seṭ-roots, and all others aniṭ-roots. Conjugation. Scope. As in kindred Indo-European languages, conjugation is effected using the above building blocks across the tenses, moods, voices, persons and numbers, yielding, in Sanskrit, a huge number of combinations. Where the forms take personal endings, in other words when it complements a subject, these are called finite forms. Sanskrit also has a few subjectless, i.e., non-finite forms. In the standard scenario, the following forms are seen in Classical Sanskrit: Furthermore, Sanskrit has so-called "Secondary" conjugations: The non-finite forms are: Principal parts. It is difficult to generalize how many principal parts a Sanskrit verb possesses, since different verb form categories are used with different degrees of regularity. For the vast majority of verbs, conjugation can be made sufficiently clear with the first five of the following forms supplied: Present system. The present system includes the present tense, the imperfect, and the optative and imperative moods, as well as some of the remnant forms of the old subjunctive. Thematic classes. All thematic classes have invariant stems and share the same inflectional endings. To demonstrate, observe the conjugation of the Cl. 1 verb "√bhū- bháv-". Note that this root is gunated and holds the stress within the root syllable. Present. The present indicative takes primary endings. Imperfect. The imperfect takes the augment and secondary endings. The augment always bears the accent with no exceptions. Optative. The present optative takes the suffix "-e" and athematic secondary endings. Imperative. The imperative has its own set of special endings. Some of these forms are relics from an original subjunctive. Athematic classes. Present. The present indicative used the strong stem in the singular and the weak elsewhere. For "√kṛ-" used as example here, the weak stem final "-u-" is sometimes omitted before endings in "-v-" and "-m-". The alternate forms for class 3 (reduplicating class) are shown with "hu-". Imperfect. The imperfect uses the two stems in the same way as the present. Optative. 
The optative takes the suffix "-yā́-" in the active, and "-ī-" in the middle; the stem in front of them is always the weak one. Here the final "-u-" of the "kuru-" stem is again irregularly dropped. Imperative. The imperative uses the strong stem in all of the 1st person forms, as well as the 3rd person singular active. The 2nd person active may have no ending (class 5, class 8), "-dhi" (most of class 3,7, as well as class 1 ending in consonants), or "-hi" (class 9, class 3 in "ā", and class 1 in vowels; these classes usually ended in laryngeals in Proto-Indo-European). Perfect system. The perfect system includes only the perfect. The stem is formed with reduplication; the reduplicated vowel is usually "a", but "u" or "i" for verbs containing them. This system also produces separate "strong" and "weak" forms of the verb — the strong "guṇa" form is used with the singular active, and the weak zero-grade form with the rest. In some verbs, the 3rd and optionally 1st person are further strengthened until the root syllable becomes heavy. Most verbs ending in consonants behave as "seṭ" in the perfect tense in front of consonant endings. "√kṛ-" shown here is one of the exceptions. Aorist system. The aorist system includes aorist proper (with past indicative meaning, e.g. "abhūḥ" 'you were') and some of the forms of the ancient injunctive (used almost exclusively with "mā" in prohibitions, e.g. "mā bhūḥ" 'don't be'). The principle distinction of the two is the presence/absence of an augment – "á-" prefixed to the stem. The aorist system stem actually has three different formations: the simple aorist, the reduplicating aorist (semantically related to the causative verb), and the sibilant aorist. Root aorist. This aorist is formed by directly adding the athematic secondary endings to the root. Originally this type also had different strong and weak stems for the singular and plural, but verbs that both allow this distinction and utilize this type of aorist are exceptionally rare. From "√gam-" and "√dā-" ; the latter takes "-us" in the 3rd person plural. Known instances of weak stems from the Veda include "avṛjan" from "√vṛj-" in the plural active, "adhithās" from "√dhā-" in the singular middle, and various forms from "√kṛ-" . Middle voice forms of this class are almost nonexistent in the classical period, being suppleted by those of the sibilant classes. a-root aorist. This class is formed with a thematized zero-grade root, and takes regular thematic endings. From "√sic-" : s-aorist. This is the most productive aorist class for regular "aniṭ" verbs, made by suffixing "s" to the root. All active voice forms use the "vṛddhi" grade, and middle forms use the weakest grade that produces a heavy root syllable; "√kṛ-" and some verbs in "ā" may irregularly use zero grade in place of the latter. From "√ji-" : From "√tud-" : is-aorist. This aorist form contains the suffix "-iṣ-" and is the productive form of regular "seṭ" verbs. The strong active stem is usually strengthened until the root syllable is heavy, and the weak middle stem usually assumes the "guṇa" grade. Some verbs in "a" followed by a single consonant, such as "grah-" , do not take additional strengthening in the active. From "√pū-" : sis-aorist. This small class is characterized by a reduplicated "-siṣ-" suffix, and is only used in the active voice; the s-aorist is usually used in the middle by verbs that take this formation. From "√yā-" : sa-aorist. 
This formation is used with a small number of verbs ending in consonants which can form the cluster "kṣ" when an "-s-" is added. It takes a mixture of thematic and athematic endings. From "√diś-" : Future system. Simple future. The simple future stem is formed with the suffix "-sya-" or "-iṣya-" and the "guṇa" grade of the root. From "√kṛ-" : Periphrastic future. The periphrastic future is formed by first deriving the "agentive" noun from the root using "-tṛ", and attaching forms of the verb "as-" 'to be' as auxiliary, in the first and second persons. In the third person, the masculine form of the agentive noun stands in for all actors, masculine, feminine or neuter. From "√dā-" : The medio-passive forms are hardly ever found in the literature. Conditional. There is also a conditional, formed from the future stem as the imperfect is formed from a thematic present stem. Rarely used in Classical Sanskrit, the conditional refers to hypothetical actions. Secondary Conjugation. Sanskrit verbs are capable of a second category of conjugation wherein the root takes on a modified or extended meaning. These are: Passive. The passive is very similar in formation to the dív-class (4th) already seen above, with the primary difference that the -yá- always bears the accent. The root is in its weak form, and the middle endings are used. From "√han-" : Intensive. The intensive is formed by reduplicating the root and is conjugated like a class-2 verb. Thus for "√vid", we have "véved-", "vevid-": Participles. Participles are verbal adjectives, a form of the non-finite verb. They are derived from verb roots, but behave like adjectives. Sanskrit inherits a highly developed system of participles from Proto-Indo-European preserving some of the more archaic features of the parent language. Such a participial element found in almost all Indo-European languages is "-nt-". This can be seen in PIE "*bheront-", from "*bher-" 'bear', Sanskrit "bharan(t)-", Greek "φέρον(τ)-" ("pheron(t)-"), Latin "feren(t)-", all meaning 'bearing, carrying'. In Sanskrit, participles exist in all three voices — active, middle and passive, and in three of the tenses — present, perfect and future. While this should logically yield 3x3=9 forms, the actual number is usually higher, because potentially at least, there are three different future passive participles and two perfect active participles. In some cases it may be lower, because a verb lacks active or middle forms. The different possible forms for a couple of representative verbs ("√nī-, nayati" 1 &amp; "√dhā-, dadhāti" 3) can be seen below: Past participles. Past participles are formed directly from verbal roots for most verbs in most cases (except for verbs of the tenth "gaṇa", which form them from the present stem). They have a perfective sense, in that they refer to actions that are completed. They can freely substitute for finite verbs conjugated in the past sense. Past passive participles. Sanskrit inherits two suffixes from Proto-Indo-European used to form verbal adjectives and the past passive participle: "*-tó-" and "*-nó-". The first can be seen in the root 'to come' forming "*gʷm̥-tó-", which in Sanskrit becomes "gatá-" '(having) gone', and in Latin . The second method is less frequent but can be seen in PIE 'to split' giving "*bʰid-nó-", in Sanskrit "bhin-ná-" '(having been) split', cognate with English "bitten". In Sanskrit thus the past passive participle is formed by adding "-tá-", or "-ná-", to a root in its weakest grade when weakening is applicable (e.g. 
samprasāraṇa). For "seṭ" roots, the augment "i" is inserted before the suffix. The resulting form is an adjective and modifies a noun either expressed or implied. The past passive participle can usually be translated by the corresponding English past passive participle: When used with transitive ("sakarmaka") verbs, the standard passive meaning can be achieved; the agent, if used, is placed in the instrumental case: Note that rākṣasa is the direct object (karman) of the verbal action expressed in √han "to kill" and the agent (kartṛ) of the same action, Rāma, occurs in the instrumental case. When made from an intransitive ("akarmaka") or neuter verb, the same participle has no passive, but an indefinite past sense: Past active participles. The past participle could be extended by adding the possessive suffix "-vant-": "kṛ·tá·vant-" – 'one who has something (or things) done'. This naturally takes on the function of the active past participle. This is a linguistic innovation within the Indo-Aryan branch, and the first purely participial formation of this character appears in the Atharvaveda. Later on this formation ("-tá·vant-" or "-ná·vant-") comes to be used independently, with the copula understood, in place of an active preterite: Present participle. Unlike the past participles, the present participle is formed from the present stem of the verb, and is formed differently depending on whether the verb is "parasmaipada" or "ātmanepada". The present participle can never substitute for a finite verb. It is also inherently imperfective, indicating an action that is still in process at the time of the main verb. Present active participle. In theory, the present active participle is the addition of "-ant" to a form of the root. In practice however, this participle can simply be made by dropping the -i from the 3rd person plural in the present indicative. This gives us the masculine singular form of the participle. Thus, The weak form is "-at-" The feminine is formed as "-antī́" in some roots, and as "-atī́" in others. Present middle participle. This participle is formed by adding "-māna-" to a thematic stem and "-āná-" to an athematic stem in the weak form. Thus for "√bhū-" and "√kṛ-": Future participles. Formed from the future stem just as the present participle is formed from the present stem, the future participle describes an action that has not yet happened, but that may in the future. Future active participle. Just as in the present, it can be formed by simply dropping the "-i" of the third-person plural. Thus, The feminines are in either "-ántī" or "-atī́" although the latter is extremely rare. Future middle participle. Similarly, the middle form is obtained by adding "-māna-" to the future stem. So we have: Gerundive. The gerundive is a future passive prescriptive participle, indicating that the word modified should or ought to be the object of the action of the participle. This is made by affixing "-ya-", "-távya-/-tavyá-", "-anī́ya-" to different stem forms. Thus for "√bhū-" and "√kṛ-": The accent on "-tavya-" may fall on either syllable. Perfect participle. The perfect participle is a past active participle, but is very rarely used in classical Sanskrit. This is formed by adding "-vā́ṅs" in the active and "-āná" in the middle voice to the weak form of the perfect stem, as seen, for example in the third person active. The feminine forms are "-uṣī́" and "-ānā́". Thus, Aorist participle. The aorist participle used in Vedic was lost in Classical Sanskrit. Other non-finite forms. 
Infinitive. The infinitive originates as the accusative form of an old verbal noun. The ending "-tum", similar to the Latin supine, is added to the root which bears the accent with its vowel guṇated. An '-i-' intervenes just like in other conjugation forms as needed. Gerund. There exists a non-finite form in Sanskrit termed "gerund" or "absolutive" which is analysed differently from the gerund in other Proto-Indo-European languages. It has the sense of 'having done' or whatever the verb may be. It is formed using "-tvā́" or "-ya", with the former normally used on a bare root whereas the latter applied to verbs with prefixes added to the root. The "-tvā́" formation is similar to the past passive participle formed from "-tá" and correspondingly bears the accent. The second form can be normally derived by suffixing the root directly, with its vowel bearing the accent whilst in the weak form. A root ending in a short vowel gets an intervening -t-. Comprehensive example. The following table is a partial listing of the major verbal forms that can be generated from a single root. Not all roots can take all forms; some roots are often confined to particular stems. The verbal forms listed here are all in the third person singular, and they can all be conjugated in three persons and three numbers. When there are two forms in one cell of this table, the first one is active, the second one middle. Taking into account the fact that the participial forms each decline in seven cases in three numbers across three genders, and the fact that the verbs each conjugate in three persons in three numbers, the primary, causative, and desiderative stems for this root when counted together have over a thousand forms. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Glossary. &lt;templatestyles src="Reflist/styles.css" /&gt; Traditional glossary and notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
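The layering summarized by formula_0 (root → stem → stem + ending = word) can be illustrated mechanically. The following Python sketch is purely illustrative: it assembles the thematic present active of a Class 1 stem such as bháva- by concatenating the stem with the usual primary active endings, handling only the 1st-person lengthening of the theme vowel and ignoring accent, diacritics and sandhi (so the output shows schematic pre-sandhi shapes such as bhavaamas for bhavāmaḥ).

```python
# Illustrative only: thematic present-active forms built as stem + ending.
# "aa" stands in for long ā; accent and sandhi (e.g. final -s -> visarga)
# are ignored, so these are schematic pre-sandhi shapes.

PRIMARY_ACTIVE_ENDINGS = {
    ("1", "sg"): "mi",  ("2", "sg"): "si",   ("3", "sg"): "ti",
    ("1", "du"): "vas", ("2", "du"): "thas", ("3", "du"): "tas",
    ("1", "pl"): "mas", ("2", "pl"): "tha",  ("3", "pl"): "nti",
}

def thematic_present_active(stem: str, person: str, number: str) -> str:
    """Join a thematic stem in -a with a primary active ending."""
    ending = PRIMARY_ACTIVE_ENDINGS[(person, number)]
    if ending[0] in "mv":            # 1st person: theme vowel lengthens to -ā-
        stem = stem[:-1] + "aa"
    return stem + ending

if __name__ == "__main__":
    stem = "bhava"                   # Class 1 present stem of the root bhū-
    for number in ("sg", "du", "pl"):
        for person in ("1", "2", "3"):
            print(person, number, thematic_present_active(stem, person, number))
```

Running the sketch prints, among others, bhavati (3 sg.) and bhavanti (3 pl.), matching the root + theme vowel + ending analysis given above.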
[ { "math_id": 0, "text": "\\underbrace{\\underbrace{\\mathrm{root+suffix}}_{\\mathrm{stem}} + \\mathrm{ending}}_{\\mathrm{word}}" } ]
https://en.wikipedia.org/wiki?curid=10433995
10434600
P-wave modulus
There are two kinds of seismic body waves in solids: "pressure waves" (P-waves) and "shear waves". In linear elasticity, the P-wave modulus formula_0, also known as the longitudinal modulus or the constrained modulus, is one of the elastic moduli available to describe isotropic homogeneous materials. It is defined as the ratio of axial stress to axial strain in a uniaxial strain state, which occurs when expansion in the transverse direction is prevented by the inertia of neighboring material, such as in an earthquake or an underwater seismic blast: formula_1 where all the other strains formula_2 are zero. This is equivalent to stating that formula_3 where "V"P is the velocity of a P-wave and "ρ" is the density of the material through which the wave is propagating.
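As a quick numerical illustration of formula_3 (M = ρV_P²), the following sketch computes the P-wave modulus from a measured P-wave velocity and density, and, for comparison, from the bulk modulus K and shear modulus G via the standard isotropic relation M = K + 4G/3. The sample values are arbitrary, roughly representative of a crustal rock.

```python
# Minimal sketch: P-wave (constrained) modulus M.
# From velocity and density: M = rho * Vp**2.
# From bulk (K) and shear (G) moduli: M = K + 4G/3 (isotropic, homogeneous).

def p_wave_modulus_from_velocity(rho: float, vp: float) -> float:
    """M in Pa, with rho in kg/m^3 and Vp in m/s."""
    return rho * vp ** 2

def p_wave_modulus_from_k_g(k: float, g: float) -> float:
    """M in Pa, with K and G in Pa."""
    return k + 4.0 * g / 3.0

if __name__ == "__main__":
    rho, vp = 2700.0, 6000.0        # kg/m^3, m/s (arbitrary sample values)
    print(f"M from rho, Vp: {p_wave_modulus_from_velocity(rho, vp) / 1e9:.1f} GPa")

    k, g = 50e9, 30e9               # Pa (arbitrary sample values)
    print(f"M from K, G:    {p_wave_modulus_from_k_g(k, g) / 1e9:.1f} GPa")
```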
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "\\sigma_{zz} = M \\epsilon_{zz}" }, { "math_id": 2, "text": "\\epsilon_{**}" }, { "math_id": 3, "text": "M_{x} = \\rho_{x} V_\\mathrm{P}^2 ," } ]
https://en.wikipedia.org/wiki?curid=10434600
10435040
Scott's Pi
Scott's pi (named after William A Scott) is a statistic for measuring inter-rater reliability for nominal data in communication studies. Textual entities are annotated with categories by different annotators, and various measures are used to assess the extent of agreement between the annotators, one of which is Scott's pi. Since automatically annotating text is a popular problem in natural language processing, and the goal is to get the computer program that is being developed to agree with the humans in the annotations it creates, assessing the extent to which humans agree with each other is important for establishing a reasonable upper limit on computer performance. Introduction. Scott's pi is similar to Cohen's kappa in that they both improve on simple observed agreement by factoring in the extent of agreement that might be expected by chance. However, in each statistic, the expected agreement is calculated slightly differently. Scott's pi compares to the baseline of the annotators being not only independent but also having the same distribution of responses; Cohen's kappa compares to a baseline in which the annotators are assumed to be independent but to have their own, different distributions of responses. Thus, Scott's pi measures disagreements between the annotators relative to the level of agreement expected due to pure random chance if the annotators were independent and identically distributed, whereas Cohen's kappa measures disagreements between the annotators that are above and beyond any systematic, average disagreement that the annotators might have. Indeed, Cohen's kappa explicitly ignores all systematic, average disagreement between the annotators prior to comparing the annotators. So Cohen's kappa assesses only the level of randomly varying disagreements between the annotators, not systematic, average disagreements. Scott's pi is extended to more than two annotators by Fleiss' kappa. The equation for Scott's pi, as in Cohen's kappa, is: formula_0 However, Pr(e) is calculated using squared "joint proportions" which are squared arithmetic means of the marginal proportions (whereas Cohen's uses squared geometric means of them). Worked example. Confusion matrix for two annotators, three categories {Yes, No, Maybe} and 45 items rated (90 ratings for 2 annotators): To calculate the expected agreement, sum marginals across annotators and divide by the total number of ratings to obtain joint proportions. Square and total these: To calculate observed agreement, divide the number of items on which annotators agreed by the total number of items. In this case, formula_1 Given that Pr(e) = 0.369, Scott's pi is then formula_2
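A minimal sketch of the computation in the worked example is given below. The 3×3 confusion matrix used here (cells 1 through 9) is an assumption chosen to be consistent with the quoted figures: 45 items, a diagonal sum of 15 (so Pr(a) = 0.333) and squared joint proportions totalling 0.369. The function itself simply implements the definition above; the small difference from the quoted −0.057 comes from rounding Pr(a) and Pr(e) before dividing.

```python
# Scott's pi from a square confusion matrix of two annotators' labels.
# The matrix below is assumed (cells 1..9); it matches the worked example's
# quoted totals: 45 items, Pr(a) = 0.333, Pr(e) = 0.369.

def scotts_pi(matrix):
    n_items = sum(sum(row) for row in matrix)
    n_ratings = 2 * n_items                      # two annotators

    # Observed agreement: proportion of items on the diagonal.
    pr_a = sum(matrix[i][i] for i in range(len(matrix))) / n_items

    # Expected agreement: sum of squared joint proportions, where each joint
    # proportion pools the row and column marginals of one category.
    pr_e = 0.0
    for i in range(len(matrix)):
        row_total = sum(matrix[i])
        col_total = sum(row[i] for row in matrix)
        pr_e += ((row_total + col_total) / n_ratings) ** 2

    return (pr_a - pr_e) / (1 - pr_e)

if __name__ == "__main__":
    # Rows: annotator 1 (Yes, No, Maybe); columns: annotator 2 (Yes, No, Maybe).
    confusion = [
        [1, 2, 3],
        [4, 5, 6],
        [7, 8, 9],
    ]
    print(f"Scott's pi = {scotts_pi(confusion):.3f}")   # about -0.056
```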
[ { "math_id": 0, "text": "\\pi = \\frac{\\Pr(a) - \\Pr(e)}{1 - \\Pr(e)}," }, { "math_id": 1, "text": "\\Pr(a) = \\frac{1 + 5 + 9}{45} = 0.333." }, { "math_id": 2, "text": "\\pi = \\frac{0.333 - 0.369}{1 - 0.369} = -0.057 ." } ]
https://en.wikipedia.org/wiki?curid=10435040
10438174
Precoding
Precoding is a generalization of beamforming to support multi-stream (or multi-layer) transmission in multi-antenna wireless communications. In conventional single-stream beamforming, the same signal is emitted from each of the transmit antennas with appropriate weighting (phase and gain) such that the signal power is maximized at the receiver output. When the receiver has multiple antennas, single-stream beamforming cannot simultaneously maximize the signal level at all of the receive antennas. In order to maximize the throughput in multiple receive antenna systems, multi-stream transmission is generally required. In point-to-point systems, precoding means that multiple data streams are emitted from the transmit antennas with independent and appropriate weightings such that the link throughput is maximized at the receiver output. In multi-user MIMO, the data streams are intended for different users (known as SDMA) and some measure of the total throughput (e.g., the sum performance or max-min fairness) is maximized. In point-to-point systems, some of the benefits of precoding can be realized without requiring channel state information at the transmitter, while such information is essential to handle the inter-user interference in multi-user systems. Precoding in the downlink of cellular networks, known as network MIMO or coordinated multipoint (CoMP), is a generalized form of multi-user MIMO that can be analyzed by the same mathematical techniques. Precoding in Simple Words. Precoding is a technique that exploits transmit diversity by weighting the information stream, i.e. the transmitter sends the coded information to the receiver to achieve pre-knowledge of the channel. The receiver is a simple detector, such as a matched filter, and does not have to know the channel state information. This technique will reduce the corrupted effect of the communication channel. For example, you are sending the information formula_0, and it will pass through the channel formula_1, and add Gaussian noise formula_2. The received signal at the receiver front-end will be formula_3; The receiver will have to know the information about formula_1 and formula_2. It will suppress the effect of formula_2 by increasing SNR, but what about formula_1? It needs information about the channel, formula_1, and this will increase the complexity. The receiver (mobile units) has to be simple for many reasons like cost or size of mobile unit. So, the transmitter (the base station) will do the hard work and predict the channel. Let us call the predicted channel formula_4 and for a system with precoder the information will be coded: formula_5. The received signal will be formula_6. If your prediction is perfect, formula_7 and formula_8 and it turns out to be the detection problem in Gaussian channels which is simple. To prevent a potential misunderstanding here, precoding does not cancel out the impact of the channel, but it aligns the vector containing the transmit symbols (i.e. transmit vector) with the eigenvector(s) of the channel. In simple terms, it transforms the transmit symbols' vector in such a way that the vector reaches the receiver in the strongest form that is possible in the given channel. Why do they call it "coding"? It is a preprocessing technique that performs transmit diversity and it is similar to equalization, but the main difference is that you have to optimize the precoder with a decoder. 
Channel equalization aims to minimize channel errors, but the precoder aims to minimize the error in the receiver output. Precoding for Point-to-Point MIMO Systems. In point-to-point multiple-input multiple-output (MIMO) systems, a transmitter equipped with multiple antennas communicates with a receiver that has multiple antennas. Most classic precoding results assume narrowband, slowly fading channels, meaning that the channel for a certain period of time can be described by a single channel matrix which does not change faster. In practice, such channels can be achieved, for example, through OFDM. The precoding strategy that maximizes the throughput, called channel capacity, depends on the channel state information available in the system. Statistical channel state information. If the receiver knows the channel matrix and the transmitter has statistical information, eigenbeamforming is known to achieve the MIMO channel capacity. In this approach, the transmitter emits multiple streams in eigendirections of the channel covariance matrix. Full channel state information. If the channel matrix is completely known, singular value decomposition (SVD) precoding is known to achieve the MIMO channel capacity. In this approach, the channel matrix is diagonalized by taking an SVD and removing the two unitary matrices through pre- and post-multiplication at the transmitter and receiver, respectively. Then, one data stream per singular value can be transmitted (with appropriate power loading) without creating any interference whatsoever. Precoding for Multi-user MIMO Systems. In multi-user MIMO, a multi-antenna transmitter communicates simultaneously with multiple user's receiver (each having one or multiple antennas). This is known as space-division multiple access (SDMA). From an implementation perspective, precoding algorithms for SDMA systems can be sub-divided into linear and nonlinear precoding types. The capacity achieving algorithms are nonlinear, but linear precoding approaches usually achieve reasonable performance with much lower complexity. Linear precoding strategies include maximum ratio transmission (MRT), zero-forcing (ZF) precoding, and transmit Wiener precoding. There are also precoding strategies tailored for low-rate feedback of channel state information, for example random beamforming. Nonlinear precoding is designed based on the concept of dirty paper coding (DPC), which shows that any known interference at the transmitter can be subtracted without the penalty of radio resources if the optimal precoding scheme can be applied on the transmit signal. While performance maximization has a clear interpretation in point-to-point MIMO, a multi-user system cannot simultaneously maximize the performance for all users. This can be viewed as a multi-objective optimization problem where each objective corresponds to maximization of the capacity of one of the users. The usual way to simplify this problem is to select a system utility function; for example, the weighted sum capacity where the weights correspond to the system's subjective user priorities. Furthermore, there might be more users than data streams, requiring a scheduling algorithm to decide which users to serve at a given time instant. Linear precoding with full channel state information. This suboptimal approach cannot achieve the weighted sum rate, but it can still maximize the weighted sum performance (or some other metric of achievable rates under linear precoding). 
The optimal linear precoding does not have any closed-form expression, but it takes the form of a weighted MMSE precoding for single-antenna receivers. The precoding weights for a given user are selected to maximize a ratio between the signal gain at this user and the interference generated at other users (with some weights) plus noise. Thus, precoding can be interpreted as finding the optimal balance between achieving strong signal gain and limiting inter-user interference. Finding the optimal weighted MMSE precoding is difficult, leading to approximate approaches where the weights are selected heuristically. A common approach is to concentrate on either the numerator or the denominator of the mentioned ratio; that is, maximum ratio transmission (MRT) and zero-forcing (ZF) precoding. MRT only maximizes the signal gain at the intended user. MRT is close-to-optimal in noise-limited systems, where the inter-user interference is negligible compared to the noise. ZF precoding aims at nulling the inter-user interference, at the expense of losing some signal gain. ZF precoding can achieve a performance close to the sum capacity when the number of users is large or the system is interference-limited (i.e., the noise is weak compared to the interference). A balance between MRT and ZF is obtained by the so-called regularized zero-forcing (also known as signal-to-leakage-and-interference ratio (SLNR) beamforming and transmit Wiener filtering). All of these heuristic approaches can also be applied to receivers that have multiple antennas. For the multiuser MIMO setup, another approach has been to reformulate the weighted sum rate optimization problem as a weighted sum MSE problem with additional optimization of the MSE weights for each symbol; this approach, however, still does not solve the problem optimally (i.e., its solution is suboptimal). Duality approaches have likewise been considered to obtain sub-optimal solutions to the weighted sum rate optimization. Note that the optimal linear precoding can be computed using monotonic optimization algorithms, but the computational complexity scales exponentially fast with the number of users. These algorithms are therefore only useful for benchmarking in small systems. Linear precoding with limited channel state information. In practice, the channel state information is limited at the transmitter due to estimation errors and quantization. Inaccurate channel knowledge may result in significant loss of system throughput, as the interference between the multiplexed streams cannot be completely controlled. In closed-loop systems, the feedback capabilities decide which precoding strategies are feasible. Each receiver can either feed back a quantized version of its complete channel knowledge or focus on certain critical performance indicators (e.g., the channel gain). If the complete channel knowledge is fed back with good accuracy, then one can use strategies designed for having full channel knowledge with minor performance degradation. Zero-forcing precoding may even achieve the full multiplexing gain, but only provided that the accuracy of the channel feedback increases linearly with the signal-to-noise ratio (in dB). Quantization and feedback of channel state information are based on vector quantization, and codebooks based on Grassmannian line packing have shown good performance. Other precoding strategies have been developed for the case with very low channel feedback rates. 
Random beamforming (or opportunistic beamforming) was proposed as a simple way of achieving good performance that scales like the sum capacity when the number of receivers is large. In this suboptimal strategy, a set of beamforming directions is selected randomly and users feed back a few bits to tell the transmitter which beam gives the best performance and what rate they can support using it. When the number of users is large, it is likely that each random beamforming weight will provide good performance for some user. In spatially correlated environments, the long-term channel statistics can be combined with low-rate feedback to perform multi-user precoding. As spatially correlated statistics contain much directional information, it is only necessary for users to feed back their current channel gain to achieve reasonable channel knowledge. As the beamforming weights are selected from the statistics, and not randomly, this approach outperforms random beamforming under strong spatial correlation. In multiuser MIMO systems where the number of users is higher than the number of transmit antennas, multiuser diversity can be achieved by performing user scheduling before applying zero-forcing beamforming. Multiuser diversity is a form of selection diversity among users: the base station can schedule its transmission to those users with favorable channel fading conditions to improve the system throughput. In order to achieve multiuser diversity and apply zero-forcing precoding, the CSI of all users is required at the base station. However, the amount of overall feedback information increases with the number of users. Therefore, it is important to perform a user selection at the receiver to determine which users feed back their quantized CSI to the transmitter, based on a pre-defined threshold. DPC or DPC-like nonlinear precoding. Dirty paper coding is a coding technique that pre-cancels known interference without power penalty. Only the transmitter needs to know this interference, but full channel state information is required everywhere to achieve the weighted sum capacity. This category includes Costa precoding, Tomlinson-Harashima precoding and the vector perturbation technique. Mathematical Description. Description of Point-to-Point MIMO. The standard narrowband, slowly fading channel model for point-to-point (single-user) MIMO communication is described in the page on MIMO communication. Description of Multi-user MIMO. Consider a downlink multi-user MIMO system where a base station with formula_9 transmit antennas serves formula_10 single-antenna users. The channel to user formula_11 is described by the formula_12 vector formula_13 of channel coefficients, and its formula_14th element describes the channel response between the formula_14th transmit antenna and the receive antenna. The input-output relationship can be described as formula_15 where formula_16 is the formula_12 transmitted vector signal, formula_17 is the received signal, and formula_18 is the zero-mean unit-variance noise. Under linear precoding, the transmitted vector signal is formula_19 where formula_20 is the (normalized) data symbol and formula_21 is the formula_12 linear precoding vector. The signal-to-interference-and-noise ratio (SINR) at user formula_11 becomes formula_22 where formula_23 is the noise variance for the channel to user formula_11, and the corresponding achievable information rate is formula_24 bits per channel use. The transmission is limited by power constraints. 
This can, for example, be a total power constraint formula_25 where formula_26 is the power limit. A common performance metric in multi-user systems is the weighted sum rate formula_27 for some positive weights formula_28 that represent the user priority. The weighted sum rate is maximized by weighted MMSE precoding that selects formula_29 for some positive coefficients formula_30 (related to the user weights) that satisfy formula_31 and where formula_32 is the optimal power allocation. The suboptimal MRT approach removes the channel inversion and only selects formula_33 while the suboptimal ZF precoding makes sure that formula_34 for all i ≠ k, so that the interference can be removed from the SINR expression: formula_35 Uplink-downlink duality. For comparison purposes, it is instructive to compare the downlink results with the corresponding uplink MIMO channel, where the same single-antenna users transmit to the same base station, which has formula_9 receive antennas. The input-output relationship can be described as formula_36 where formula_37 is the transmitted symbol for user formula_11, formula_38 is the transmit power for this symbol, formula_39 and formula_40 are the formula_12 vectors of received signals and noise respectively, and formula_13 is the formula_12 vector of channel coefficients. If the base station uses linear receive filters to combine the received signals on the formula_9 antennas, the SINR for the data stream from user formula_11 becomes formula_41 where formula_42 is the unit-norm receive filter for this user. Compared with the downlink case, the only difference in the SINR expressions is that the indices are switched in the interference term. Remarkably, the optimal receive filters are the same as the weighted MMSE precoding vectors, up to a scaling factor: formula_43 Observe that the coefficients formula_30 that were used in the weighted MMSE precoding are not exactly the optimal power coefficients in the uplink (that maximize the weighted sum rate), except under certain conditions. This important relationship between downlink precoding and uplink receive filtering is known as the uplink-downlink duality. As the downlink precoding problem usually is more difficult to solve, it is often useful to first solve the corresponding uplink problem. Limited feedback precoding. The precoding strategies described above were based on having perfect channel state information at the transmitter. However, in real systems, receivers can only feed back quantized information that is described by a limited number of bits. If the same precoding strategies are applied, but now based on inaccurate channel information, additional interference appears. This is an example of limited feedback precoding. The received signal in multi-user MIMO with limited feedback precoding is mathematically described as formula_44 In this case, the beamforming vectors are distorted as formula_45, where formula_21 is the optimal vector and formula_46 is the error vector caused by inaccurate channel state information. The received signal can be rewritten as formula_47 where formula_48 is the additional interference at user formula_11 due to the limited feedback precoding. To reduce this interference, higher accuracy in the channel information feedback is required, which in turn reduces the throughput in the uplink. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
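To make the MRT/ZF comparison concrete, here is a small NumPy sketch for the multi-user MISO downlink described above: it builds MRT and ZF precoding vectors from one random channel realization and evaluates each user's SINR and the resulting sum rate. The i.i.d. Rayleigh channel, the equal power split P/K per user and the unit noise variance are arbitrary assumptions made for the illustration, not part of the definitions above.

```python
import numpy as np

# Toy multi-user MISO downlink: N transmit antennas, K single-antenna users.
# Assumptions: i.i.d. Rayleigh fading, equal per-user power P/K, unit noise.

rng = np.random.default_rng(0)
N, K, P, noise_var = 4, 3, 10.0, 1.0

# Row k of H is h_k^H, the (conjugate-transposed) channel of user k.
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

def normalize_columns(W):
    return W / np.linalg.norm(W, axis=0, keepdims=True)

# MRT: w_k proportional to h_k (maximize the intended user's signal gain).
W_mrt = normalize_columns(H.conj().T)

# ZF: columns of H^H (H H^H)^{-1}, normalized (null inter-user interference).
W_zf = normalize_columns(H.conj().T @ np.linalg.inv(H @ H.conj().T))

def sinr(H, W, per_user_power, noise_var):
    G = per_user_power * np.abs(H @ W) ** 2      # G[k, i] = p |h_k^H w_i|^2
    signal = np.diag(G)
    interference = G.sum(axis=1) - signal
    return signal / (noise_var + interference)

for name, W in [("MRT", W_mrt), ("ZF ", W_zf)]:
    s = sinr(H, W, P / K, noise_var)
    print(f"{name}: SINR per user = {np.round(s, 2)}, "
          f"sum rate = {np.log2(1 + s).sum():.2f} bit/channel use")
```

At the fairly high transmit power chosen here the setup is interference-limited, so ZF typically yields the higher sum rate; at low power the ranking tends to reverse, in line with the discussion above.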
[ { "math_id": 0, "text": "s" }, { "math_id": 1, "text": "h" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "r = sh + n" }, { "math_id": 4, "text": "h_{\\text{est}}" }, { "math_id": 5, "text": "{s \\over h_{\\text{est}}}" }, { "math_id": 6, "text": "r = \\left(\\frac{h}{h_{\\text{est}}}\\right) s + n" }, { "math_id": 7, "text": "h_{\\text{est}} = h" }, { "math_id": 8, "text": "r = s + n" }, { "math_id": 9, "text": "N" }, { "math_id": 10, "text": "K" }, { "math_id": 11, "text": "k" }, { "math_id": 12, "text": "N \\times 1" }, { "math_id": 13, "text": "\\mathbf{h}_k" }, { "math_id": 14, "text": "i" }, { "math_id": 15, "text": "y_k = \\mathbf{h}_k^H \\mathbf{x}+n_k, \\quad k=1,2, \\ldots, K" }, { "math_id": 16, "text": "\\mathbf{x}" }, { "math_id": 17, "text": "y_k" }, { "math_id": 18, "text": "n_k" }, { "math_id": 19, "text": "\\mathbf{x} = \\sum_{i=1}^K \\mathbf{w}_i s_i," }, { "math_id": 20, "text": "s_i" }, { "math_id": 21, "text": "\\mathbf{w}_i" }, { "math_id": 22, "text": "\\textrm{SINR}_k = \\frac{|\\mathbf{h}_k^H\\mathbf{w}_k|^2}{\\sigma_k^2+\\sum_{i \\neq k} |\\mathbf{h}_k^H\\mathbf{w}_i|^2}" }, { "math_id": 23, "text": "\\sigma_k^2" }, { "math_id": 24, "text": "\\log_2(1+\\textrm{SINR}_k)" }, { "math_id": 25, "text": "\\sum_{i=1}^K \\|\\mathbf{w}_i\\|^2 \\leq P" }, { "math_id": 26, "text": "P" }, { "math_id": 27, "text": "\\underset{\\{\\mathbf{w}_k\\}:\\sum_i \\|\\mathbf{w}_i\\|^2 \\leq P}{\\mathrm{maximize}} \\sum_{k=1}^K a_k \\log_2(1+\\textrm{SINR}_k)" }, { "math_id": 28, "text": "a_k" }, { "math_id": 29, "text": "\\mathbf{w}^{\\textrm{W-MMSE}}_k = \\sqrt{p_k} \\frac{( \\mathbf{I} + \\sum_{i \\neq k} q_i \\mathbf{h}_i \\mathbf{h}_i^H )^{-1} \\mathbf{h}_k}{\\|( \\mathbf{I} + \\sum_{i \\neq k} q_i \\mathbf{h}_i \\mathbf{h}_i^H )^{-1} \\mathbf{h}_k\\|} " }, { "math_id": 30, "text": "q_1,\\ldots,q_K" }, { "math_id": 31, "text": "\\sum_{i=1}^K q_i = P" }, { "math_id": 32, "text": "p_i" }, { "math_id": 33, "text": "\\mathbf{w}^{\\mathrm{MRT}}_k = \\sqrt{p_k} \\frac{\\mathbf{h}_k}{\\|\\mathbf{h}_k\\|}, " }, { "math_id": 34, "text": "\\mathbf{h}_i^H\\mathbf{w}^{\\mathrm{ZF}}_k=0" }, { "math_id": 35, "text": "\\textrm{SINR}^{\\mathrm{ZF}}_k = \\frac{|\\mathbf{h}_k^H\\mathbf{w}_k^{\\mathrm{ZF}}|^2}{\\sigma_k^2}." }, { "math_id": 36, "text": "\\mathbf{y} = \\sum_{k=1}^{K} \\mathbf{h}_k \\sqrt{q_k} s_k + \\mathbf{n}" }, { "math_id": 37, "text": "s_k" }, { "math_id": 38, "text": "q_k" }, { "math_id": 39, "text": "\\mathbf{y}" }, { "math_id": 40, "text": "\\mathbf{n}" }, { "math_id": 41, "text": "\\textrm{SINR}^{\\mathrm{uplink}}_k = \\frac{q_k|\\mathbf{h}_k^H\\mathbf{v}_k|^2}{\\sigma_k^2+\\sum_{i \\neq k} q_i |\\mathbf{h}_i^H\\mathbf{v}_k|^2}" }, { "math_id": 42, "text": "\\mathbf{v}_k" }, { "math_id": 43, "text": "\\mathbf{v}^{\\textrm{MMSE}}_k = \\frac{(\\sigma_k^2 \\mathbf{I} + \\sum_{i \\neq k} q_i \\mathbf{h}_i \\mathbf{h}_i^H )^{-1} \\mathbf{h}_k}{\\|(\\sigma_k^2 \\mathbf{I} + \\sum_{i \\neq k} q_i \\mathbf{h}_i \\mathbf{h}_i^H )^{-1} \\mathbf{h}_k\\|} " }, { "math_id": 44, "text": "y_k = \\mathbf{h}_k^H \\sum_{i=1}^K \\hat{\\mathbf{w}}_i s_i +n_k, \\quad k=1,2, \\ldots, K." }, { "math_id": 45, "text": "\\hat{\\mathbf{w}}_i = \\mathbf{w}_i + \\mathbf{e}_i" }, { "math_id": 46, "text": "\\mathbf{e}_i" }, { "math_id": 47, "text": "y_k = \\mathbf{h}_k^H \\sum_{i=1}^K \\mathbf{w}_i s_i + \\mathbf{h}_k^H \\sum_{i=1}^K \\mathbf{e}_i s_i+ n_k, \\quad k=1,2, \\ldots, K" }, { "math_id": 48, "text": "\\mathbf{h}_k^H \\sum_{i \\neq k} \\mathbf{e}_i s_i" } ]
https://en.wikipedia.org/wiki?curid=10438174
1043867
Energy–maneuverability theory
Model of aircraft performance Energy–maneuverability theory is a model of aircraft performance. It was developed by Col. John Boyd, a fighter pilot, and Thomas P. Christie, a mathematician with the United States Air Force, and is useful in describing an aircraft's performance as the total of kinetic and potential energies or aircraft specific energy. It relates the thrust, weight, aerodynamic drag, wing area, and other flight characteristics of an aircraft into a quantitative model. This enables the combat capabilities of various aircraft or prospective design trade-offs to be predicted and compared. Formula. All of these aspects of airplane performance are compressed into a single value by the following formula: formula_0 History. John Boyd, a U.S. jet fighter pilot in the Korean War, began developing the theory in the early 1960s. He teamed with mathematician Thomas Christie at Eglin Air Force Base to use the base's high-speed computer to compare the performance envelopes of U.S. and Soviet aircraft from the Korean and Vietnam Wars. They completed a two-volume report on their studies in 1964. Energy Maneuverability came to be accepted within the U.S. Air Force and brought about improvements in the requirements for the F-15 Eagle and later the F-16 Fighting Falcon fighters. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
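As a quick numerical illustration of the formula, the sketch below evaluates the specific excess power P_s = V(T - D)/W for a few made-up flight conditions; all numbers are arbitrary and serve only to show how the sign and size of P_s summarize the aircraft's ability to gain or shed energy.

```python
# Specific excess power P_s = V * (T - D) / W.
# With V in m/s and thrust, drag and weight as forces in newtons,
# P_s comes out in m/s (rate of change of energy height). Values are made up.

def specific_excess_power(v: float, thrust: float, drag: float, weight: float) -> float:
    return v * (thrust - drag) / weight

if __name__ == "__main__":
    cases = [
        ("high thrust, low drag ", 250.0, 110e3,  40e3, 180e3),
        ("hard sustained turn   ", 200.0, 110e3, 110e3, 180e3),
        ("high drag, low thrust ", 150.0,  60e3,  90e3, 180e3),
    ]
    for label, v, t, d, w in cases:
        print(f"{label} P_s = {specific_excess_power(v, t, d, w):+7.1f} m/s")
```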
[ { "math_id": 0, "text": "\n \\begin{array}{rcl}\n P_S & = & V \\left ( \\frac{T-D}{W} \\right ) \\\\\n \\\\\n V & = & \\text{Speed} \\\\\n T & = & \\text{Thrust} \\\\\n D & = & \\text{Drag} \\\\\n W & = & \\text{Weight}\n \\end{array}\n" } ]
https://en.wikipedia.org/wiki?curid=1043867
1043937
Scientific pitch notation
Musical notation system to describe pitch and relative frequency Scientific pitch notation (SPN), also known as American standard pitch notation (ASPN) and international pitch notation (IPN), is a method of specifying musical pitch by combining a musical note name (with accidental if needed) and a number identifying the pitch's octave. Although scientific pitch notation was originally designed as a companion to scientific pitch (see below), the two are not synonymous. Scientific pitch is a pitch standard—a system that defines the specific frequencies of particular pitches (see below). Scientific pitch notation concerns only how pitch names are notated, that is, how they are designated in printed and written text, and does not inherently specify actual frequencies. Thus, the use of scientific pitch notation to distinguish octaves does not depend on the pitch standard used. Nomenclature. The notation makes use of the traditional tone names (A to G) which are followed by numbers showing which octave they are part of. For standard A440 pitch equal temperament, the system begins at a frequency of 16.35160 Hz, which is assigned the value C0. The octave 0 of the scientific pitch notation is traditionally called the sub-contra octave, and the tone marked C0 in SPN is written as ",C" or "C," or "CCC" in traditional systems, such as Helmholtz notation. Octave 0 of SPN marks the low end of what humans can actually perceive, with the average person being able to hear frequencies no lower than 20 Hz as pitches. The octave number increases by 1 upon an ascension from B to C. Thus, "A0" refers to the first A "above" C0 and middle C (the one-line octave's C or simply c′) is denoted as "C4" in SPN. For example, C4 is one note above B3, and A5 is one note above G5. The octave number is tied to the alphabetic character used to describe the pitch, with the division between note letters ‘B’ and ‘C’, thus: * "B3" and all of its possible variants (B, B♭, B, B♯, B) would properly be designated as being in octave "3". * "C4" and all of its possible variants (C, C♭, C, C♯, C) would properly be designated as being in octave "4". * In equal temperament "C♭4" is same frequency as "B3". Use. Scientific pitch notation is often used to specify the range of an instrument. It provides an unambiguous means of identifying a note in terms of textual notation rather than frequency, while at the same time avoiding the transposition conventions that are used in writing the music for instruments such as the clarinet and guitar. It is also easily translated into staff notation, as needed. In describing musical pitches, nominally enharmonic spellings can give rise to anomalies where, for example in Pythagorean intonation C is a lower frequency than B3; but such paradoxes usually do not arise in a scientific context. Scientific pitch notation avoids possible confusion between various derivatives of Helmholtz notation which use similar symbols to refer to different notes. For example, "C" in Helmholtz's original notation refers to the C two octaves below middle C, whereas "C" in ABC Notation refers to middle C itself. With scientific pitch notation, middle C is "always" C4, and C4 is never any note but middle C. This notation system also avoids the "fussiness" of having to visually distinguish between four and five primes, as well as the typographic issues involved in producing acceptable subscripts or substitutes for them. 
C7 is much easier to quickly distinguish visually from C8 than is, for example, c′′′′ from c′′′′′, and the use of simple integers (e.g. C7 and C8) makes subscripts unnecessary altogether. Although pitch notation is intended to describe sounds audibly perceptible as pitches, it can also be used to specify the frequency of non-pitch phenomena. Notes below E0 or higher than E♭10 are outside most humans' hearing range, although notes slightly outside the hearing range on the low end may still be indirectly perceptible as pitches due to their overtones falling within the hearing range. For an example of truly inaudible frequencies, when the Chandra X-ray Observatory observed the waves of pressure fronts propagating away from a black hole, their one oscillation every 10 million years was described by NASA as corresponding to the B♭ fifty-seven octaves below middle C (B♭−53 or 3.235 fHz). Similar systems. There are pitch-octave notation conventions that appear similar to scientific pitch notation but are based on an alternative octave convention that differs from scientific pitch notation, usually by one octave. For example, middle C ("C4" in ISPN) appears in some MIDI software as "C5" (MIDI note 60). This convention is probably related to a similar convention in sample-based trackers, where C5 is the basic pitch at which a sample plays (8287.12 Hz in MOD), forcing the musician to treat samples at any other pitch as transposing instruments when using them in songs. Alternately, both Yamaha and the software MaxMSP define middle C as C3. Apple's GarageBand also defines middle C (261.6256 Hz) as C3. Using scientific pitch notation consistently, the MIDI NoteOn message assigns MIDI note 0 to C−1 (five octaves below C4 or Middle C; lowest note on the two largest organs of the world; about one octave below the human hearing threshold: its overtones, however, are audible), MIDI note 21 to A0 (the bottom key of an 88-key piano), MIDI note 60 to C4 (Middle C), MIDI note 69 to A4 (A440), MIDI note 108 to C8 (the top key of an 88-key piano), and MIDI note 127 to G9 (beyond the piano; one octave above the highest note on some keyboard glockenspiels; some notes above the highest-pitched organ pipes). This creates a linear pitch space in which an octave spans 12 semitones, where each semitone is the distance between adjacent keys of the piano keyboard. Distance in this space corresponds to musical pitch distance in an equal-tempered scale, 2 semitones being a whole step, and 1 semitone being a half step. An equal-tempered semitone can also be subdivided further into 100 cents. Each cent is 1⁄100 of a semitone or 1⁄1200 of an octave. This measure of pitch allows the expression of "microtones" not found on standard piano keyboards. French–Belgian notation system. The French–Belgian system defines the note C placed two ledger lines below the bass staff as Do1 and middle C as Do3. As in scientific pitch notation, the index of a Do is shared with all notes above it until the next Do. However, no octave receives the index zero, the octave right below octave 1 receiving the index −1. Therefore, while C4 in SPN equates to Do3 in the French–Belgian system, C1 in SPN equates to Do−1. Meantone temperament. The notation is sometimes used in the context of meantone temperament, and does not always assume equal temperament nor the standard concert A4 of 440 Hz; this is particularly the case in connection with earlier music. 
The standard proposed to the Acoustical Society of America explicitly states a logarithmic scale for frequency, which excludes meantone temperament, and the base frequency it uses gives A4 a frequency of exactly 440 Hz. However, when dealing with earlier music that did not use equal temperament, it is understandably easier to simply refer to notes by their closest modern equivalent, as opposed to specifying the difference using cents every time. Table of note frequencies. The table below gives notation for pitches based on standard piano key frequencies: standard concert pitch and twelve-tone equal temperament. When a piano is tuned to just intonation, C4 refers to the same key on the keyboard, but a slightly different frequency. Notes not produced by any piano are highlighted in medium gray, and those produced only by an extended 108-key piano, light gray. Mathematically, given the number n of semitones above middle C, the fundamental frequency in hertz is given by formula_0 (see twelfth root of two). Given the MIDI NoteOn number m, the frequency of the note is normally formula_1 Hz, using standard tuning. Scientific pitch versus scientific pitch "notation". Scientific pitch (q.v.) is an absolute pitch "standard", first proposed in 1713 by French physicist Joseph Sauveur. It was defined so that all Cs are integer powers of 2, with middle C (C4) at 256 hertz. As already noted, it is not dependent upon, nor a part of scientific pitch "notation" described here. To avoid the confusion in names, scientific pitch is sometimes also called "Verdi tuning" or "philosophical pitch". The current international pitch standard, using A4 as exactly 440 Hz, had been informally adopted by the music industry as far back as 1926, and A440 became the official international pitch standard in 1955. SPN is routinely used to designate pitch in this system. A4 may be tuned to other frequencies under different tuning standards, and SPN octave designations still apply (ISO 16). With changes in concert pitch and the widespread adoption of A440 as a musical standard, new scientific frequency tables were published by the Acoustical Society of America in 1939, and adopted by the International Organization for Standardization in 1955. C0, which was exactly 16 Hz under the scientific pitch standard, is now 16.35160 Hz under the current international standard system. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
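The two formulas above translate directly into code. The sketch below converts a note name in scientific pitch notation to its MIDI NoteOn number and then to a frequency under twelve-tone equal temperament with A4 = 440 Hz. The name parsing is deliberately simple (one letter, an optional ASCII "#" or "b", and an octave number) and is only an illustration, not a complete parser.

```python
# Scientific pitch notation -> MIDI note number -> frequency (A440, 12-TET).
# C4 (middle C) maps to MIDI 60; frequency = 440 * 2**((m - 69) / 12).

SEMITONES_FROM_C = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def spn_to_midi(name: str) -> int:
    """Parse e.g. 'A4', 'C#5', 'Bb3', 'C-1' (simple ASCII accidentals only)."""
    letter, rest = name[0].upper(), name[1:]
    accidental = 0
    if rest and rest[0] in "#b":
        accidental = 1 if rest[0] == "#" else -1
        rest = rest[1:]
    octave = int(rest)                       # may be negative, e.g. 'C-1'
    return 12 * (octave + 1) + SEMITONES_FROM_C[letter] + accidental

def midi_to_frequency(m: int) -> float:
    return 440.0 * 2 ** ((m - 69) / 12)

if __name__ == "__main__":
    for note in ["C-1", "A0", "C4", "A4", "C8", "G9"]:
        m = spn_to_midi(note)
        print(f"{note:>4s} -> MIDI {m:3d} -> {midi_to_frequency(m):9.3f} Hz")
```

With these definitions, spn_to_midi("C4") returns 60 and midi_to_frequency(69) returns 440.0, matching the MIDI assignments listed above.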
[ { "math_id": 0, "text": "440 \\cdot 2^{(n-9)/12}" }, { "math_id": 1, "text": "440 \\cdot 2^{(m-69)/12}" } ]
https://en.wikipedia.org/wiki?curid=1043937
1044268
Spark spread
The spark spread is the theoretical gross margin of a gas-fired power plant from selling a unit of electricity, having bought the fuel required to produce this unit of electricity. All other costs (operation and maintenance, capital and other financial costs) must be covered from the spark spread. The term was first coined by Tony West's trading team on the trading floor of National Power Ltd in Swindon, UK during the late 1990s and quickly came into common usage as other traders realised the trading and hedging opportunities. The terms dark spread, quark spread and bark spread refer to the similarly defined differences ("spreads") between cash streams for coal-fired power plants, nuclear power plants and bio-mass power plants, respectively. These indicators of power plant economics are useful for trading energy markets. For operating or investment decisions published "spread" data are not applicable. Local market conditions, actual plant efficiencies and other plant costs have to be considered. A higher dark spread is more economically beneficial to the owner of the generator; an IPP with a dark spread of €15/MWh will be more profitable than a competitor with a dark spread of only €10/MWh. Further definition of clean spread indicators include the price of carbon dioxide emission allowances (see: Emission trading). Definition of spark spread. Conceptually, the spark spread (SS in megawatt-hours) equals: A more refined version of this calculation may be: formula_0 with pE as price of electricity in MU/MWh pG as price of natural gas in MU/MWh or MU/Btu ηel as electrical efficiency resp. HR as heat rate in Btu/MWh While the above equations may be sufficient for a single power plant or electricity provider, more detailed calculations may needed depending on the analysis being performed. If the data is sourced from futures contracts for fuels and over-the-counter contracts for electricity, further calculations must be made to determine the appropriate hedge ratio of electricity to fuel. A precise definition of a spark spread has to be given by the source publishing such indicators. Definitions should specify energy (electricity and fuel) prices considered (delivery point &amp; conditions) and the plant efficiency used for the calculation. Also, any plant operating costs that may be included should be stated. Typically, an efficiency of 50 % is considered for gas-fired plants, and 36% for coal-fired plants. In the UK, a non-rounded efficiency of 49.13% is used for calculating the gas conversion. In reality, each gas-fired plant has a different fuel efficiency, but 49.13% is used as a standard in the UK market because it provides an easy conversion between gas and power volumes. The spark spread value is therefore the power price minus the gas cost divided by 0.4913, i.e. Spark Spread = Power Price – (Gas cost/0.4913). As of August 2006, UK dark spreads were in the range of 10–30 £/MWh, while UK spark spreads were in the range of 4–9 £/MWh. It is well-known that these values substantially understate the actual efficiency of modern plants. Best-in-class efficiencies (as of 2019) are near 64%, and commercial development is rapid. Clean spread. In countries that are covered by the European Union Emissions Trading Scheme, generators have to consider also the cost of carbon dioxide emission allowances that will be under a cap and trade regime. Emission trading has started in the EU in January 2005. The Clean Spark Spread is calculated using a gas emissions intensity factor of 0.411 tCO2/MWh. 
Therefore, the clean spark spread is calculated by subtracting the carbon price per tonne (multiplied by 0.411) from the ‘dirty’ spark spread, i.e. Clean Spark Spread = Spark Spread – (Carbon Price*0.411). Clean spark spread or "spark green spread" represents the net revenue a generator makes from selling power, having bought gas and the required number of carbon allowances. This spread is calculated by adjusting the cost of natural gas for the efficiency of the generation and subsequently applying the market cost of procuring or opportunity cost of setting aside an emissions allowance such as a European Union Allowance (EUA) in the European Union Emissions Trading Scheme (EU ETS). Let S: spark spread, E: electricity price, G: gas cost, Ng: number of carbon credits necessary to cover gas operation, Pcc: price of a carbon credit. Then the Clean spark spread is defined as formula_1 Clean dark spread or "dark green spread" refers to an analogous indicator for coal-fired generation of electricity. The spark green spread and the dark green spread are especially important in areas where coal-fired electricity generation is prevalent as the convergence of the spreads will lead to an important decision point. Let D: dark spread, E: electricity price, C: coal cost, Nc: number of carbon credits necessary to cover coal operation (2–2.5x that of gas), Pcc: price of a carbon credit. Then, Clean dark spread = E - C - Nc*Pcc = D - Nc*Pcc Climate spread: The difference between the dark green spread and the spark green spread is known as the "Climate Spread". Climate spread = Clean dark spread - Clean spark spread = (D - Nc*Pcc) - (S - Ng*Pcc) = (D - S) - (Nc - Ng)*Pcc. Note: (D - S) and (Nc - Ng) are positive numbers. In a carbon constrained economy a power producer in a geographic area where coal is currently the preferred method by which electricity is generated may eventually encounter a negative climate spread if carbon credit prices rise. This would mean that when taking into consideration the cost to produce plus the cost of compliance with a cap and trade (coal is on average 2.5 times as polluting as natural gas for the same output of electricity), natural gas would be a better decision. This would begin to cause more internal abatement via power generation fuel switching and less reliance on flexible mechanisms. This is important due to concerns regarding supplementarity. Climate spread is also interesting in that it is the fundamental driver for the price of carbon credits. Since the ETS cap-and-trade system covers the major polluting industries, power generation by coal- and gas-fired power plants, by far the largest power sources, create the most carbon credit demand within the ETS. To cover emissions on an ever-tightening ration of free EUA allowances, a coal-fired powered power plant will either have to abate internally or buy credits. If the price of marginal internal abatement is lower than the price of carbon credits, the firm will choose internal abatement. However marginal abatement becomes more and more expensive, at some point forcing the plant to buy credits – thus the carbon credit price is equal to the marginal cost of abatement to the extent that European power plants have chosen to abate. Clean Dark Spreads are a reflection of the cost of generating power from coal after taking into account fuel (coal) and carbon allowance costs. 
A positive spread effectively means that it is profitable to generate electricity on a Baseload basis for the period in question, while a negative spread means that generation would be a loss-making activity. The Clean Spark Spreads do not take into account additional generating charges (beyond fuel and carbon), such as operational costs. Both the UK and German Dark Spread tables use a fuel efficiency factor of 35% for the coal conversion, and an energy conversion factor of 7.1 for converting tonnes/coal into MWh/electricity. In reality, each type of coal has a different energy value and each coal-fired plant has a different fuel efficiency, but 35% is accepted as a broad standard. At the time of writing (March 2007) there is no liquid Dark Spread traded market in either the UK or Germany. The Dark Spread value is the power price minus the coal price divided by 0.35, i.e. Dark Spread = Power price – (Coal price/0.35). The Clean Dark Spread is calculated using a coal emissions intensity factor of 0.971 tCO2/MWh. Therefore, the Clean Dark Spread is calculated by subtracting the carbon price (multiplied by 0.971) from the ‘dirty’ spark spread, i.e. Clean Dark Spread = Dark Spread – (Carbon Price*0.971). Spark spread as cost of replacement power for intermittent renewables. Spark spread can be used to assess the loss of revenue if a power station is switched from a normal running scenario to one where it is held in reserve to provide power when a large population of wind, or other renewable generators, is unable to generate. In theory, the power station operator would be indifferent to such non-running as long as he was paid the spread it would have earned during the normally expected number of hour run. In fact, if paid the expected spark spread for the hours it had expected to run in normal operating mode, the operator would be better off, because it would not incur the variable operating and maintenance costs (O&amp;M costs), which are proportional to the electrical energy produced. An assessment of the lost revenues is needed if some power plants, such as wind turbines, have absolute priority (must-run plants). A dispatching authority will in this case order the other plants to decrease power. In some countries plant operators are entitled to receive compensation for such interventions. In a competitive electricity market the situation can be handled by a balancing mechanism, in which any imbalance from the schedule (typically a day-ahead schedule) is penalized, either using the price from a balancing market or a calculated price. Thus, since UK spark spreads were in the range of 4–9 £/MWh – on average £6.5/MWh, or 0.65 p/kWh, we can assess the likely cost of relegating existing power stations to a standby role for a large penetration of renewables as being around 0.65 p/kWh.
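The spread arithmetic described in this article can be collected into a short script. The Python sketch below is illustrative only: the efficiencies (0.4913 for gas, 0.35 for coal) and the emissions-intensity factors (0.411 and 0.971 tCO2/MWh) are the UK market conventions quoted above, while the prices are hypothetical example numbers rather than market data.

```python
# Illustrative sketch of the spread calculations described above.
# Efficiencies and intensity factors are the UK conventions quoted in the text;
# all prices below are made-up example numbers, not market data.

GAS_EFFICIENCY   = 0.4913   # standard UK gas-fired conversion efficiency
COAL_EFFICIENCY  = 0.35     # standard coal-fired conversion efficiency
GAS_INTENSITY_T  = 0.411    # tCO2 per MWh of electricity (gas)
COAL_INTENSITY_T = 0.971    # tCO2 per MWh of electricity (coal)

def spark_spread(power_price, gas_price):
    """Power price minus fuel cost per MWh of electricity (gas plant)."""
    return power_price - gas_price / GAS_EFFICIENCY

def dark_spread(power_price, coal_price):
    """Power price minus fuel cost per MWh of electricity (coal plant)."""
    return power_price - coal_price / COAL_EFFICIENCY

def clean_spark_spread(power_price, gas_price, carbon_price):
    """Spark spread net of the cost of CO2 allowances."""
    return spark_spread(power_price, gas_price) - GAS_INTENSITY_T * carbon_price

def clean_dark_spread(power_price, coal_price, carbon_price):
    """Dark spread net of the cost of CO2 allowances."""
    return dark_spread(power_price, coal_price) - COAL_INTENSITY_T * carbon_price

def climate_spread(power_price, gas_price, coal_price, carbon_price):
    """Clean dark spread minus clean spark spread."""
    return (clean_dark_spread(power_price, coal_price, carbon_price)
            - clean_spark_spread(power_price, gas_price, carbon_price))

if __name__ == "__main__":
    power, gas, coal, co2 = 45.0, 18.0, 9.0, 20.0   # example £/MWh and £/tCO2
    print(f"spark spread:       {spark_spread(power, gas):6.2f} £/MWh")
    print(f"clean spark spread: {clean_spark_spread(power, gas, co2):6.2f} £/MWh")
    print(f"clean dark spread:  {clean_dark_spread(power, coal, co2):6.2f} £/MWh")
    print(f"climate spread:     {climate_spread(power, gas, coal, co2):6.2f} £/MWh")
    # A negative climate spread would indicate that, after carbon costs,
    # gas-fired generation is the more attractive of the two.
```

With these example inputs, raising the carbon price reduces the clean dark spread about 2.4 times faster than the clean spark spread (0.971 / 0.411 ≈ 2.4), which is the mechanism behind the climate-spread discussion above.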
[ { "math_id": 0, "text": " SS = p_E - \\frac{p_G}{\\eta_{el}} = p_E - HR \\cdot p_G " }, { "math_id": 1, "text": " Clean Spark Spread = p_E - \\frac{p_G}{\\eta_{el}} -Ng*Pcc" } ]
https://en.wikipedia.org/wiki?curid=1044268
104463
Liouville function
Arithmetic function The Liouville lambda function, denoted by λ("n") and named after Joseph Liouville, is an important arithmetic function. Its value is +1 if n is the product of an even number of prime numbers, and −1 if it is the product of an odd number of primes. Explicitly, the fundamental theorem of arithmetic states that any positive integer n can be represented uniquely as a product of powers of primes: "n" = "p"1"a"1 ⋯ "p""k""a""k", where "p"1 < "p"2 < ... < "p""k" are primes and the "aj" are positive integers. (1 is given by the empty product.) The prime omega functions count the number of primes, with (Ω) or without (ω) multiplicity: formula_0 formula_1 λ("n") is defined by the formula formula_2 (sequence in the OEIS). λ is completely multiplicative since Ω("n") is completely additive, i.e.: Ω("ab") = Ω("a") + Ω("b"). Since 1 has no prime factors, Ω(1) = 0, so λ(1) = 1. It is related to the Möbius function μ("n"). Write n as "n" = "a"2"b", where b is squarefree, i.e., ω("b") = Ω("b"). Then formula_3 The sum of the Liouville function over the divisors of n is the characteristic function of the squares: formula_4 Möbius inversion of this formula yields formula_5 The Dirichlet inverse of the Liouville function is the absolute value of the Möbius function, λ−1("n") = |μ("n")| = μ2("n"), the characteristic function of the squarefree integers. We also have that λ("n")μ("n") = μ2("n"). Series. The Dirichlet series for the Liouville function is related to the Riemann zeta function by formula_6 Also: formula_7 The Lambert series for the Liouville function is formula_8 where formula_9 is the Jacobi theta function. Conjectures on weighted summatory functions. The Pólya problem is a question raised by George Pólya in 1919. Defining formula_10 (sequence in the OEIS), the problem asks whether formula_11 for "n" > 1. The answer turns out to be no. The smallest counter-example is "n" = 906150257, found by Minoru Tanaka in 1980. It has since been shown that "L"("n") > 0.0618672√"n" for infinitely many positive integers "n", while it can also be shown via the same methods that "L"("n") < −1.3892783√"n" for infinitely many positive integers "n". For any formula_12, assuming the Riemann hypothesis, we have that the summatory function formula_13 is bounded by formula_14 where formula_15 is some absolute limiting constant. Define the related sum formula_16 It was open for some time whether "T"("n") ≥ 0 for sufficiently large "n" ≥ "n"0 (this conjecture is occasionally, though incorrectly, attributed to Pál Turán). This was then disproved by Haselgrove (1958), who showed that "T"("n") takes negative values infinitely often. A confirmation of this positivity conjecture would have led to a proof of the Riemann hypothesis, as was shown by Pál Turán. Generalizations. More generally, we can consider the weighted summatory functions over the Liouville function defined for any formula_17 as follows for positive integers "x", where (as above) we have the special cases formula_18 and formula_19 formula_20 These formula_21-weighted summatory functions are related to the Mertens function, or weighted summatory functions of the Möbius function. In fact, we have that the so-termed non-weighted, or ordinary, function formula_22 precisely corresponds to the sum formula_23 Moreover, these functions satisfy similar bounding asymptotic relations.
For example, whenever formula_24, we see that there exists an absolute constant formula_25 such that formula_26 By an application of Perron's formula, or equivalently by a key (inverse) Mellin transform, we have that formula_27 which then can be inverted via the inverse transform to show that for formula_28, formula_29 and formula_30 formula_31 where we can take formula_32, and with the remainder terms defined such that formula_33 and formula_34 as formula_35. In particular, if we assume that the Riemann hypothesis (RH) is true and that all of the non-trivial zeros, denoted by formula_36, of the Riemann zeta function are simple, then for any formula_30 and formula_37 there exists an infinite sequence of formula_38 which satisfies that formula_39 for all "v" such that formula_40 where for any increasingly small formula_41 we define formula_42 and where the remainder term formula_43 which of course tends to "0" as formula_35. These exact analytic formula expansions again share similar properties to those corresponding to the weighted Mertens function cases. Additionally, since formula_44 we have another similarity in the form of formula_45 to formula_46 in so much as the dominant leading term in the previous formulas predicts a negative bias in the values of these functions over the positive natural numbers "x". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
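For readers who want to experiment with the definitions above, the following Python sketch (function names are ad hoc, and trial division is far too slow for ranges anywhere near the Pólya counter-example) computes λ("n") directly from Ω("n"), checks the divisor-sum identity, and evaluates the summatory functions "L"("n") and "T"("n") for small "n".

```python
# Brute-force numerical check of the definitions above: lambda(n) = (-1)^Omega(n),
# the divisor-sum identity (1 if n is a square, else 0), and the summatory
# functions L(n) and T(n).  Trial division is fine for small n only.

from math import isqrt

def big_omega(n):
    """Omega(n): number of prime factors of n counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

def liouville(n):
    """lambda(n) = (-1)^Omega(n)."""
    return -1 if big_omega(n) % 2 else 1

def divisor_sum_check(n):
    """sum_{d | n} lambda(d) should be 1 if n is a perfect square, else 0."""
    s = sum(liouville(d) for d in range(1, n + 1) if n % d == 0)
    return s == (1 if isqrt(n) ** 2 == n else 0)

def L(n):   # summatory function L(n) = sum_{k <= n} lambda(k)
    return sum(liouville(k) for k in range(1, n + 1))

def T(n):   # weighted sum T(n) = sum_{k <= n} lambda(k) / k
    return sum(liouville(k) / k for k in range(1, n + 1))

if __name__ == "__main__":
    print([liouville(k) for k in range(1, 13)])   # 1, -1, -1, 1, -1, 1, ...
    assert all(divisor_sum_check(k) for k in range(1, 200))
    print(L(100), round(T(100), 4))               # L(n) <= 0 holds for small n > 1
```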
[ { "math_id": 0, "text": " \\omega(n) = k, " }, { "math_id": 1, "text": " \\Omega(n) = a_1 + a_2 + \\cdots + a_k. " }, { "math_id": 2, "text": " \\lambda(n) = (-1)^{\\Omega(n)} " }, { "math_id": 3, "text": " \\lambda(n) = \\mu(b). " }, { "math_id": 4, "text": "\n\\sum_{d|n}\\lambda(d) =\n\\begin{cases}\n1 & \\text{if }n\\text{ is a perfect square,} \\\\\n0 & \\text{otherwise.}\n\\end{cases}\n" }, { "math_id": 5, "text": "\\lambda(n) = \\sum_{d^2|n} \\mu\\left(\\frac{n}{d^2}\\right)." }, { "math_id": 6, "text": "\\frac{\\zeta(2s)}{\\zeta(s)} = \\sum_{n=1}^\\infty \\frac{\\lambda(n)}{n^s}." }, { "math_id": 7, "text": "\\sum\\limits_{n=1}^{\\infty} \\frac{\\lambda(n)\\ln n}{n}=-\\zeta(2)=-\\frac{\\pi^2}{6}." }, { "math_id": 8, "text": "\\sum_{n=1}^\\infty \\frac{\\lambda(n)q^n}{1-q^n} = \n\\sum_{n=1}^\\infty q^{n^2} = \n\\frac{1}{2}\\left(\\vartheta_3(q)-1\\right)," }, { "math_id": 9, "text": "\\vartheta_3(q)" }, { "math_id": 10, "text": "L(n) = \\sum_{k=1}^n \\lambda(k)" }, { "math_id": 11, "text": "L(n)\\leq 0" }, { "math_id": 12, "text": "\\varepsilon > 0" }, { "math_id": 13, "text": "L(x) \\equiv L_0(x)" }, { "math_id": 14, "text": "L(x) = O\\left(\\sqrt{x} \\exp\\left(C \\cdot \\log^{1/2}(x) \\left(\\log\\log x\\right)^{5/2+\\varepsilon}\\right)\\right)," }, { "math_id": 15, "text": "C > 0" }, { "math_id": 16, "text": "T(n) = \\sum_{k=1}^n \\frac{\\lambda(k)}{k}." }, { "math_id": 17, "text": "\\alpha \\in \\mathbb{R}" }, { "math_id": 18, "text": "L(x) := L_0(x)" }, { "math_id": 19, "text": "T(x) = L_1(x)" }, { "math_id": 20, "text": "L_{\\alpha}(x) := \\sum_{n \\leq x} \\frac{\\lambda(n)}{n^{\\alpha}}." }, { "math_id": 21, "text": "\\alpha^{-1}" }, { "math_id": 22, "text": "L(x)" }, { "math_id": 23, "text": "L(x) = \\sum_{d^2 \\leq x} M\\left(\\frac{x}{d^2}\\right) = \\sum_{d^2 \\leq x} \\sum_{n \\leq \\frac{x}{d^2}} \\mu(n)." }, { "math_id": 24, "text": "0 \\leq \\alpha \\leq \\frac{1}{2}" }, { "math_id": 25, "text": "C_{\\alpha} > 0" }, { "math_id": 26, "text": "L_{\\alpha}(x) = O\\left(x^{1-\\alpha}\\exp\\left(-C_{\\alpha} \\frac{(\\log x)^{3/5}}{(\\log\\log x)^{1/5}}\\right)\\right)." 
}, { "math_id": 27, "text": "\\frac{\\zeta(2\\alpha+2s)}{\\zeta(\\alpha+s)} = s \\cdot \\int_1^{\\infty} \\frac{L_{\\alpha}(x)}{x^{s+1}} dx," }, { "math_id": 28, "text": "x > 1" }, { "math_id": 29, "text": "T \\geq 1" }, { "math_id": 30, "text": "0 \\leq \\alpha < \\frac{1}{2}" }, { "math_id": 31, "text": "L_{\\alpha}(x) = \\frac{1}{2\\pi\\imath} \\int_{\\sigma_0-\\imath T}^{\\sigma_0+\\imath T} \\frac{\\zeta(2\\alpha+2s)}{\\zeta(\\alpha+s)} \n \\cdot \\frac{x^s}{s} ds + E_{\\alpha}(x) + R_{\\alpha}(x, T), " }, { "math_id": 32, "text": "\\sigma_0 := 1-\\alpha+1 / \\log(x)" }, { "math_id": 33, "text": "E_{\\alpha}(x) = O(x^{-\\alpha})" }, { "math_id": 34, "text": "R_{\\alpha}(x, T) \\rightarrow 0" }, { "math_id": 35, "text": "T \\rightarrow \\infty" }, { "math_id": 36, "text": "\\rho = \\frac{1}{2} + \\imath\\gamma" }, { "math_id": 37, "text": " x \\geq 1" }, { "math_id": 38, "text": "\\{T_v\\}_{v \\geq 1}" }, { "math_id": 39, "text": "v \\leq T_v \\leq v+1" }, { "math_id": 40, "text": "L_{\\alpha}(x) = \\frac{x^{1/2-\\alpha}}{(1-2\\alpha) \\zeta(1/2)} + \\sum_{|\\gamma| < T_v} \\frac{\\zeta(2\\rho)}{\\zeta^{\\prime}(\\rho)} \\cdot \n \\frac{x^{\\rho-\\alpha}}{(\\rho-\\alpha)} + E_{\\alpha}(x) + R_{\\alpha}(x, T_v) + I_{\\alpha}(x), " }, { "math_id": 41, "text": "0 < \\varepsilon < \\frac{1}{2}-\\alpha" }, { "math_id": 42, "text": "I_{\\alpha}(x) := \\frac{1}{2\\pi\\imath \\cdot x^{\\alpha}} \\int_{\\varepsilon+\\alpha-\\imath\\infty}^{\\varepsilon+\\alpha+\\imath\\infty} \n \\frac{\\zeta(2s)}{\\zeta(s)} \\cdot \\frac{x^s}{(s-\\alpha)} ds," }, { "math_id": 43, "text": "R_{\\alpha}(x, T) \\ll x^{-\\alpha} + \\frac{x^{1-\\alpha} \\log(x)}{T} + \\frac{x^{1-\\alpha}}{T^{1-\\varepsilon} \\log(x)}, " }, { "math_id": 44, "text": "\\zeta(1/2) < 0" }, { "math_id": 45, "text": "L_{\\alpha}(x)" }, { "math_id": 46, "text": "M(x)" } ]
https://en.wikipedia.org/wiki?curid=104463
1044685
Comb filter
Signal processing filter In signal processing, a comb filter is a filter implemented by adding a delayed version of a signal to itself, causing constructive and destructive interference. The frequency response of a comb filter consists of a series of regularly spaced notches in between regularly spaced "peaks" (sometimes called "teeth"), giving the appearance of a comb. Comb filters exist in two forms, "feedforward" and "feedback", which refer to the direction in which signals are delayed before they are added to the input. Comb filters may be implemented in discrete-time or continuous-time forms, which are very similar. Applications. Comb filters are employed in a variety of signal processing applications. In acoustics, comb filtering can arise as an unwanted artifact. For instance, two loudspeakers playing the same signal at different distances from the listener create a comb filtering effect on the audio. In any enclosed space, listeners hear a mixture of direct sound and reflected sound. The reflected sound takes a longer, delayed path compared to the direct sound, and a comb filter is created where the two mix at the listener. Similarly, comb filtering may result from mono mixing of multiple mics, hence the 3:1 rule of thumb that neighboring mics should be separated by at least three times the distance from each mic to its source. Discrete time implementation. Feedforward form. The general structure of a feedforward comb filter is described by the difference equation: formula_0 where formula_1 is the delay length (measured in samples), and "α" is a scaling factor applied to the delayed signal. The "z" transform of both sides of the equation yields: formula_2 The transfer function is defined as: formula_3 Frequency response. The frequency response of a discrete-time system expressed in the "z"-domain is obtained by the substitution formula_4 where formula_5 is the imaginary unit and formula_6 is angular frequency. Therefore, for the feedforward comb filter: formula_7 Using Euler's formula, the frequency response is also given by formula_8 Often of interest is the "magnitude" response, which ignores phase. This is defined as: formula_9 In the case of the feedforward comb filter, this is: formula_10 The formula_11 term is constant, whereas the formula_12 term varies periodically. Hence the magnitude response of the comb filter is periodic. The graphs show the periodic magnitude response for various values of formula_13 Some important properties: formula_15 Impulse response. The feedforward comb filter is one of the simplest finite impulse response filters. Its response is simply the initial impulse with a second impulse after the delay. Pole–zero interpretation. Looking again at the "z"-domain transfer function of the feedforward comb filter: formula_19 the numerator is equal to zero whenever "zK" = −"α". This has "K" solutions, equally spaced around a circle in the complex plane; these are the zeros of the transfer function. The denominator is zero at "zK" = 0, giving "K" poles at "z" = 0. This leads to a pole–zero plot like the ones shown. Feedback form. Similarly, the general structure of a feedback comb filter is described by the difference equation: formula_20 This equation can be rearranged so that all terms in formula_21 are on the left-hand side; taking the "z" transform then gives: formula_22 The transfer function is therefore: formula_23 Frequency response. 
By substituting formula_24 into the feedback comb filter's "z"-domain expression: formula_25 the magnitude response becomes: formula_26 Again, the response is periodic, as the graphs demonstrate. The feedback comb filter has some properties in common with the feedforward form: formula_27 However, there are also some important differences because the magnitude response has a term in the denominator: Impulse response. The feedback comb filter is a simple type of infinite impulse response filter. If stable, the response simply consists of a repeating series of impulses decreasing in amplitude over time. Pole–zero interpretation. Looking again at the "z"-domain transfer function of the feedback comb filter: formula_28 This time, the numerator is zero at "zK" = 0, giving "K" zeros at "z" = 0. The denominator is equal to zero whenever "zK" = "α". This has "K" solutions, equally spaced around a circle in the complex plane; these are the poles of the transfer function. This leads to a pole–zero plot like the ones shown below. Continuous time implementation. Comb filters may also be implemented in continuous time which can be expressed in the Laplace domain as a function of the complex frequency domain parameter formula_29 analogous to the z domain. Analog circuits use some form of analog delay line for the delay element. Continuous-time implementations share all the properties of the respective discrete-time implementations. Feedforward form. The feedforward form may be described by the equation: formula_30 where "τ" is the delay (measured in seconds). This has the following transfer function: formula_31 The feedforward form consists of an infinite number of zeros spaced along the jω axis (which corresponds to the Fourier domain). Feedback form. The feedback form has the equation: formula_32 and the following transfer function: formula_33 The feedback form consists of an infinite number of poles spaced along the jω axis. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
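A minimal Python sketch of the two discrete-time forms is given below; the delay "K" = 4 and gain "α" = 0.5 are arbitrary example values, and NumPy is used only to sample the closed-form magnitude responses stated above.

```python
# Direct implementations of the two discrete-time difference equations above:
#   feedforward:  y[n] = x[n] + a * x[n-K]
#   feedback:     y[n] = x[n] + a * y[n-K]
# Plain Python lists are used for the filters; NumPy only evaluates the
# closed-form magnitude responses.  K and a below are example values.

import numpy as np

def feedforward_comb(x, K, a):
    y = []
    for n, xn in enumerate(x):
        delayed = x[n - K] if n >= K else 0.0
        y.append(xn + a * delayed)
    return y

def feedback_comb(x, K, a):
    # |a| < 1 is required for stability of the feedback form.
    y = []
    for n, xn in enumerate(x):
        delayed = y[n - K] if n >= K else 0.0
        y.append(xn + a * delayed)
    return y

def magnitude_response(K, a, form, n_points=512):
    """|H(e^{jw})| sampled on [0, pi); compare with the closed forms above."""
    w = np.linspace(0.0, np.pi, n_points, endpoint=False)
    if form == "feedforward":
        return np.abs(1.0 + a * np.exp(-1j * w * K))
    return np.abs(1.0 / (1.0 - a * np.exp(-1j * w * K)))

if __name__ == "__main__":
    impulse = [1.0] + [0.0] * 15
    print(feedforward_comb(impulse, K=4, a=0.5))  # impulse, then one echo at n=4
    print(feedback_comb(impulse, K=4, a=0.5))     # decaying echoes every 4 samples
    H = magnitude_response(K=4, a=0.5, form="feedforward")
    print(float(H.max()), float(H.min()))         # peaks at 1+a, notches at 1-a
```

Feeding a unit impulse through each filter reproduces the impulse responses described above: a single echo after "K" samples for the feedforward form, and a decaying train of echoes every "K" samples for the (stable) feedback form.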
[ { "math_id": 0, "text": "y[n] = x[n] + \\alpha x[n-K] " }, { "math_id": 1, "text": "K" }, { "math_id": 2, "text": "Y(z) = \\left(1 + \\alpha z^{-K}\\right) X(z) " }, { "math_id": 3, "text": "H(z) = \\frac{Y(z)}{X(z)} = 1 + \\alpha z^{-K} = \\frac{z^K + \\alpha}{z^K} " }, { "math_id": 4, "text": "z = e^{j\\omega}," }, { "math_id": 5, "text": "j " }, { "math_id": 6, "text": "\\omega " }, { "math_id": 7, "text": "H\\left(e^{j \\omega}\\right) = 1 + \\alpha e^{-j \\omega K} " }, { "math_id": 8, "text": "H\\left(e^{j \\omega}\\right) = \\bigl[1 + \\alpha \\cos(\\omega K)\\bigr] - j \\alpha \\sin(\\omega K) " }, { "math_id": 9, "text": "\\left| H\\left(e^{j \\omega}\\right) \\right| = \\sqrt{\\Re\\left\\{H\\left(e^{j \\omega}\\right)\\right\\}^2 + \\Im\\left\\{H\\left(e^{j \\omega}\\right)\\right\\}^2} \n" }, { "math_id": 10, "text": "\\begin{align}\n\\left| H(e^{j \\omega}) \\right| &= \\sqrt{(1 + \\alpha \\cos(\\omega K) )^2 + (\\alpha \\sin(\\omega K))^2} \\\\\n&= \\sqrt{(1 + \\alpha^2) + 2 \\alpha \\cos(\\omega K)}\n\\end{align} " }, { "math_id": 11, "text": "(1 + \\alpha^2) " }, { "math_id": 12, "text": "2 \\alpha \\cos( \\omega K) " }, { "math_id": 13, "text": "\\alpha ." }, { "math_id": 14, "text": "\\alpha," }, { "math_id": 15, "text": "\\begin{align}\nf &= \\frac{1}{2 K}, \\frac{3}{2 K}, \\frac{5}{2 K} \\cdots \\\\\n\\omega &= \\frac{\\pi}{K}, \\frac{3\\pi}{K}, \\frac{5\\pi}{K} \\cdots \\,\n\n\\end{align}" }, { "math_id": 16, "text": "\\alpha = \\pm 1 , " }, { "math_id": 17, "text": "\\alpha " }, { "math_id": 18, "text": "\\alpha" }, { "math_id": 19, "text": "H(z) = \\frac{z^K + \\alpha}{z^K} " }, { "math_id": 20, "text": "y[n] = x[n] + \\alpha y[n-K] " }, { "math_id": 21, "text": "y" }, { "math_id": 22, "text": "\\left(1 - \\alpha z^{-K}\\right) Y(z) = X(z) " }, { "math_id": 23, "text": "H(z) = \\frac{Y(z)}{X(z)} = \\frac{1}{1 - \\alpha z^{-K}} = \\frac{z^K}{z^K - \\alpha} " }, { "math_id": 24, "text": "z = e^{j\\omega}" }, { "math_id": 25, "text": "H\\left(e^{j \\omega}\\right) = \\frac{1}{1 - \\alpha e^{-j \\omega K}} \\, , " }, { "math_id": 26, "text": "\\left| H\\left(e^{j \\omega}\\right) \\right| = \\frac{1}{\\sqrt{\\left(1 + \\alpha^2\\right) - 2 \\alpha \\cos(\\omega K)}} \\, . " }, { "math_id": 27, "text": "\\begin{align}\nf &= 0, \\frac{1}{K}, \\frac{2}{K}, \\frac{3}{K} \\cdots \\\\\n\\omega &= 0, \\frac{2\\pi}{K}, \\frac{4\\pi}{K}, \\frac{6\\pi}{K} \\cdots\n\\end{align}" }, { "math_id": 28, "text": "H(z) = \\frac{z^K}{z^K - \\alpha} " }, { "math_id": 29, "text": "s = \\sigma + j \\omega" }, { "math_id": 30, "text": "y(t) = x(t) + \\alpha x(t - \\tau) " }, { "math_id": 31, "text": "H(s) = 1 + \\alpha e^{-s \\tau} " }, { "math_id": 32, "text": "y(t) = x(t) + \\alpha y(t - \\tau) " }, { "math_id": 33, "text": "H(s) = \\frac{1}{1 - \\alpha e^{-s \\tau}} " } ]
https://en.wikipedia.org/wiki?curid=1044685
1044856
Shotgun cartridge
Self-contained cartridge loaded with either shot or a solid slug A shotgun cartridge, shotshell, or shell is a type of rimmed, cylindrical (straight-walled) ammunition used specifically in shotguns. It is typically loaded with numerous small, spherical sub-projectiles called shot. Shotguns typically use a smoothbore barrel with a tapered constriction at the muzzle to regulate the extent of scattering. Some cartridges contain a single solid projectile known as a slug (sometimes fired through a rifled slug barrel). The casing usually consists of a paper or plastic tube with a metallic base containing the primer. The shot charge is typically contained by wadding inside the case. The caliber of the cartridge is known as its gauge. The projectiles are traditionally made of lead, but other metals such as steel, tungsten and bismuth are also used due to restrictions on lead, or for performance reasons such as achieving higher shot velocities by reducing the mass of the shot charge. Other unusual projectiles such as saboted flechettes, rubber balls, rock salt and magnesium shards also exist. Cartridges can also be made with specialty non-lethal projectiles such as rubber and bean bag rounds. Shotguns have an effective range of about with buckshot, with birdshot, with slugs, and well over with saboted slugs in rifled barrels. Most shotgun cartridges are designed to be fired from a smoothbore barrel, as "shot" would be spread too wide by rifling. A rifled barrel will increase the accuracy of sabot slugs, but makes it unsuitable for firing shot, as it imparts a spin to the shot cup, causing the shot cluster to disperse. A rifled slug uses rifling on the slug itself so it can be used in a smoothbore shotgun. History. Early shotgun cartridges used brass cases, not unlike pistol and rifle cartridge cases of the same era. These brass shotgun hulls or cases closely resembled large rifle cartridges, in terms of both the head and primer portions of the cartridge, as well as in their dimensions. Card wads, made of felt, leather, and cork, as well as paperboard, were all used at various times. Waterglass (Sodium silicate) was commonly used to cement the top overshot wad into these brass casings. No roll crimp or fold crimp was used on these early brass cases, although roll crimps were eventually used by some manufacturers to hold the overshot wad in place securely. The primers on these early shotgun cartridges were identical to pistol primers of the same diameter. Starting in the late 1870s, paper hulls began replacing brass hulls. Paper hulls remained popular for nearly a century, until the early 1960s. These shotgun cartridges using paper hulls were nearly always roll crimped, although fold crimping also eventually became popular. The primers on these paper hull cartridges also changed from the pistol primers used on the early brass shotgun shells to a primer containing both the priming charge and an anvil, making the shotgun primer taller. Card wads, made of felt and cork, as well as paperboard, were all used at various times, gradually giving way to plastic over powder wads, with card wads, and, eventually, to all plastic wads. Starting from the early 1960s to the late 1970s, plastic hulls started replacing paper hulls for the majority of cartridges and by the early 1980s, plastic hulls had become universally adopted. Typical construction. Modern shotgun cartridges typically consist of a plastic hull, with the base covered in a thin brass or plated steel covering. 
Paper cartridges used to be common and are still made, as are solid brass shells. Some companies have produced what appear to be all-plastic shells, although in these there is a small metal ring cast into the rim of the cartridge to provide strength. More powerful loads may use "high brass" shells, with the brass extended up further along the sides of the cartridge, while light loads will use "low brass" shells. The brass does not actually provide a significant amount of strength, but the difference in appearance provides shooters with a way to quickly differentiate between high and low powered ammunition. The base of the cartridge is fairly thick to hold the large primer, which is longer than primers used for rifle and pistol ammunition. Modern smokeless powders are far more efficient than the original black powder, so very little space is actually taken by propellant; shotguns use small quantities of double base powders, equivalent to quick-burning pistol powders, with up to 50% nitroglycerin. After the powder comes the wadding or wad. The primary purpose of a wad is to prevent the shot and powder from mixing, and to provide a seal that prevents gas from blowing through the shot rather than propelling it. The wad design may also encompass a shock absorber and a cup that holds the shot together until it is out of the barrel. A modern wad consists of three parts, the powder wad, the cushion, and the shot cup, which may be separate pieces or be one part. The powder wad acts as the gas seal (known as obturation), and is placed firmly over the powder; it may be a paper or plastic part. The cushion comes next, and it is designed to compress under pressure, to act as a shock absorber and minimize the deformation of the shot; it also serves to take up as much space as is needed between the powder wad and the shot. Cushions are almost universally made of plastic with crumple zones, although for game shooting in areas grazed by farm stock or wildlife biodegradable fiber wads are often preferred. The shot cup is the last part of the cartridge, and it serves to hold the shot together as it moves down the barrel. Shot cups have slits on the sides so that they peel open after leaving the barrel, allowing the shot to continue on in flight undisturbed. Shot cups, where used, are also almost universally plastic. The shot fills the shot cup (which must be of the correct length to hold the desired quantity of shot), and the cartridges is then crimped, or rolled closed. The only known shotgun cartridge using rebated rims is the 12 Gauge RAS12, specially made for the RAS-12 semi automatic shotgun. Sizes. Standard. Shotgun cartridges are generally measured by "gauge", which is the weight, in fractions of a pound, of a pure lead round ball that is the same diameter as the internal diameter of the barrel; in Britain and some other locations outside the United States the term "bore" is used with the same meaning. This contrasts with rifles and handguns, which are almost always measured in "caliber", a measurement of the internal diameter of the barrel measured in millimeters or inches and, consequently, is approximately equal to the diameter of the projectile that is fired. For example, a shotgun is called "12-gauge" because a lead sphere that just fits the inside diameter of the barrel weighs . 
This measurement comes from the time when early cannons were designated in a similar manner—a "12 pounder" would be a cannon that fired a cannonball; inversely, an individual "12-gauge" shot would in fact be a &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄12 pounder. Thus, a 10-gauge shotgun has a larger-diameter barrel than a 12-gauge shotgun, which has a larger-diameter barrel than a 20-gauge shotgun, and so forth. The most popular shotgun gauge by far is 12-gauge. The larger 10-gauge, once popular for hunting larger birds such as goose and turkey, is on the decline with the advent of the longer, "magnum" 12-gauge cartridges, which offer similar performance. The mid-size 20-gauge is also a very popular chambering for smaller-framed shooters who favor its reduced recoil, those hunting smaller game, and experienced trap and skeet shooters who like the additional challenge of hitting their targets with a smaller shot charge. Other less-common, but commercially available gauges are 16 and 28. Several other gauges may be encountered but are considered obsolete. The 4, 8, 24, and 32 gauge guns are collector items. There are also some shotguns measured by diameter, rather than gauge. These are the .410 (10.4mm), .380 (9mm), and .22 (5.5mm); these are correctly called ".410 bore", not ".410-gauge". The .410 bore is the smallest shotgun size which is widely available commercially in the United States. For size comparison purposes, the .410, when measured by gauge, would be around 67- or 68-gauge (it is 67.62-gauge), The .410 is often mistakenly assigned 36-gauge. The 36 gauge had a 0.506" bore. Reloading components are still available. Other calibers. Snake shot (AKA: bird shot, rat shot, and dust shot) refers to handgun and rifle cartridges loaded with small lead shot. Snake shot is generally used for shooting at snakes, rodents, birds, and other pest at very close range. The most common snake shot cartridges are .22 Long Rifle, .22 Magnum, .38 Special, 9×19mm Luger, .40 Smith &amp; Wesson, .44 Special, .45 ACP, and .45 Colt. Commonly used by hikers, backpackers and campers, snake shot is ideally suited for use in revolvers and derringers, chambered for .38 Special and .357 Magnum. Snake shot may not cycle properly in semi-automatic pistols. Rifles specifically made to fire .22 caliber snake shot are also commonly used by farmers for pest control inside of barns and sheds, as the snake shot will not shoot holes in the roof or walls, or more importantly injure livestock with a ricochet. They are also used for airport and warehouse pest control. Shot shells have also been historically issued to soldiers, to be used in standard issue rifles. The .45-70 "Forager" round, which contained a thin wooden bullet filled with birdshot, was intended for hunting small game to supplement the soldiers' rations. This round in effect made the .45-70 rifle into a small gauge shotgun, capable of killing rabbits, ducks, and other small game. During World War II, the United States military developed the .45 ACP M12 and M15 shot cartridge cartridges. They were issued to pilots, to be used as foraging ammunition in the event that they were shot down. While they were best used in the M1917 revolvers, the M15 cartridge would actually cycle the semi-automatic M1911 pistols action. Garden guns. Garden guns are smooth-bore firearms specifically made to fire .22 caliber snake shot, and are commonly used by gardeners and farmers for pest control. 
Garden guns are short-range weapons that can do little harm past 15 to 20 yards, and they are quiet when fired with snake shot, compared to a standard ammunition. These guns are especially effective inside of barns and sheds, as the snake shot will not shoot holes in the roof or walls, or more importantly injure livestock with a ricochet. They are also used for pest control at airports, warehouses, stockyards, etc. Shotgun gauge diameter formula. The standard definition of shotgun gauge assumes that a pure lead ball is used. The following formulas relate the bore diameter "dn" (in inches) to the gauge "n": formula_0 formula_1 For example, the common bore diameter "dn" = 0.410 inches (.410 bore) is effectively gauge "n" = 67.6 . Lead free. By 1957 the ammo industry had the capability of producing a nontoxic shot, made out of either iron or steel. In 1976 the United States Fish and Wildlife Service took the first steps toward phasing out lead shot by designating steel-shot-only hunting zones for waterfowl. In the 1970s lead-free ammunition loaded with steel, bismuth, or tungsten composite pellets instead of more traditional lead-based shot was introduced and required for Migratory Bird Hunting (Ducks &amp; Geese). Lead shot in waterfowl hunting was banned throughout the United States in 1991. Due to environmental regulations, lead-loaded ammunition must be used carefully by hunters in Europe. For instance, in France, it cannot be fired in the vicinity of a pond. In fact, the laws are so complex that some hunters in Europe prefer not to risk getting into problems for firing lead pellets in the wrong places, so they opt for composite pellets in all situations. The use of lead shot is banned in Canada and the United States when hunting migratory game birds, such as ducks and geese, forcing the use of non-toxic shot in these countries for waterfowl hunting (lead shot can still legally be used in the United States for hunting game other than waterfowl). This means that manufacturers need to market new types of lead-free shotgun ammunition loaded with alternative pellets to meet environmental restrictions on the use of lead, as well as lead-based and cheaper shotshell ammunition, to remain competitive worldwide. The C.I.P. enforces approval of all ammunition a manufacturer or importer intends to sell in any of the (mainly European) C.I.P. member states. The ammunition manufacturing plants are obliged to test their products during production against the C.I.P. pressure specifications. A compliance report must be issued for each production lot and archived for later verification if needed. Besides pressure testing, cartridges containing steel pellets require an additional Vickers hardness test. The steel pellets used must have a hardness under 100 HV1, but, even so, steel is known to wear the barrel excessively over time if the steel pellet velocities become too high, leading to potentially harmful situations for the user. As a result, the measurement of pellet velocity is also an additional obligation for cartridges in 12-, 16-, and 20-gauges in both standard and high performance versions sold in Europe. The velocity of pellets must be below , and respectively for the standard versions. Another disadvantage of steel pellets is their tendency to ricochet unpredictably after striking any hard surface. This poses a major hazard at indoor ranges or whenever metal targets or hard backstops (e.g. concrete wall vs. a dirt berm) are used. 
For this reason, steel shot is explicitly banned at most indoor shooting ranges. Any shooters who are considering buying ammo loaded with steel for anything other than hunting purposes should first find out if using it won't cause undue hazard to themselves and others. However, data supporting the danger of firing high velocity cartridges loaded with steel shot causing barrel wear has not been published and the US equivalent of CIP, SAAMI, does not have any such restrictive limitations on the velocity of commercial steel shot cartridges sold in the United States. Similarly, shotgun manufacturers selling shotguns in the United States select their own appropriate standards for setting steel hardness for shotgun barrels and for velocities of steel shot ammunition. Some indoor shooting ranges prohibit the use of steel shot over concern of it causing a spark when hitting an object down range and causing a fire. Shot sizes. Cartridges are loaded with different sizes of shot depending on the target. For skeet shooting, a small shot such as a No. 8 or No. 9 would be used, because range is short and a high density pattern is desirable. Trap shooting requires longer shots, and so a larger shot, usually #&lt;templatestyles src="Fraction/styles.css" /&gt;7+1⁄2 is used. For hunting game, the range and penetration needed to assure a clean kill is considered. Shot loses its velocity very quickly due to its low sectional density and ballistic coefficient (see external ballistics). Small shot, like that used for skeet and trap, will have lost all appreciable energy by around , which is why trap and skeet ranges can be located in relatively close proximity to inhabited areas with negligible risk of injury to those outside the range. Birdshot. Birdshots are designed to be used for waterfowl and upland hunting, where the game is agile small/medium-sized birds. Their sizes are numbered similarly to the shotgun gauges—the smaller the number, the larger the shot (except in the obsolete Swedish system, in which it is reversed). Generally birdshot is just called "shot", such as "number 9 shot" or "BB shot". To make matters more complex, there are small differences in the size of American, Standard (European), Belgian, Italian, Norwegian, Spanish, Swedish, British, and Australian shot. That is because some systems go by diameter in inches (American), some go by diameter in millimeters (European), and the British system goes by the number of lead shot per ounce. Australia has a hybrid system due to its market being flooded with a mixture of British, American, and European cartridges. For American shot, a useful method for remembering the diameter of numbered shot in inches is simply to subtract the shot size from 17. The resulting answer is the diameter of the shot in hundredths of an inch. For example, #2 shot gives 17−2 = 15, meaning that the diameter of #2 shot is &lt;templatestyles src="Fraction/styles.css" /&gt;15⁄100 or . B shot is , and sizes go up in increments for BB and BBB sizes. In metric measurement, it is easy to remember that #5 shot is 3 mm; each number up or down represents a 0.25 mm change in diameter, so e.g. #7 shot is 2.5 mm. Number 11 and number 12 lead shot also exists. Shot of these sizes is used in specialized cartridges designed to be fired at close range (less than four yards) for killing snakes, rats and similar-sized animals. Such cartridges are typically intended to be fired from handguns, particularly revolvers. This type of ammunition is produced by Federal and CCI, among others. 
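The two size rules quoted above, the gauge–diameter formulas from the "Shotgun gauge diameter formula" section and the subtract-from-17 rule for American numbered birdshot, can be checked with a few lines of Python; the function names here are ad hoc and the outputs are the same approximations given in the text.

```python
# Numerical companion to two rules quoted above: the gauge <-> bore-diameter
# formulas and the "subtract the shot number from 17" rule for American
# birdshot diameters.  Results are approximations, as in the text.

def bore_diameter_in(gauge):
    """Bore diameter in inches for a given gauge: d = (4.66 / n)^(1/3)."""
    return (4.66 / gauge) ** (1.0 / 3.0)

def gauge_from_diameter(diameter_in):
    """Gauge for a given bore diameter in inches: n = 4.66 / d^3."""
    return 4.66 / diameter_in ** 3

def american_birdshot_diameter_in(shot_number):
    """American numbered birdshot: diameter in inches is (17 - number)/100."""
    return (17 - shot_number) / 100.0

if __name__ == "__main__":
    print(round(bore_diameter_in(12), 3))        # ~0.73 in (nominal .729) for 12 gauge
    print(round(gauge_from_diameter(0.410), 1))  # ~67.6, the ".410 bore"
    print(american_birdshot_diameter_in(2))      # 0.15 in for #2 shot
```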
Birdshot selection. For hunting, shot size must be chosen not only for the range, but also for the game. The shot must reach the target with enough energy to penetrate to a depth sufficient to kill the game. Lead shot is still the best ballistic performer, but environmental restrictions on the use of lead, especially with waterfowl, require steel, bismuth, or tungsten composites. Steel, being significantly less dense than lead, requires larger shot sizes, but is a good choice when lead is not legal and cost is a consideration. It is argued that steel shot cannot safely be used in some older shotguns without causing damage to either the bore or to the choke due to the hardness of steel shot. However, the increased pressure in most steel cartridges is a far greater problem, causing more strain to the breech of the gun. Since tungsten is very hard, it must also be used with care in older guns. Tungsten shot is often alloyed with nickel and iron, softening the base metal. That alloy is approximately 1/3 denser than lead, but far more expensive. Bismuth shot falls between steel and tungsten shot in both density and cost. The rule of thumb in converting appropriate steel shot is to go up by two numbers when switching from lead. However, there are different views on dense patterns versus higher pellet energies. Buckshot. Larger sizes of shot, large enough that they must be carefully packed into the cartridge rather than simply dumped or poured in, are called "buckshot" or just "buck". Buckshot is used for hunting medium to large game, as a tactical round for law enforcement and military personnel, and for personal self-defense. Buckshot size is most commonly designated by a series of numbers and letters, with smaller numbers indicating larger shot. Sizes larger than "0" are designated by multiple zeros. "00" (usually pronounced "double-aught" in North American English) is the most commonly sold size. The British system for designating buckshot size is based on the amount of shot per ounce. The sizes are LG (large grape – from grapeshot derived from musket shooting), MG (medium grape), and SG (small grape). For smaller game, SSG shot is half the weight of SG, SSSG shot is half the weight of SSG, SSSSG shot is half the weight of SSSG, and so on. The Australian system is similar, except that it has 00-SG, a small-game cartridge filled with 00 buckshot. Loads of 12-gauge 00 buckshot are commonly available in cartridges holding from 8 (eight) to 18 (eighteen) pellets in standard lengths (&lt;templatestyles src="Fraction/styles.css" /&gt;2+3⁄4 inches, 3 inches, and &lt;templatestyles src="Fraction/styles.css" /&gt;3+1⁄2). Reduced-recoil 00 buckshot is often used in tactical and self-defense rounds, minimizing shooter stress and improving the speed of follow-up shots. Specialist loads. Other rounds include: Spread and patterning. Most modern sporting shotguns have interchangeable choke tubes to allow the shooter to change the spread of shot that comes out of the gun. In some cases, it is not practical to do this; the gun might have fixed choke, or a shooter firing at receding targets may want to fire a wide pattern immediately followed by a narrower pattern out of a single barrelled shotgun. The spread of the shot can also be altered by changing the characteristics of the cartridge. Narrower patterns. A buffering material, such as granulated plastic, sawdust, or similar material can be mixed with the shot to fill the spaces between the individual pellets. 
When fired, the buffering material compresses and supports the shot, reducing the deformation the shot pellets experience under the extreme acceleration. Antimony-lead alloys, copper plated lead shot, steel, bismuth, and tungsten composite shot all have a hardness greater than that of plain lead shot, and will deform less as well. Reducing the deformation will result in tighter patterns, as the spherical pellets tend to fly straighter. One improvised method for achieving the same effect involves pouring molten wax or tar into the mass of shot. Another is a partial ring cut around the case intended to ensure that the shot comes out tightly bunched along with the portion of the case forward of the cut, creating a 'cut-shell'. This can be dangerous, as it is thought to cause higher chamber pressures—especially if part of the cartridge remains behind in the barrel and is not cleared before another shot is fired. Wider patterns. Shooting the softest possible shot will result in more shot deformation and a wider pattern. This is often the case with cheap ammunition, as the lead used will have minimal alloying elements such as antimony and be very soft. Spreader wads are wads that have a small plastic or paper insert in the middle of the shot cup, usually a cylinder or "X" cross-section. When the shot exits the barrel, the insert helps to push the shot out from the center, opening up the pattern. Often these result in inconsistent performance, though modern designs are doing much better than the traditional improvised solutions. Intentionally deformed shot (hammered into ellipsoidal shape) or cubical shot will also result in a wider pattern, much wider than spherical shot, with more consistency than spreader wads. Spreader wads and non-spherical shot are disallowed in some competitions. Hunting loads that use either spreaders or non-spherical shot are usually called "brush loads", and are favored for hunting in areas where dense cover keeps shot distances very short. Spread. Most shotgun cartridges contain multiple pellets in order to increase the likelihood of a target being hit. A shotgun's shot spread refers to the two-dimensional pattern that these projectiles (or shot) leave behind on a target. Another less important dimension of spread concerns the length of the in-flight shot string from the leading pellet to the trailing one. The use of multiple pellets is especially useful for hunting small game such as birds, rabbits, and other animals that fly or move quickly and can unpredictably change their direction of travel. However, some cartridges only contain one metal shot, known as a slug, for hunting large game such as deer. As the shot leaves the barrel upon firing, the three-dimensional shot string is close together. But as the shot moves farther away, the individual pellets increasingly spread out and disperse. Because of this, the effective range of a shotgun, when firing a multitude of shot, is limited to approximately . To control this effect, shooters may use a constriction within the barrel of a shotgun called a choke. The choke, whether selectable or fixed within a barrel, effectively reduces the diameter of the end of the barrel, forcing the shot even closer together as it leaves the barrel, thereby increasing the effective range. The tighter the choke, the narrower the end of the barrel. Consequently, the effective range of a shotgun is increased with a tighter choke, as the shot column is held tighter over longer ranges. 
Hunters or target shooters can install several types of chokes, on guns having selectable chokes, depending on the range at which their intended targets will be located. For fixed choke shotguns, different shotguns or barrels are often selected for the intended hunting application at hand. From tightest to loosest, the various choke sizes are: full choke, improved modified, modified, improved cylinder, skeet, and cylinder bore. A hunter who intends to hunt an animal such as rabbit or grouse knows that the animal will be encountered at a close range—usually within —and will be moving very quickly. So, an ideal choke would be a cylinder bore (the loosest) as the hunter wants the shot to spread out as quickly as possible. If this hunter were using a full choke (the tightest) at , the shot would be very close together and cause an unnecessarily large amount of damage to the rabbit, or, alternatively, a complete miss of the rabbit. This would waste virtually all of the meat for a hit, as the little amount of meat remaining would be overly-laden with shot and rendered inedible. By using a cylinder bore, this hunter would maximize the likelihood of a kill, and maximize the amount of edible meat. Contrarily, a hunter who intends to hunt geese knows that a goose will likely be approximately away, so that hunter would want to delay the spread of the shot as much as possible by using a full choke. By using a full choke for targets that are farther away, the shooter again maximizes the likelihood of a kill, and maximizes the amount of edible meat. This also maximizes the chances of a swift and humane kill as the target would be hit with enough shot to kill quickly instead of only wounding the animal. For older shotguns having only one fixed choke, intended primarily for equally likely use against rabbits, squirrels, quail, doves, and pheasant, an often-chosen choke is the improved cylinder, in a barrel, making the shotgun suitable for use as a general all-round hunting shotgun, without having excess weight. Shotguns having fixed chokes intended for geese, in contrast, are often found with full choke barrels, in longer lengths, and are much heavier, being intended for fixed use within a blind against distant targets. Defensive shotguns with fixed chokes generally have a cylinder bore choke. Likewise, shotguns intended primarily for use with slugs invariably also are found with a choke that is a cylinder bore. Dram equivalence. "Dram" equivalence is sometimes still used as a measure of the powder charge power in a cartridge. Today, it is an anachronistic equivalence that represents the equivalent power of a cartridge containing this equivalent amount of black-powder measured in drams avoirdupois. A dram in the avoirdupois system is the mass of &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄256 pound or &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄16 ounce or 27.3 grains. The reasoning behind this archaic equivalence is that when smokeless powder first came out, some method of establishing an equivalence with common loads was needed in order to sell a box of cartridges. For example, a cartridge containing a 3 or 3 1/2 dram load of black-powder was a common hunting field load, and a heavy full power load would have contained about a 4 to 4-1/2 dram load, whereas a cartridge containing only a 2 dram load of black-powder was a common target practice load. 
A hunter looking for a field or full power load familiar with black-powder shotgun loads would have known exactly what the equivalence of the cartridges would have been in the newly introduced smokeless powder. Today, however, this represents a poorly understood equivalence of the powder charge power in a cartridge. To further complicate matters, "dram" equivalence was only defined for 12 gauge cartridges, and only for lead shot, although it has often been used for describing other gauges of shells, and even steel shot loads. Furthermore, "dram" equivalence only came around about 15 years after smokeless powder had been introduced, long after the need for an equivalence had started to fade, and actual black-powder loaded shotshells had largely vanished. In practice, "dram" equivalence today most commonly equates just to a velocity rating equivalence in fps (feet-per-second), while assuming lead shot. A secondary impact of this equivalence was that common cartridges needed to stay the same size, physically, e.g., 2-1/2 or 2-3/4-inch shells, in order to be used in pre-existing shotguns when smokeless powder started being in the place of black-powder. As smokeless powder did not have to be loaded in the same volume as black-powder to achieve the same power, being more powerful, the volumes of wads had to increase, to fill the cartridge enough to permit proper crimps still to be made. Initially, this meant that increased numbers of over powder card wads had to be stacked to achieve the same stack-up length. Eventually, this also led to the introduction of one-piece plastic wads in the late 1950s through the early 1960s, to add additional wad volumes, in order to maintain the same overall cartridge length. Dram equivalence has no bearing on the reloading of cartridges with smokeless powder; loading a cartridge with an equivalent dram weight of smokeless powder would cause a shotgun to explode. It only has an equivalence in reloading with black powder. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "d_n = 1.67 / \\sqrt[3]{n} = \\sqrt[3]{4.66/n}" }, { "math_id": 1, "text": "n= (1.67 / d_n)^{3} = 4.66 / (d_n)^{3}" } ]
https://en.wikipedia.org/wiki?curid=1044856
1045012
Nonlinear regression
Regression analysis In statistics, nonlinear regression is a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations (iterations). General. In nonlinear regression, a statistical model of the form, formula_0 relates a vector of independent variables, formula_1, and its associated observed dependent variables, formula_2. The function formula_3 is nonlinear in the components of the vector of parameters formula_4, but otherwise arbitrary. For example, the Michaelis–Menten model for enzyme kinetics has two parameters and one independent variable, related by formula_3 by: formula_5 This function, which is a rectangular hyperbola, is "nonlinear" because it cannot be expressed as a linear combination of the two "formula_4"s. Systematic error may be present in the independent variables but its treatment is outside the scope of regression analysis. If the independent variables are not error-free, this is an errors-in-variables model, also outside this scope. Other examples of nonlinear functions include exponential functions, logarithmic functions, trigonometric functions, power functions, Gaussian function, and Lorentz distributions. Some functions, such as the exponential or logarithmic functions, can be transformed so that they are linear. When so transformed, standard linear regression can be performed but must be applied with caution. See Linearization§Transformation, below, for more details. In general, there is no closed-form expression for the best-fitting parameters, as there is in linear regression. Usually numerical optimization algorithms are applied to determine the best-fitting parameters. Again in contrast to linear regression, there may be many local minima of the function to be optimized and even the global minimum may produce a biased estimate. In practice, estimated values of the parameters are used, in conjunction with the optimization algorithm, to attempt to find the global minimum of a sum of squares. For details concerning nonlinear data modeling see least squares and non-linear least squares. Regression statistics. The assumption underlying this procedure is that the model can be approximated by a linear function, namely a first-order Taylor series: formula_6 where formula_7 are Jacobian matrix elements. It follows from this that the least squares estimators are given by formula_8 compare generalized least squares with covariance matrix proportional to the unit matrix. The nonlinear regression statistics are computed and used as in linear regression statistics, but using J in place of X in the formulas. When the function formula_9 itself is not known analytically, but needs to be linearly approximated from formula_10, or more, known values (where formula_11 is the number of estimators), the best estimator is obtained directly from the Linear Template Fit as formula_12 (see also linear least squares). The linear approximation introduces bias into the statistics. Therefore, more caution than usual is required in interpreting statistics derived from a nonlinear model. Ordinary and weighted least squares. The best-fit curve is often assumed to be that which minimizes the sum of squared residuals. This is the ordinary least squares (OLS) approach. 
However, in cases where the dependent variable does not have constant variance, or there are some outliers, a sum of weighted squared residuals may be minimized; see weighted least squares. Each weight should ideally be equal to the reciprocal of the variance of the observation, or the reciprocal of the dependent variable to some power in the outlier case , but weights may be recomputed on each iteration, in an iteratively weighted least squares algorithm. Linearization. Transformation. Some nonlinear regression problems can be moved to a linear domain by a suitable transformation of the model formulation. For example, consider the nonlinear regression problem formula_13 with parameters "a" and "b" and with multiplicative error term "U". If we take the logarithm of both sides, this becomes formula_14 where "u" = ln("U"), suggesting estimation of the unknown parameters by a linear regression of ln("y") on "x", a computation that does not require iterative optimization. However, use of a nonlinear transformation requires caution. The influences of the data values will change, as will the error structure of the model and the interpretation of any inferential results. These may not be desired effects. On the other hand, depending on what the largest source of error is, a nonlinear transformation may distribute the errors in a Gaussian fashion, so the choice to perform a nonlinear transformation must be informed by modeling considerations. For Michaelis–Menten kinetics, the linear Lineweaver–Burk plot formula_15 of 1/"v" against 1/["S"] has been much used. However, since it is very sensitive to data error and is strongly biased toward fitting the data in a particular range of the independent variable, ["S"], its use is strongly discouraged. For error distributions that belong to the exponential family, a link function may be used to transform the parameters under the Generalized linear model framework. Segmentation. The "independent" or "explanatory variable" (say X) can be split up into classes or segments and linear regression can be performed per segment. Segmented regression with confidence analysis may yield the result that the "dependent" or "response" variable (say Y) behaves differently in the various segments. The figure shows that the soil salinity (X) initially exerts no influence on the crop yield (Y) of mustard, until a "critical" or "threshold" value ("breakpoint"), after which the yield is affected negatively. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
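As an illustration of the transformation approach described in the Linearization section, the sketch below fits y = a·exp(b·x)·U by ordinary least squares on the log scale. The data values are invented, and the fit is only appropriate under the multiplicative (log-scale) error assumption discussed above.
<syntaxhighlight lang="python">
import numpy as np

# Data assumed to follow y = a * exp(b * x) * U with multiplicative error U,
# so ln(y) = ln(a) + b*x + u can be fitted by ordinary linear least squares.
# The values are invented for illustration.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.4, 6.1, 10.2, 18.0, 29.5])

b_hat, ln_a_hat = np.polyfit(x, np.log(y), 1)   # slope, intercept
a_hat = np.exp(ln_a_hat)
print(f"a ~ {a_hat:.3f}, b ~ {b_hat:.3f}")
</syntaxhighlight>
Note that this minimizes squared errors on the log scale; if the error is in fact additive on the original scale, a direct nonlinear fit is the more appropriate choice, as cautioned above.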
[ { "math_id": 0, "text": " \\mathbf{y} \\sim f(\\mathbf{x}, \\boldsymbol\\beta)" }, { "math_id": 1, "text": "\\mathbf{x}" }, { "math_id": 2, "text": "\\mathbf{y}" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "\\beta" }, { "math_id": 5, "text": " f(x,\\boldsymbol\\beta)= \\frac{\\beta_1 x}{\\beta_2 + x} " }, { "math_id": 6, "text": " f(x_i,\\boldsymbol\\beta) \\approx f(x_i,0) + \\sum_j J_{ij} \\beta_j " }, { "math_id": 7, "text": "J_{ij} = \\frac{\\partial f(x_i,\\boldsymbol\\beta)}{\\partial \\beta_j}" }, { "math_id": 8, "text": "\\hat{\\boldsymbol{\\beta}} \\approx \\mathbf { (J^TJ)^{-1}J^Ty}," }, { "math_id": 9, "text": "f(x_i,\\boldsymbol\\beta)" }, { "math_id": 10, "text": "n+1" }, { "math_id": 11, "text": "n" }, { "math_id": 12, "text": " \\hat{\\boldsymbol\\beta} = ((\\mathbf{Y\\tilde{M}})^\\mathsf{T} \\boldsymbol\\Omega^{-1} \\mathbf{Y\\tilde{M}})^{-1}(\\mathbf{Y\\tilde{M}})^\\mathsf{T}\\boldsymbol\\Omega^{-1}(\\mathbf{d}-\\mathbf{Y\\bar{m})}" }, { "math_id": 13, "text": " y = a e^{b x}U \\,\\!" }, { "math_id": 14, "text": " \\ln{(y)} = \\ln{(a)} + b x + u, \\,\\!" }, { "math_id": 15, "text": " \\frac{1}{v} = \\frac{1}{V_\\max} + \\frac{K_m}{V_{\\max}[S]}" } ]
https://en.wikipedia.org/wiki?curid=1045012
1045015
Storage tube
CRTs designed for use as computer memory Storage tubes are a class of cathode-ray tubes (CRTs) that are designed to hold an image for a long period of time, typically as long as power is supplied to the tube. A specialized type of storage tube, the Williams tube, was used as a main memory system on a number of early computers, from the late 1940s into the early 1950s. They were replaced with other technologies, notably core memory, starting in the 1950s. In a new form, the bistable tube, storage tubes made a comeback in the 1960s and 1970s for use in computer graphics, most notably the Tektronix 4010 series. Today they are obsolete, their functions provided by low-cost memory devices and liquid crystal displays. Operation. Background. A conventional CRT consists of an electron gun at the back of the tube that is aimed at a thin layer of phosphor at the front of the tube. Depending on the role, the beam of electrons emitted by the gun is steered around the display using magnetic (television) or electrostatic (oscilloscope) means. When the electrons strike the phosphor, the phosphor "lights up" at that location for a time, and then fades away. The length of time the spot remains is a function of the phosphor chemistry. At very low energies, electrons from the gun will strike the phosphor and nothing will happen. As the energy is increased, it will reach a critical point, formula_0, that will activate the phosphor and cause it to give off light. As the voltage increases beyond Vcr1 the brightness of the spot will increase. This allows the CRT to display images with varying intensity, like a television image. Above Vcr1 another effect also starts, secondary emission. When any insulating material is struck by electrons over a certain critical energy, electrons within the material are forced out of it through collisions, increasing the number of free electrons. This effect is used in electron multipliers as found in night vision systems and similar devices. In the case of a CRT this effect is generally undesirable; the new electrons generally fall back to the display and cause the surrounding phosphor to light up, which appears as a lowering of the focus of the image. The rate of secondary emission is also a function of the electron beam energy, but follows a different rate curve. As the electron energy is increased, the rate increases until it reaches a critical threshold, Vcr2 when the number of secondary emissions is greater than the number supplied by the gun. In this case the localized image rapidly fades as energy leaving the display through secondary electrons is greater than the rate it is being supplied by the gun. In any CRT, images are displayed by striking the screen with electron energies between these two values, Vcr1 and Vcr2. Below Vcr1 no image is formed, and above Vcr2 any image rapidly fades. Another side effect, initially a curiosity, is that electrons will stick to the phosphor in lit up areas. As the light emission fades, these electrons are likewise released back into the tube. The charge is generally far too small to have a visual effect, and was generally ignored in the case of displays. Storage. These two effects were both utilized in the construction of a storage tube. Storage was accomplished by striking any suitably long-lived phosphor with electrons with energies just above Vcr1, and erased by striking them with electrons above Vcr2. 
There were any number of varieties of mechanical layouts used to improve focus or cause the image to be refreshed either internally to the tube or through off board storage. The easiest example to understand are the early computer memory systems as typified by the Williams tube. These consisted of World War II surplus radar display CRTs connected to a computer. The X and Y deflection plates were connected to amplifiers that converted memory locations into X and Y positions on the screen. To write a value to memory, the address was amplified and sent to the Y deflection plates, such that the beam would be fixed to a horizontal line on the screen. A time base generator then set the X deflection plate to increasing voltages, causing the beam to be scanned across the selected line. In this respect, it is similar to a conventional television scanning a single line. The gun was set to a default energy close to Vcr1, and the bits from the computer fed to the gun to modulate the voltage up and down such that 0's would be below Vcr1 and 1's above it. By the time the beam reached the other side of the line, a pattern of short dashes was drawn for each 1, while 0's were empty locations. To read the values back out, the deflections plates were set to the same values, but the gun energy set to a value above Vcr2. As the beam scanned the line, the phosphor was pushed well beyond the secondary emission threshold. If the beam was located over a blank area, a certain number of electrons would be released, but if it was over a lit area, the number would be increased by the number of electrons previously stuck to that area. In the Williams tube, these values were read by measuring the capacitance of a metal plate just in front of the display side of the tube. Electrons leaving the front of the CRT hit the plate and changed its charge. As the reading process also erased any stored values, the signal had to be regenerated through associated circuitry. A CRT with two electron guns, one for reading and one for writing, made this process trivial. Imaging systems. The earliest computer graphics systems, like those of the TX-2 and DEC PDP-1, required the entire attention of the computer to maintain. A list of points stored in main memory was periodically read out to the display to refresh it before the image faded. This generally occurred frequently enough that there was little time to do anything else, and interactive systems like "Spacewar!" were tour-de-force programming efforts. For practical use, graphical displays were developed that contained their own memory and an associated very simple computer which offloaded the refreshing task from the mainframe. This was not inexpensive; the IBM 2250 graphics terminal used with the IBM S/360 cost $280,000 in 1970. A storage tube could replace most or all of the localized hardware by storing the vectors directly within the display, instead of an associated local computer. Commands that previously caused the terminal to erase its memory and thus clear the display could be emulated by scanning the entire screen at an energy above Vcr2. In most systems, this caused the entire screen to quickly "flash" before clearing to a blank state. The two main advantages were: Generally speaking, storage tubes could be divided into two categories. In the more common category, they were only capable of storing "binary" images; any given point on the screen was either illuminated or dark. The Tektronix Direct-View Bistable Storage Tube was perhaps the best example in this category. 
Other storage tubes were able to store greyscale/halftoned images; the tradeoff was usually a much-reduced storage time. Some pioneering storage tube displays were MIT Project MAC's ARDS (Advanced Remote Display Station), the Computek 400 Series Display terminals (a commercial derivative), which both used a Tektronix type 611 storage display unit, and Tektronix's 4014 terminal, the latter becoming a de facto computer terminal standard some time after its introduction (later being emulated by other systems due to this status). The first generalized computer assisted instruction system, PLATO I, c. 1960 on ILLIAC I, used a storage tube as its computer graphics display. PLATO II and PLATO III also used storage tubes as displays. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V_{cr1}" } ]
https://en.wikipedia.org/wiki?curid=1045015
10450522
Nyström method
In numerical analysis, a branch of mathematics, the Nyström method or quadrature method seeks the numerical solution of an integral equation by replacing the integral with a representative weighted sum. The continuous problem is broken into formula_0 discrete intervals; quadrature or numerical integration determines the weights and locations of representative points for the integral. The problem becomes a system of linear equations with formula_0 equations and formula_0 unknowns, and the underlying function is implicitly represented by an interpolation using the chosen quadrature rule. This discrete problem may be ill-conditioned, depending on the original problem and the chosen quadrature rule. Since the linear equations require formula_1 operations to solve, high-order quadrature rules perform better because low-order quadrature rules require large formula_0 for a given accuracy. Gaussian quadrature is normally a good choice for smooth, non-singular problems. Discretization of the integral. Standard quadrature methods seek to represent an integral as a weighted sum in the following manner: formula_2 where formula_3 are the weights of the quadrature rule, and the points formula_4 are the abscissas. Example. Applying this discretization to the inhomogeneous Fredholm equation of the second kind, formula_5, results in formula_6. References. <templatestyles src="Reflist/styles.css" />
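As a concrete sketch of the discretization above, the following code solves the equation with λ = 1 and the illustrative choices K(x, x′) = x·x′ and u(x) = x on [0, 1] (these choices are assumptions of the sketch, not taken from the article), using Gauss–Legendre quadrature. For this separable kernel the exact solution is f(x) = 3x/4, so the Nyström solution can be checked directly at the nodes.
<syntaxhighlight lang="python">
import numpy as np

# Nystrom sketch for  f(x) = u(x) - (integral over [0,1] of K(x, x') f(x') dx')
# with lambda = 1, K(x, x') = x*x', u(x) = x.  Exact solution: f(x) = 0.75 * x.

n = 8
t, w = np.polynomial.legendre.leggauss(n)   # Gauss-Legendre nodes/weights on [-1, 1]
x = 0.5 * (t + 1.0)                         # map nodes to [0, 1]
w = 0.5 * w                                 # rescale weights accordingly

K = np.outer(x, x)                          # K(x_i, x_k)
u = x.copy()                                # u(x_i)

# Discretised system:  f_i + sum_k w_k K(x_i, x_k) f_k = u_i
A = np.eye(n) + K * w                       # broadcasting scales column k by w_k
f = np.linalg.solve(A, u)

print(np.max(np.abs(f - 0.75 * x)))         # ~1e-16: agrees with the exact solution
</syntaxhighlight>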
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "O(n^3)" }, { "math_id": 2, "text": "\\int_a^b h (x) \\;\\mathrm d x \\approx \\sum_{k=1}^n w_k h (x_k)" }, { "math_id": 3, "text": "w_k" }, { "math_id": 4, "text": "x_k" }, { "math_id": 5, "text": "f (x) = \\lambda u (x) - \\int_a^b K (x, x') f (x') \\;\\mathrm d x'" }, { "math_id": 6, "text": "f (x) \\approx \\lambda u (x) - \\sum_{k=1}^n w_k K (x, x_k) f (x_k)" } ]
https://en.wikipedia.org/wiki?curid=10450522
10451852
Nahm equations
In differential geometry and gauge theory, the Nahm equations are a system of ordinary differential equations introduced by Werner Nahm in the context of the "Nahm transform" – an alternative to Ward's twistor construction of monopoles. The Nahm equations are formally analogous to the algebraic equations in the ADHM construction of instantons, where finite order matrices are replaced by differential operators. Deep study of the Nahm equations was carried out by Nigel Hitchin and Simon Donaldson. Conceptually, the equations arise in the process of infinite-dimensional hyperkähler reduction. They can also be viewed as a dimensional reduction of the anti-self-dual Yang-Mills equations. Among their many applications we can mention: Hitchin's construction of monopoles, where this approach is critical for establishing nonsingularity of monopole solutions; Donaldson's description of the moduli space of monopoles; and the existence of hyperkähler structure on coadjoint orbits of complex semisimple Lie groups, established in several independent works. Equations. Let formula_0 be three matrix-valued meromorphic functions of a complex variable formula_1. The Nahm equations are a system of matrix differential equations formula_2 together with certain analyticity properties, reality conditions, and boundary conditions. The three equations can be written concisely using the Levi-Civita symbol, in the form formula_3 More generally, instead of considering formula_4 by formula_4 matrices, one can consider Nahm's equations with values in a Lie algebra formula_5. Additional conditions. The variable formula_1 is restricted to the open interval formula_6, and the following conditions are imposed: (1) formula_7 (2) formula_8 (3) formula_9 can be continued to a meromorphic function of formula_1 in a neighborhood of the closed interval formula_10, analytic outside of formula_11 and formula_12, and with simple poles at formula_13 and formula_14; and (4) at the poles, the residues of formula_15 form an irreducible representation of the group formula_17. Nahm–Hitchin description of monopoles. There is a natural equivalence between the monopoles of charge formula_16 for the group formula_17, modulo gauge transformations, and the solutions of Nahm equations satisfying the additional conditions above, modulo the orthogonal group formula_18. Lax representation. The Nahm equations can be written in the Lax form as follows. Set formula_19 then the system of Nahm equations is equivalent to the Lax equation formula_20 As an immediate corollary, we obtain that the spectrum of the matrix formula_21 does not depend on formula_1. Therefore, the characteristic equation formula_22 which determines the so-called spectral curve in the twistor space formula_23 is invariant under the flow in formula_1.
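The Lax formulation lends itself to a quick numerical illustration. The sketch below (an illustrative computation, not part of the standard treatment) integrates the Nahm equations for randomly generated anti-Hermitian 2×2 initial data with a fixed-step Runge–Kutta scheme and checks that the eigenvalues of A(ζ) stay constant along the flow, as the Lax equation implies; the matrix size, step size, integration length and the value of ζ are arbitrary assumptions of the sketch.
<syntaxhighlight lang="python">
import numpy as np

# Integrate dT1/dz = [T2, T3], dT2/dz = [T3, T1], dT3/dz = [T1, T2] with RK4
# and verify that the spectrum of A(zeta) = A0 + zeta*A1 + zeta^2*A2 is conserved.

def comm(a, b):
    return a @ b - b @ a

def rhs(T):
    T1, T2, T3 = T
    return np.array([comm(T2, T3), comm(T3, T1), comm(T1, T2)])

def rk4_step(T, h):
    k1 = rhs(T)
    k2 = rhs(T + 0.5 * h * k1)
    k3 = rhs(T + 0.5 * h * k2)
    k4 = rhs(T + h * k3)
    return T + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def lax_A(T, zeta):
    T1, T2, T3 = T
    A0, A1, A2 = T1 + 1j * T2, -2j * T3, T1 - 1j * T2
    return A0 + zeta * A1 + zeta**2 * A2

rng = np.random.default_rng(0)

def anti_hermitian(scale=0.3):
    M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    return scale * (M - M.conj().T)      # the flow preserves anti-Hermiticity

T = np.array([anti_hermitian() for _ in range(3)])
zeta = 0.3 + 0.7j
spec0 = np.sort_complex(np.linalg.eigvals(lax_A(T, zeta)))

h = 1e-3
for _ in range(200):                      # integrate from z = 0 to z = 0.2
    T = rk4_step(T, h)

spec1 = np.sort_complex(np.linalg.eigvals(lax_A(T, zeta)))
print(np.max(np.abs(spec1 - spec0)))      # small: limited only by the RK4 error
</syntaxhighlight>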
[ { "math_id": 0, "text": "T_1(z), T_2(z), T_3(z)" }, { "math_id": 1, "text": "z" }, { "math_id": 2, "text": "\n\\begin{align}\n\\frac{dT_1}{dz}&=[T_2,T_3]\\\\[3pt]\n\\frac{dT_2}{dz}&=[T_3,T_1]\\\\[3pt]\n\\frac{dT_3}{dz}&=[T_1,T_2],\n\\end{align}\n" }, { "math_id": 3, "text": "\\frac{dT_i}{dz}=\\frac{1}{2}\\sum_{j,k}\\epsilon_{ijk}[T_j,T_k]=\\sum_{j,k}\\epsilon_{ijk}T_j T_k. " }, { "math_id": 4, "text": "N" }, { "math_id": 5, "text": "g" }, { "math_id": 6, "text": "(0,2)" }, { "math_id": 7, "text": "T^*_i = -T_i;" }, { "math_id": 8, "text": "T_i(2-z)=T_i(z)^{T};\\," }, { "math_id": 9, "text": "T_iN" }, { "math_id": 10, "text": "[0,2]" }, { "math_id": 11, "text": "0" }, { "math_id": 12, "text": "2" }, { "math_id": 13, "text": "z = 0" }, { "math_id": 14, "text": "z = 2" }, { "math_id": 15, "text": "T_1, T_2, T_3" }, { "math_id": 16, "text": "K" }, { "math_id": 17, "text": "SU(2)" }, { "math_id": 18, "text": "O(k,R)" }, { "math_id": 19, "text": " \n\\begin{align}\n& A_0=T_1+iT_2, \\quad A_1=-2i T_3, \\quad A_2=T_1-iT_2 \\\\[3 pt]\n& A(\\zeta)=A_0+\\zeta A_1+\\zeta^2 A_2, \\quad B(\\zeta)=\\frac{1}{2}\\frac{dA}{d\\zeta}=\\frac{1}{2}A_1+\\zeta A_2, \n\\end{align}\n" }, { "math_id": 20, "text": " \\frac{dA}{dz}=[A,B]. " }, { "math_id": 21, "text": "A" }, { "math_id": 22, "text": " \\det(\\lambda I+A(\\zeta,z))=0, " }, { "math_id": 23, "text": "TP^1" } ]
https://en.wikipedia.org/wiki?curid=10451852
10452186
Inverse problem for Lagrangian mechanics
In mathematics, the inverse problem for Lagrangian mechanics is the problem of determining whether a given system of ordinary differential equations can arise as the Euler–Lagrange equations for some Lagrangian function. There has been a great deal of activity in the study of this problem since the early 20th century. A notable advance in this field was a 1941 paper by the American mathematician Jesse Douglas, in which he provided necessary and sufficient conditions for the problem to have a solution; these conditions are now known as the Helmholtz conditions, after the German physicist Hermann von Helmholtz. Background and statement of the problem. The usual set-up of Lagrangian mechanics on "n"-dimensional Euclidean space R"n" is as follows. Consider a differentiable path "u" : [0, "T"] → R"n". The action of the path "u", denoted "S"("u"), is given by formula_0 where "L" is a function of time, position and velocity known as the Lagrangian. The principle of least action states that, given an initial state "x"0 and a final state "x"1 in R"n", the trajectory that the system determined by "L" will actually follow must be a minimizer of the action functional "S" satisfying the boundary conditions "u"(0) = "x"0, "u"(T) = "x"1. Furthermore, the critical points (and hence minimizers) of "S" must satisfy the Euler–Lagrange equations for "S": formula_1 where the upper indices "i" denote the components of "u" = ("u"1, ..., "u""n"). In the classical case formula_2 formula_3 formula_4 the Euler–Lagrange equations are the second-order ordinary differential equations better known as Newton's laws of motion: formula_5 formula_6 The inverse problem of Lagrangian mechanics is as follows: given a system of second-order ordinary differential equations formula_7 that holds for times 0 ≤ "t" ≤ "T", does there exist a Lagrangian "L" : [0, "T"] × R"n" × R"n" → R for which these ordinary differential equations (E) are the Euler–Lagrange equations? In general, this problem is posed not on Euclidean space R"n", but on an "n"-dimensional manifold "M", and the Lagrangian is a function "L" : [0, "T"] × T"M" → R, where T"M" denotes the tangent bundle of "M". Douglas' theorem and the Helmholtz conditions. To simplify the notation, let formula_8 and define a collection of "n"2 functions Φ"j""i" by formula_9 Theorem. (Douglas 1941) There exists a Lagrangian "L" : [0, "T"] × T"M" → R such that the equations (E) are its Euler–Lagrange equations if and only if there exists a non-singular symmetric matrix "g" with entries "g""ij" depending on both "u" and "v" satisfying the following three Helmholtz conditions: formula_10 formula_11 formula_12 Applying Douglas' theorem. At first glance, solving the Helmholtz equations (H1)–(H3) seems to be an extremely difficult task. Condition (H1) is the easiest to solve: it is always possible to find a "g" that satisfies (H1), and it alone will not imply that the Lagrangian is singular. Equation (H2) is a system of ordinary differential equations: the usual theorems on the existence and uniqueness of solutions to ordinary differential equations imply that it is, "in principle", possible to solve (H2). Integration does not yield additional constants but instead first integrals of the system (E), so this step becomes difficult "in practice" unless (E) has enough explicit first integrals. In certain well-behaved cases (e.g. the geodesic flow for the canonical connection on a Lie group), this condition is satisfied. 
The final and most difficult step is to solve equation (H3), called the "closure conditions" since (H3) is the condition that the differential 1-form "g""i" is a closed form for each "i". The reason why this is so daunting is that (H3) constitutes a large system of coupled partial differential equations: for "n" degrees of freedom, (H3) constitutes a system of formula_13 partial differential equations in the 2"n" independent variables that are the components "g""ij" of "g", where formula_14 denotes the binomial coefficient. In order to construct the most general possible Lagrangian, one must solve this huge system! Fortunately, there are some auxiliary conditions that can be imposed in order to help in solving the Helmholtz conditions. First, (H1) is a purely algebraic condition on the unknown matrix "g". Auxiliary algebraic conditions on "g" can be given as follows: define functions Ψ"jk""i" by formula_15 The auxiliary condition on "g" is then formula_16 In fact, the equations (H2) and (A) are just the first in an infinite hierarchy of similar algebraic conditions. In the case of a parallel connection (such as the canonical connection on a Lie group), the higher order conditions are always satisfied, so only (H2) and (A) are of interest. Note that (A) comprises formula_17 conditions whereas (H1) comprises formula_18 conditions. Thus, it is possible that (H1) and (A) together imply that the Lagrangian function is singular. As of 2006, there is no general theorem to circumvent this difficulty in arbitrary dimension, although certain special cases have been resolved. A second avenue of attack is to see whether the system (E) admits a submersion onto a lower-dimensional system and to try to "lift" a Lagrangian for the lower-dimensional system up to the higher-dimensional one. This is not really an attempt to solve the Helmholtz conditions so much as it is an attempt to construct a Lagrangian and then show that its Euler–Lagrange equations are indeed the system (E).
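A minimal symbolic sketch of that final verification step, for a single degree of freedom, is given below: the damped oscillator u″ = −γu′ − ω²u is not of the classical form T − V, but it does admit the well-known explicitly time-dependent Lagrangian L = e^{γt}(m u′²/2 − m ω² u²/2), and the code checks that the Euler–Lagrange equation of this candidate Lagrangian reproduces the given ODE. The example system and the SymPy workflow are illustrative choices, not taken from the article.
<syntaxhighlight lang="python">
import sympy as sp

# Check that a candidate Lagrangian has the given ODE as its Euler-Lagrange
# equation (single degree of freedom, damped oscillator example).

t = sp.symbols('t')
gamma, omega, m = sp.symbols('gamma omega m', positive=True)
u = sp.Function('u')
q, qd = sp.symbols('q qdot')                     # stand-ins for u and u'

L = sp.exp(gamma * t) * (m * qd**2 / 2 - m * omega**2 * q**2 / 2)

on_path = {q: u(t), qd: sp.diff(u(t), t)}        # evaluate along a path u(t)
dL_dqd = sp.diff(L, qd).subs(on_path)
dL_dq = sp.diff(L, q).subs(on_path)
EL = sp.diff(dL_dqd, t) - dL_dq                  # d/dt(dL/du') - dL/du

ode = sp.diff(u(t), t, 2) + gamma * sp.diff(u(t), t) + omega**2 * u(t)
print(sp.simplify(EL - m * sp.exp(gamma * t) * ode))   # prints 0
</syntaxhighlight>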
[ { "math_id": 0, "text": "S(u) = \\int_{0}^{T} L(t, u(t), \\dot{u}(t)) \\, \\mathrm{d} t," }, { "math_id": 1, "text": "\\frac{\\mathrm{d}}{\\mathrm{d} t} \\frac{\\partial L}{\\partial \\dot{u}^{i}} - \\frac{\\partial L}{\\partial u^{i}} = 0 \\quad \\text{for } 1 \\leq i \\leq n," }, { "math_id": 2, "text": "T(\\dot{u}) = \\frac{1}{2} m | \\dot{u} |^{2}," }, { "math_id": 3, "text": "V : [0, T] \\times \\mathbb{R}^{n} \\to \\mathbb{R}," }, { "math_id": 4, "text": "L(t, u, \\dot{u}) = T(\\dot{u}) - V(t, u)," }, { "math_id": 5, "text": "m \\ddot{u}^{i} = - \\frac{\\partial V(t, u)}{\\partial u^{i}} \\quad \\text{for } 1 \\leq i \\leq n," }, { "math_id": 6, "text": "\\mbox{i.e. }m \\ddot{u} = - \\nabla_{u} V(t, u)." }, { "math_id": 7, "text": "\\ddot{u}^{i} = f^{i} (u^{j}, \\dot{u}^{j}) \\quad \\text{for } 1 \\leq i, j \\leq n, \\quad \\mbox{(E)}" }, { "math_id": 8, "text": "v^{i} = \\dot{u}^{i}" }, { "math_id": 9, "text": "\\Phi_{j}^{i} = \\frac{1}{2} \\frac{\\mathrm{d}}{\\mathrm{d} t} \\frac{\\partial f^{i}}{\\partial v^{j}} - \\frac{\\partial f^{i}}{\\partial u^{j}} - \\frac{1}{4} \\frac{\\partial f^{i}}{\\partial v^{k}} \\frac{\\partial f^{k}}{\\partial v^{j}}." }, { "math_id": 10, "text": "g \\Phi = (g \\Phi)^{\\top}, \\quad \\mbox{(H1)}" }, { "math_id": 11, "text": "\\frac{\\mathrm{d} g_{ij}}{\\mathrm{d} t} + \\frac{1}{2} \\frac{\\partial f^{k}}{\\partial v^{i}} g_{kj} + \\frac{1}{2} \\frac{\\partial f^{k}}{\\partial v^{j}} g_{ki} = 0 \\mbox{ for } 1 \\leq i, j \\leq n, \\quad \\mbox{(H2)}" }, { "math_id": 12, "text": "\\frac{\\partial g_{ij}}{\\partial v^{k}} = \\frac{\\partial g_{ik}}{\\partial v^{j}} \\mbox{ for } 1 \\leq i, j, k \\leq n. \\quad \\mbox{(H3)}" }, { "math_id": 13, "text": "2 \\left( \\begin{matrix} n + 1 \\\\ 3 \\end{matrix} \\right)" }, { "math_id": 14, "text": "\\left( \\begin{matrix} n \\\\ k \\end{matrix} \\right)" }, { "math_id": 15, "text": "\\Psi_{jk}^{i} = \\frac{1}{3} \\left( \\frac{\\partial \\Phi_{j}^{i}}{\\partial v^{k}} - \\frac{\\partial \\Phi_{k}^{i}}{\\partial v^{j}} \\right)." }, { "math_id": 16, "text": "g_{mi} \\Psi_{jk}^{m} + g_{mk} \\Psi_{ij}^{m} + g_{mj} \\Psi_{ki}^{m} = 0 \\mbox{ for } 1 \\leq i, j \\leq n. \\quad \\mbox{(A)}" }, { "math_id": 17, "text": "\\left( \\begin{matrix} n \\\\ 3 \\end{matrix} \\right)" }, { "math_id": 18, "text": "\\left( \\begin{matrix} n \\\\ 2 \\end{matrix} \\right)" } ]
https://en.wikipedia.org/wiki?curid=10452186
1045233
Ethoxylation
Chemical reaction between ethylene oxide and substrate In organic chemistry, ethoxylation is a chemical reaction in which ethylene oxide () adds to a substrate. It is the most widely practiced alkoxylation, which involves the addition of epoxides to substrates. In the usual application, alcohols and phenols are converted into , where "n" ranges from 1 to 10. Such compounds are called alcohol ethoxylates. Alcohol ethoxylates are often converted to related species called ethoxysulfates. Alcohol ethoxylates and ethoxysulfates are surfactants, used widely in cosmetic and other commercial products. The process is of great industrial significance, with more than 2,000,000 metric tons of various ethoxylates produced worldwide in 1994. Production. The process was developed at the Ludwigshafen laboratories of IG Farben by Conrad Schöller and Max Wittwer during the 1930s. Alcohol ethoxylates. Industrial ethoxylation is primarily performed upon alcohols. Lower alcohols react to give glycol ethers which are commonly used as solvents, while longer fatty alcohols are converted to fatty alcohol ethoxylates (FAE's), which are a common form of nonionic surfactant. The reaction typically proceeds by blowing ethylene oxide through the alcohol at 180 °C and under 1-2 bar of pressure, with potassium hydroxide (KOH) serving as a catalyst. The process is highly exothermic (Δ"H" = -92 kJ/mol of ethylene oxide reacted) and requires careful control to avoid a potentially disastrous thermal runaway. formula_0 The starting materials are usually primary alcohols as they tend to react 10–30× faster than secondary alcohols do. Typically 5-10 units of ethylene oxide are added to each alcohol, however ethoxylated alcohols can be more prone to ethoxylation than the starting alcohol, making the reaction difficult to control and leading to the formation of a product with varying repeat unit length (the value of "n" in the equation above). Better control can be afforded by the use of more sophisticated catalysts, which can be used to generate narrow-range ethoxylates. Ethoxylated alcohols are considered to be a high production volume (HPV) chemical by the US EPA. Ethoxylation/propoxylation. Ethoxylation is sometimes combined with propoxylation, the analogous reaction using propylene oxide as the monomer. Both reactions are normally performed in the same reactor and may be run simultaneously to give a random polymer, or in alternation to obtain block copolymers such as poloxamers. Propylene oxide is more hydrophobic than ethylene oxide and its inclusion at low levels can significantly affect the properties of the surfactant. In particular ethoxylated fatty alcohols which have been 'capped' with ~1 propylene oxide unit are extensively marketed as defoamers. Ethoxysulfates. Ethoxylated fatty alcohols are often converted to the corresponding organosulfates, which can be easily deprotonated to give anionic surfactants such as sodium laureth sulfate. Being salts, ethoxysulfates exhibit good water solubility (high HLB value). The conversion is achieved by treating ethoxylated alcohols with sulfur trioxide. Laboratory scale synthesis may be performed using chlorosulfuric acid: formula_1 The resulting sulfate esters are neutralized to give the salt: formula_2 Small volumes are neutralized with alkanolamines such as triethanolamine (TEA). In 2008, 381,000 metric tons of alcohol ethoxysulfates were consumed in North America. Other materials. 
Although alcohols are by far the major substrate for ethoxylation, many nucleophiles are reactive toward ethylene oxide. Primary amines will react to give di-chain materials such as polyethoxylated tallow amine. The reaction of ammonia produces important bulk chemicals such as ethanolamine, diethanolamine, and triethanolamine. Applications of ethoxylated products. Alcohol ethoxylates (AE) and alcohol ethoxysulfates (AES) are surfactants found in products such as laundry detergents, surface cleaners, cosmetics, agricultural products, textiles, and paint. Alcohol ethoxylates. As alcohol ethoxylate based surfactants are non-ionic they typically require longer ethoxylate chains than their sulfonated analogues in order to be water-soluble. Examples synthesized on an industrial scale include octyl phenol ethoxylate, polysorbate 80 and poloxamers. Ethoxylation is commonly practiced, albeit on a much smaller scale, in the biotechnology and pharmaceutical industries to increase water solubility and, in the case of pharmaceuticals, circulatory half-life of non-polar organic compounds. In this application, ethoxylation is known as "PEGylation" (polyethylene oxide is synonymous with polyethylene glycol, abbreviated as PEG). Carbon chain length is 8-18 while the ethoxylated chain is usually 3 to 12 ethylene oxides long in home products. They feature both lipophilic tails, indicated by the alkyl group abbreviation, R, and relatively polar headgroups, represented by the formula . Alcohol ethoxysulfates. AES found in consumer products generally are linear alcohols, which could be mixtures of entirely linear alkyl chains or of both linear and mono-branched alkyl chains. A high-volume example of these is sodium laureth sulfate a foaming agent in shampoos and liquid soaps, as well as industrial detergents. Environmental and safety. Alcohol ethoxylates (AEs). Human health. Alcohol ethoxylates are not observed to be mutagenic, carcinogenic, or skin sensitizers, nor cause reproductive or developmental effects. One byproduct of ethoxylation is 1,4-dioxane, a possible human carcinogen. Undiluted AEs can cause dermal or eye irritation. In aqueous solution, the level of irritation is dependent on the concentration. AEs are considered to have low to moderate toxicity for acute oral exposure, low acute dermal toxicity, and have mild irritation potential for skin and eyes at concentrations found in consumer products. Recent studies have found dried AE residues similar to what would be found on restaurant dishes (as effective concentrations from 1:10,000 to 1:40,000) killed epithelial intestinal cells at high concentrations. Lower concentrations made cells more permeable and prone to inflammatory response . Aquatic and environmental aspects. AEs are usually released down the drain, where they may be adsorbed into solids and biodegrade through anaerobic processes, with ~28–58% degraded in the sewer. The remaining AEs are treated at waste water treatment plants and biodegraded via aerobic processes with less than 0.8% of AEs released in effluent. If released into surface waters, sediment or soil, AEs will degrade through aerobic and anaerobic processes or be taken up by plants and animals. Toxicity to certain invertebrates has a range of EC50 values for linear AE from 0.1 mg/L to greater than 100 mg/L. For branched alcohol exthoxylates, toxicity ranges from 0.5 mg/L to 50 mg/L. The EC50 toxicity for algae from linear and branched AEs was 0.05 mg/L to 50 mg/L. 
Acute toxicity to fish ranges from LC50 values for linear AE of 0.4 mg/L to 100 mg/L, and branched is 0.25 mg/L to 40 mg/L. For invertebrates, algae and fish the essentially linear and branched AEs are considered to not have greater toxicity than Linear AE. Alcohol ethoxysulfates (AESs). Biodegradation. The degradation of AES proceeds by ω- or β-oxidation of the alkyl chain, enzymatic hydrolysis of the sulfate ester, and by cleavage of an ether bond in the AES producing alcohol or alcohol ethoxylate and an ethylene glycol sulfate. Studies of aerobic processes also found AES to be readily biodegradable. The half-life of both AE and AES in surface water is estimated to be less than 12 hours. The removal of AES due to degradation via anaerobic processes is estimated to be between 75 and 87%. In water. Flow-through laboratory tests in a terminal pool of AES with mollusks found the NOEC of a snail, Goniobasis and the Asian clam, Corbicula to be greater than 730 ug/L. Corbicula growth was measured to be affected at a concentration of 75 ug/L. The mayfly, genus "Tricorythodes" has a normalized density NOEC value of 190 ug/L. Human safety. AES has not been found to be genotoxic, mutagenic, or carcinogenic. A 2022 study revealed the expression of genes involved in cell survival, epithelial barrier, cytokine signaling, and metabolism were altered by rinse aid in concentrations used in professional dishwashers. The alcohol ethoxylates present in the rinse aid were identified as the culprit component causing the epithelial inflammation and barrier damage.
[ { "math_id": 0, "text": "\\ce{R-OH} + n\\,\\ce{C2H4O ->[][\\text{KOH}] R-(OC2H4)_\\mathit{n}OH}" }, { "math_id": 1, "text": "\\begin{align}\n \\ce{R(OC2H4)}_n\\ce{OH + SO3} &\\longrightarrow \\ce{R(OC2H4)}_n\\ce{OSO3H} \\\\[4pt]\n \\ce{R(OC2H4)}_n\\ce{OH + HSO3Cl} &\\longrightarrow \\ce{R(OC2H4)}_n\\ce{OSO3H + HCl}\n\\end{align}" }, { "math_id": 2, "text": "\\ce{R(OC2H4)}_n\\ce{OSO3H + NaOH -> R(OC2H4)}_n\\ce{OSO3Na + H2O}" } ]
https://en.wikipedia.org/wiki?curid=1045233
10453294
Mendelian randomization
Statistical method in genetic epidemiology In epidemiology, Mendelian randomization (commonly abbreviated to MR) is a method using measured variation in genes to examine the causal effect of an exposure on an outcome. Under key assumptions (see below), the design reduces both reverse causation and confounding, which often substantially impede or mislead the interpretation of results from epidemiological studies. The study design was first proposed in 1986 and subsequently described by Gray and Wheatley as a method for obtaining unbiased estimates of the effects of an assumed causal variable without conducting a traditional randomized controlled trial (the standard in epidemiology for establishing causality). These authors also coined the term "Mendelian randomization". Motivation. One of the predominant aims of epidemiology is to identify modifiable causes of health outcomes and disease especially those of public health concern. In order to ascertain whether modifying a particular trait (e.g. via an intervention, treatment or policy change) will convey a beneficial effect within a population, firm evidence that this trait causes the outcome of interest is required. However, many observational epidemiological study designs are limited in the ability to discern correlation from causation - specifically whether a particular trait causes an outcome of interest, is simply related to that outcome (but does not cause it) or is a consequence of the outcome itself. Only the former will be beneficial within a public health setting where the aim is to modify that trait to reduce the burden of disease. There are many epidemiological study designs that aim to understand relationships between traits within a population sample, each with shared and unique advantages and limitations in terms of providing causal evidence, with the "gold standard" being randomized controlled trials. Well-known successful demonstrations of causal evidence consistent across multiple studies with different designs include the identified causal links between smoking and lung cancer, and between blood pressure and stroke. However, there have also been notable failures when exposures hypothesized to be a causal risk factor for a particular outcome were later shown by well conducted randomized controlled trials not to be causal. For instance, it was previously thought that hormone replacement therapy would prevent cardiovascular disease, but it is now known to have no such benefit Another notable example is that of selenium and prostate cancer. Some observational studies found an association between higher circulating selenium levels (usually acquired through various foods and dietary supplements ) and lower risk of prostate cancer. However, the Selenium and Vitamin E Cancer Prevention Trial (SELECT) showed evidence that dietary selenium supplementation actually increased the risk of prostate and advanced prostate cancer and had an additional off-target effect on increasing type 2 diabetes risk. Mendelian randomization methods now support the view that high selenium status may not prevent cancer in the general population, and may even increase the risk of specific types. Such inconsistencies between observational epidemiological studies and randomized controlled trials are likely a function of social, behavioral, or physiological confounding factors in many observational epidemiological designs, which are particularly difficult to measure accurately and difficult to control for. 
Moreover, randomized controlled trials (RCTs) are usually expensive, time-consuming and laborious, and many epidemiological findings cannot be ethically replicated in clinical trials. Mendelian randomization studies appear capable of resolving questions of potential confounding more efficiently than RCTs. Definition. Mendelian randomization (MR) is fundamentally an instrumental variables estimation method hailing from econometrics. The method uses the properties of germline genetic variation (usually in the form of single nucleotide polymorphisms or SNPs) strongly associated with a putative exposure as a "proxy" or "instrument" for that exposure to test for and estimate a causal effect of the exposure on an outcome of interest from observational data. The genetic variation used will have either well-understood effects on exposure patterns (e.g. propensity to smoke heavily) or effects that mimic those produced by modifiable exposures (e.g., raised blood cholesterol). Importantly, the genotype must only affect the disease status indirectly via its effect on the exposure of interest. As genotypes are assigned randomly when passed from parents to offspring during meiosis, groups of individuals defined by genetic variation associated with an exposure at a population level should be largely unrelated to the confounding factors that typically plague observational epidemiology studies. Germline genetic variation (i.e. that which can be inherited) is also temporally fixed at conception and not modified by the onset of any outcome or disease, precluding reverse causation. Additionally, given improvements in modern genotyping technologies, measurement error and systematic misclassification are often low with genetic data. In this regard Mendelian randomization can be thought of as analogous to "nature's randomized controlled trial". Mendelian randomization requires three core instrumental variable assumptions, namely that: (1) the genetic variants used as instruments are associated with the exposure of interest (the "relevance" assumption); (2) there are no common causes (confounders) of the genetic variants and the outcome (the "independence" or "exchangeability" assumption); and (3) the genetic variants affect the outcome only through their effect on the exposure (the "exclusion restriction", i.e. no horizontal pleiotropy). To ensure that the first core assumption is validated, Mendelian randomization requires distinct associations between genetic variation and exposures of interest. These are usually obtained from genome-wide association studies though can also be candidate gene studies. The second assumption relies on there being no population substructure (e.g. geographical factors that induce an association between the genotype and outcome), mate choice that is not associated with genotype (i.e. random mating or panmixia) and no dynastic effects (i.e. where the expression of parental genotype in the parental phenotype directly affects the offspring phenotype). Statistical analysis. Mendelian randomization is usually applied through the use of instrumental variables estimation with genetic variants acting as instruments for the exposure of interest. This can be implemented using data on the genetic variants, exposure and outcome of interest for a set of individuals in a single dataset or using summary data on the association between the genetic variants and the exposure and the association between the genetic variants and the outcome in separate datasets. The method has also been used in economic research studying the effects of obesity on earnings, and other labor market outcomes. When a single dataset is used the methods of estimation applied are those frequently used elsewhere in instrumental variable estimation, such as two-stage least squares.
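To make the instrumental-variable logic concrete, here is a small simulation sketch in which all parameter values are invented for illustration: a genetic variant g influences the exposure x, an unmeasured confounder u influences both x and the outcome y, and the true causal effect of x on y is 0.2. The naive regression of y on x is biased by the confounder, while the single-instrument ratio estimate (equivalent to two-stage least squares here) recovers the causal effect.
<syntaxhighlight lang="python">
import numpy as np

# g: genetic variant (instrument), u: unmeasured confounder,
# x: exposure, y: outcome.  True causal effect of x on y is 0.2.
rng = np.random.default_rng(1)
n = 100_000
g = rng.binomial(2, 0.3, size=n)                 # SNP coded 0/1/2
u = rng.normal(size=n)                           # confounder
x = 0.5 * g + u + rng.normal(size=n)             # exposure
y = 0.2 * x + u + rng.normal(size=n)             # outcome

# Naive regression of y on x is biased upwards by the confounder u:
naive = np.cov(x, y)[0, 1] / np.cov(x, y)[0, 0]

# Single-instrument ratio estimate (two-stage least squares with one SNP):
beta_iv = np.cov(g, y)[0, 1] / np.cov(g, x)[0, 1]

print(f"naive estimate   ~ {naive:.3f}")         # clearly above 0.2
print(f"IV/2SLS estimate ~ {beta_iv:.3f}")       # close to the true value 0.2
</syntaxhighlight>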
If multiple genetic variants are associated with the exposure they can either be used individually as instruments or combined to create an allele score which is used as a single instrument. Analysis using summary data often applies data from genome-wide association studies. In this case the association between genetic variants and the exposure is taken from the summary results produced by a genome-wide association study for the exposure. The association between the same genetic variants and the outcome is then taken from the summary results produced by a genome-wide association study for the outcome. These two sets of summary results are then used to obtain the MR estimate. Given the following notation: formula_0 effect of genetic variant formula_1 on the exposure formula_2; formula_3 estimated effect of genetic variant formula_1 on the outcome formula_4 formula_5 estimated standard error of this estimated effect; formula_6 MR estimate of the causal effect of the exposure formula_7 on the outcome formula_8 and considering the effect of a single genetic variant, the MR estimate can be obtained from the Wald ratio: formula_9 When multiple genetic variants are used, the individual ratios for each genetic variants are combined using inverse variance weighting where each individual ratio is weighted by the uncertainty in their estimation. This gives the IVW estimate which can be calculated as: formula_10 Alternatively, the same estimate can be obtained from a linear regression which used the genetic variant-outcome association as the outcome and the genetic variant-exposure association as the exposure. This linear regression is weighted by the uncertainty in the genetic-variant outcome association and does not include a constant. formula_11 These methods only provide reliable estimates of the causal effect of the exposure on the outcome under the core instrumental variable assumptions. Alternative methods are available that are robust to a violation of the third assumption, i.e. that provide reliable results under some types of horizontal pleiotropy. Additionally some biases that arise from violations of the second IV assumption, such as dynastic effects, can be overcome through the use of data which includes siblings or parents and their offspring. History. The Mendelian randomization method depends on two principles derived from the original work by Gregor Mendel on genetic inheritance. Its foundation come from Mendel’s laws namely 1) the law of segregation in which there is complete segregation of the two allelomorphs in equal number of germ-cells of a heterozygote and 2) separate pairs of allelomorphs segregate independently of one another and which were first published as such in 1906 by Robert Heath Lock. Another progenitor of Mendelian randomization is Sewall Wright who introduced path analysis, a form of causal diagram used for making causal inference from non-experimental data. The method relies on causal anchors, and the anchors in the majority of his examples were provided by Mendelian inheritance, as is the basis of MR. Another component of the logic of MR is the instrumental gene, the concept of which was introduced by Thomas Hunt Morgan. This is important as it removed the need to understand the physiology of the gene for making the inference about genetic processes. Since that time the literature includes examples of research using molecular genetics to make inference about modifiable risk factors, which is the essence of MR. 
One example is the work of Gerry Lower and colleagues in 1979 who used the N-acetyltransferase phenotype as an anchor to draw inference about various exposures including smoking and amine dyes as risk factors for bladder cancer. Another example is the work of Martijn Katan (then of Wageningen University &amp; Research, Netherlands) in which he advocated a study design using Apolipoprotein E allele as an instrumental variable anchor to study the observed relationship between low blood cholesterol levels and increased risk of cancer. In fact, the term “Mendelian randomization” was first used in print by Richard Gray and Keith Wheatley (both of Radcliffe Infirmary, Oxford, UK) in 1991 in a somewhat different context; in a method allowing instrumental variable estimation but in relation to an approach relying on Mendelian inheritance rather than genotype. In their 2003 paper, Shah Ebrahim and George Davey Smith use the term again to describe the method of using germline genetic variants for understanding causality in an instrumental variable analysis, and it is this methodology that is now widely used and to which the meaning is ascribed. The Mendelian randomization method is now widely adopted in causal epidemiology, and the number of MR studies reported in the scientific literature has grown every year since the 2003 paper. In 2021 STROBE-MR guidelines were published to assist readers and reviewers of Mendelian randomization studies to evaluate the validity and utility of published studies. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
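As a complement to the summary-data estimators defined in the Statistical analysis section above, the following sketch computes per-variant Wald ratios and their inverse-variance-weighted combination. The summary statistics are invented, and the standard error of the combined estimate uses the usual first-order IVW expression, which is an assumption of this sketch rather than a formula quoted from the article.
<syntaxhighlight lang="python">
import numpy as np

# Per-variant Wald ratios and the inverse-variance-weighted (IVW) estimate
# from made-up GWAS summary statistics.

pi_hat = np.array([0.12, 0.08, 0.20, 0.15])         # SNP-exposure associations
Gamma_hat = np.array([0.024, 0.018, 0.041, 0.029])  # SNP-outcome associations
se_Gamma = np.array([0.005, 0.006, 0.007, 0.005])   # their standard errors

wald = Gamma_hat / pi_hat                            # one causal estimate per SNP

w = pi_hat**2 / se_Gamma**2                          # inverse-variance weights
beta_ivw = np.sum(pi_hat * Gamma_hat / se_Gamma**2) / np.sum(w)
se_ivw = np.sqrt(1.0 / np.sum(w))

print("per-variant Wald ratios:", np.round(wald, 3))
print(f"IVW estimate = {beta_ivw:.3f}  (SE ~ {se_ivw:.3f})")
</syntaxhighlight>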
[ { "math_id": 0, "text": "\\hat{\\pi}_g \\equiv " }, { "math_id": 1, "text": "\\ g\\ " }, { "math_id": 2, "text": "(X)" }, { "math_id": 3, "text": "\\hat{\\Gamma}_g \\equiv " }, { "math_id": 4, "text": "\\ (Y)\\ ;" }, { "math_id": 5, "text": "\\hat{\\sigma}_g \\equiv " }, { "math_id": 6, "text": "\\hat{\\beta}_\\mathsf{MR} \\equiv " }, { "math_id": 7, "text": "\\ X\\ " }, { "math_id": 8, "text": "\\ Y\\ ;" }, { "math_id": 9, "text": "\\hat{\\beta}_\\mathsf{MR} = \\frac{\\ \\hat{\\Gamma}_g\\ }{\\ \\hat{\\pi}_g\\ } ~." }, { "math_id": 10, "text": " \\hat{\\beta}_\\mathsf{IVW} = \\frac{\\ \\sum_{g=1}^G \\hat{\\pi}_g\\ \\hat{\\Gamma}_g\\ \\sigma_{y,g}^2\\ }{\\ \\sum_{g=1}^G\\ \\hat{\\pi}_g^2\\ \\sigma_{y,g}^2\\ } ~." }, { "math_id": 11, "text": " \\hat{\\Gamma}_g = \\beta_\\mathsf{IVW}\\ \\hat{\\pi}_g+u_g\\ \\quad\\ \\mathsf{ weighted\\ by } \\ \\quad\\ \\frac{ 1 }{~~ \\hat{\\sigma}^2_{y,g}\\ } ~." } ]
https://en.wikipedia.org/wiki?curid=10453294
1045553
Multinomial distribution
Generalization of the binomial distribution In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a "k"-sided die rolled "n" times. For "n" independent trials each of which leads to a success for exactly one of "k" categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories. When "k" is 2 and "n" is 1, the multinomial distribution is the Bernoulli distribution. When "k" is 2 and "n" is bigger than 1, it is the binomial distribution. When "k" is bigger than 2 and "n" is 1, it is the categorical distribution. The term "multinoulli" is sometimes used for the categorical distribution to emphasize this four-way relationship (so "n" determines the suffix, and "k" the prefix). The Bernoulli distribution models the outcome of a single Bernoulli trial. In other words, it models whether flipping a (possibly biased) coin one time will result in either a success (obtaining a head) or failure (obtaining a tail). The binomial distribution generalizes this to the number of heads from performing "n" independent flips (Bernoulli trials) of the same coin. The multinomial distribution models the outcome of "n" experiments, where the outcome of each trial has a categorical distribution, such as rolling a "k"-sided die "n" times. Let "k" be a fixed finite number. Mathematically, we have "k" possible mutually exclusive outcomes, with corresponding probabilities "p"1, ..., "p""k", and "n" independent trials. Since the "k" outcomes are mutually exclusive and one must occur, we have "p""i" ≥ 0 for "i" = 1, ..., "k" and formula_0. Then if the random variables "X""i" indicate the number of times outcome number "i" is observed over the "n" trials, the vector "X" = ("X"1, ..., "X""k") follows a multinomial distribution with parameters "n" and p, where p = ("p"1, ..., "p""k"). While the trials are independent, their outcomes "X""i" are dependent because they must sum to "n". Definitions. Probability mass function. Suppose one does an experiment of extracting "n" balls of "k" different colors from a bag, replacing the extracted balls after each draw. Balls of the same color are equivalent. Denote the variable which is the number of extracted balls of color "i" ("i" = 1, ..., "k") as "X""i", and denote as "p""i" the probability that a given extraction will be of color "i". The probability mass function of this multinomial distribution is: formula_2 for non-negative integers "x"1, ..., "x""k". The probability mass function can be expressed using the gamma function as: formula_3 This form shows its resemblance to the Dirichlet distribution, which is its conjugate prior. Example. Suppose that in a three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample? "Note: Since we’re assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. 
Technically speaking this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large in comparison to a fixed sample size""." formula_4 Properties. Normalization. The multinomial distribution is normalized according to: formula_5 where the sum is over all permutations of formula_6such that formula_7. Expected value and variance. The expected number of times the outcome "i" was observed over "n" trials is formula_8 The covariance matrix is as follows. Each diagonal entry is the variance of a binomially distributed random variable, and is therefore formula_9 The off-diagonal entries are the covariances: formula_10 for "i", "j" distinct. All covariances are negative because for fixed "n", an increase in one component of a multinomial vector requires a decrease in another component. When these expressions are combined into a matrix with "i, j" element formula_11 the result is a "k" × "k" positive-semidefinite covariance matrix of rank "k" − 1. In the special case where "k" = "n" and where the "p""i" are all equal, the covariance matrix is the centering matrix. The entries of the corresponding correlation matrix are formula_12 formula_13 Note that the number of trials "n" drops out of this expression. Each of the "k" components separately has a binomial distribution with parameters "n" and "p""i", for the appropriate value of the subscript "i". The support of the multinomial distribution is the set formula_14 Its number of elements is formula_15 Matrix notation. In matrix notation, formula_16 and formula_17 with pT = the row vector transpose of the column vector p. Visualization. As slices of generalized Pascal's triangle. Just like one can interpret the binomial distribution as (normalized) one-dimensional (1D) slices of Pascal's triangle, so too can one interpret the multinomial distribution as 2D (triangular) slices of Pascal's pyramid, or 3D/4D/+ (pyramid-shaped) slices of higher-dimensional analogs of Pascal's triangle. This reveals an interpretation of the range of the distribution: discretized equilateral "pyramids" in arbitrary dimension—i.e. a simplex with a grid. As polynomial coefficients. Similarly, just like one can interpret the binomial distribution as the polynomial coefficients of formula_18 when expanded, one can interpret the multinomial distribution as the coefficients of formula_19 when expanded, noting that just the coefficients must sum up to 1. Large deviation theory. Asymptotics. By Stirling's formula, at the limit of formula_20, we haveformula_21where relative frequencies formula_22 in the data can be interpreted as probabilities from the empirical distribution formula_23, and formula_24 is the Kullback–Leibler divergence. This formula can be interpreted as follows. Consider formula_25, the space of all possible distributions over the categories formula_26. It is a simplex. After formula_27 independent samples from the categorical distribution formula_28 (which is how we construct the multinomial distribution), we obtain an empirical distribution formula_23. By the asymptotic formula, the probability that empirical distribution formula_23 deviates from the actual distribution formula_28 decays exponentially, at a rate formula_29. The more experiments and the more different formula_23 is from formula_28, the less likely it is to see such an empirical distribution. 
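The asymptotic rate above can be checked numerically. The sketch below (with invented numbers) evaluates the exact probability mass function, reproducing the value 0.135 for the voting example, and then compares the per-trial log-probability of a fixed empirical distribution with the Kullback–Leibler divergence; the remaining gap is the sub-exponential Stirling correction, of order log(n)/n, which vanishes as n grows.
<syntaxhighlight lang="python">
import math
import numpy as np

def multinomial_pmf(counts, probs):
    n = sum(counts)
    coef = math.factorial(n)
    for c in counts:
        coef //= math.factorial(c)          # exact multinomial coefficient
    return coef * math.prod(q**c for c, q in zip(counts, probs))

p = [0.2, 0.3, 0.5]
print(multinomial_pmf([1, 2, 3], p))         # 0.135, as in the election example

# Large-deviation check: -(1/n) ln P(counts = n*q) approaches D_KL(q || p).
q = np.array([0.3, 0.3, 0.4])
n = 600
counts = [round(n * qi) for qi in q]          # 180, 180, 240
rate = -math.log(multinomial_pmf(counts, p)) / n
kl = float(np.sum(q * np.log(q / np.array(p))))
print(rate, kl)                               # ~0.043 vs ~0.032 at n = 600
</syntaxhighlight>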
If formula_30 is a closed subset of formula_25, then by dividing up formula_30 into pieces, and reasoning about the growth rate of formula_31 on each piece formula_32, we obtain Sanov's theorem, which states thatformula_33 Concentration at large "n". Due to the exponential decay, at large formula_27, almost all the probability mass is concentrated in a small neighborhood of formula_28. In this small neighborhood, we can take the first nonzero term in the Taylor expansion of formula_24, to obtainformula_34This resembles the gaussian distribution, which suggests the following theorem: Theorem. At the formula_35 limit, formula_36 converges in distribution to the chi-squared distribution formula_37. &lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;[Proof] The space of all distributions over categories formula_40 is a simplex: formula_41, and the set of all possible empirical distributions after formula_27 experiments is a subset of the simplex: formula_42. That is, it is the intersection between formula_25 and the lattice formula_43. As formula_27 increases, most of the probability mass is concentrated in a subset of formula_44 near formula_28, and the probability distribution near formula_28 becomes well-approximated by formula_45From this, we see that the subset upon which the mass is concentrated has radius on the order of formula_38, but the points in the subset are separated by distance on the order of formula_39, so at large formula_27, the points merge into a continuum. To convert this from a discrete probability distribution to a continuous probability density, we need to multiply by the volume occupied by each point of formula_44 in formula_25. However, by symmetry, every point occupies exactly the same volume (except a negligible set on the boundary), so we obtain a probability density formula_46, where formula_47 is a constant. Finally, since the simplex formula_25 is not all of formula_48, but only within a formula_49-dimensional plane, we obtain the desired result. Conditional concentration at large "n". The above concentration phenomenon can be easily generalized to the case where we condition upon linear constraints. This is the theoretical justification for Pearson's chi-squared test. Theorem. Given frequencies formula_50 observed in a dataset with formula_27 points, we impose formula_51 independent linear constraints formula_52(notice that the first constraint is simply the requirement that the empirical distributions sum to one), such that empirical formula_53 satisfy all these constraints simultaneously. Let formula_54 denote the formula_55-projection of prior distribution formula_28 on the sub-region of the simplex allowed by the linear constraints. At the formula_35 limit, sampled counts formula_56 from the multinomial distribution conditional on the linear constraints are governed by formula_57 which converges in distribution to the chi-squared distribution formula_58. &lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;[Proof] An analogous proof applies in this Diophantine problem of coupled linear equations in count variables formula_56, but this time formula_44 is the intersection of formula_43 with formula_59 and formula_60 hyperplanes, all linearly independent, so the probability density formula_61 is restricted to a formula_62-dimensional plane. 
In particular, expanding the KL divergence formula_63 around its minimum formula_54 (the formula_55-projection of formula_28 on formula_44) in the constrained problem ensures by the Pythagorean theorem for formula_55-divergence that any constant and linear term in the counts formula_56 vanishes from the conditional probability to multinomially sample those counts. Notice that by definition, every one of formula_64 must be a rational number, whereas formula_65 may be chosen from any real number in formula_66 and need not satisfy the Diophantine system of equations. Only asymptotically as formula_67, the formula_68's can be regarded as probabilities over formula_66. Away from empirically observed constraints formula_69 (such as moments or prevalences), the theorem can be generalized: Theorem. In the case that all formula_68 are equal, the Theorem reduces to the concentration of entropies around the Maximum Entropy. Related distributions. In some fields such as natural language processing, categorical and multinomial distributions are synonymous and it is common to speak of a multinomial distribution when a categorical distribution is actually meant. This stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-k" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range formula_77; in this form, a categorical distribution is equivalent to a multinomial distribution over a single trial. Statistical inference. Equivalence tests for multinomial distributions. The goal of equivalence testing is to establish the agreement between a theoretical multinomial distribution and observed counting frequencies. The theoretical distribution may be a fully specified multinomial distribution or a parametric family of multinomial distributions. Let formula_54 denote a theoretical multinomial distribution and let formula_28 be a true underlying distribution. The distributions formula_28 and formula_54 are considered equivalent if formula_79 for a distance formula_80 and a tolerance parameter formula_81. The equivalence test problem is formula_82 versus formula_83. The true underlying distribution formula_28 is unknown. Instead, the counting frequencies formula_84 are observed, where formula_27 is the sample size. An equivalence test uses formula_84 to reject formula_85. If formula_85 can be rejected then the equivalence between formula_28 and formula_54 is shown at a given significance level. The equivalence test for the Euclidean distance can be found in the textbook of Wellek (2010). The equivalence test for the total variation distance is developed in Ostrovski (2017). The exact equivalence test for the specific cumulative distance is proposed in Frey (2009). The distance between the true underlying distribution formula_28 and a family of multinomial distributions formula_86 is defined by formula_87. Then the equivalence test problem is given by formula_88 and formula_89. The distance formula_90 is usually computed using numerical optimization. The tests for this case were developed recently in Ostrovski (2018). Confidence intervals for the difference of two proportions. In the setting of a multinomial distribution, constructing confidence intervals for the difference between the proportions of observations from two events, formula_91, requires the incorporation of the negative covariance between the sample estimators formula_92 and formula_93. 
Some of the literature on the subject focused on the use-case of matched-pairs binary data, which requires careful attention when translating the formulas to the general case of formula_91 for any multinomial distribution. Formulas in the current section will be generalized, while formulas in the next section will focus on the matched-pairs binary data use-case. Wald's standard error (SE) of the difference of proportion can be estimated using: formula_94 For a formula_95 approximate confidence interval, the margin of error may incorporate the appropriate quantile from the standard normal distribution, as follows: formula_96 &lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;[Proof] As the sample size (formula_27) increases, the sample proportions will approximately follow a multivariate normal distribution, thanks to the multidimensional central limit theorem (and it could also be shown using the Cramér–Wold theorem). Therefore, their difference will also be approximately normal. Also, these estimators are weakly consistent and plugging them into the SE estimator makes it also weakly consistent. Hence, thanks to Slutsky's theorem, the pivotal quantity formula_97 approximately follows the standard normal distribution. And from that, the above approximate confidence interval is directly derived. The SE can be constructed using the calculus of the variance of the difference of two random variables: formula_98 A modification which includes a continuity correction adds formula_99 to the margin of error as follows: formula_100 Another alternative is to rely on a Bayesian estimator using Jeffreys prior which leads to using a dirichlet distribution, with all parameters being equal to 0.5, as a prior. The posterior will be the calculations from above, but after adding 1/2 to each of the "k" elements, leading to an overall increase of the sample size by formula_101. This was originally developed for a multinomial distribution with four events, and is known as "wald+2", for analyzing matched pairs data (see the next section for more details). This leads to the following SE: formula_102 &lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;[Proof] formula_103 Which can just be plugged into the original Wald formula as follows: formula_104 Occurrence and applications. Confidence intervals for the difference in matched-pairs binary data (using multinomial with "k=4"). For the case of matched-pairs binary data, a common task is to build the confidence interval of the difference of the proportion of the matched events. For example, we might have a test for some disease, and we may want to check the results of it for some population at two points in time (1 and 2), to check if there was a change in the proportion of the positives for the disease during that time. Such scenarios can be represented using a two-by-two contingency table with the number of elements that had each of the combination of events. We can use small "f" for sampling frequencies: formula_105, and capital "F" for population frequencies: formula_106. These four combinations could be modeled as coming from a multinomial distribution (with four potential outcomes). The sizes of the sample and population can be "n" and "N" respectively. 
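Before specializing to the matched-pairs contingency table below, the plain Wald interval given by the formulas above can be sketched in Python as follows; the counts are made-up example data, and 1.96 stands in for the standard normal quantile used by a 95% interval.

import math

def wald_ci_difference(counts, i, j, z=1.96):
    # Wald confidence interval for p_i - p_j from a single multinomial sample of counts
    n = sum(counts)
    p_i, p_j = counts[i] / n, counts[j] / n
    diff = p_i - p_j
    se = math.sqrt(((p_i + p_j) - diff ** 2) / n)
    return diff - z * se, diff + z * se

counts = [112, 85, 103]          # hypothetical counts for k = 3 categories
print(wald_ci_difference(counts, 0, 1))

The continuity-corrected and "wald+2" variants described in this section modify the margin of error and the point estimate, respectively, but the structure of the calculation is the same.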
And in such a case, there is an interest in building a confidence interval for the difference of proportions from the marginals of the following (sampled) contingency table: In this case, checking the difference in marginal proportions means we are interested in using the following definitions: formula_107, formula_108. And the difference we want to build confidence intervals for is: formula_109 Hence, a confidence interval for the marginal positive proportions (formula_110) is the same as building a confidence interval for the difference of the proportions from the secondary diagonal of the two-by-two contingency table (formula_111). Calculating a p-value for such a difference is known as McNemar's test. A confidence interval around it can be constructed using the methods described above for Confidence intervals for the difference of two proportions. The Wald confidence intervals from the previous section can be applied to this setting, and appear in the literature using alternative notations. Specifically, the SE often presented is based on the contingency table frequencies instead of the sample proportions. For example, the Wald confidence intervals, provided above, can be written as: formula_112 Further research in the literature has identified several shortcomings in both the Wald and the Wald with continuity correction methods, and other methods have been proposed for practical application. One such modification includes "Agresti and Min’s Wald+2" (similar to some of their other works) in which each cell frequency has an extra formula_113 added to it. This leads to the "Wald+2" confidence intervals. In a Bayesian interpretation, this is like building the estimators taking as prior a Dirichlet distribution with all parameters being equal to 0.5 (which is, in fact, the Jeffreys prior). The "+2" in the name "wald+2" can now be taken to mean that, in the context of a two-by-two contingency table, which is a multinomial distribution with four possible events, adding 1/2 an observation to each of them translates to an overall addition of 2 observations (due to the prior). This leads to the following modified SE for the case of matched pairs data: formula_114 Which can just be plugged into the original Wald formula as follows: formula_115 Other modifications include "Bonett and Price’s Adjusted Wald", and "Newcombe’s Score". Computational methods. Random variate generation. First, reorder the parameters formula_1 such that they are sorted in descending order (this is only to speed up computation and not strictly necessary). Now, for each trial, draw an auxiliary variable "X" from a uniform (0, 1) distribution. The resulting outcome is the component formula_116 Sampling using repeated conditional binomial samples. Given the parameters formula_117 and a total for the sample formula_27 such that formula_118, it is possible to sample sequentially for the number in an arbitrary state formula_119, by partitioning the state space into formula_120 and not-formula_120, conditioned on any prior samples already taken, repeatedly. Algorithm: Sequential conditional binomial sampling.
S = n
rho = 1
for i in [1, k-1]:
    if rho != 0:
        X[i] ~ Binom(S, p[i] / rho)
    else:
        X[i] = 0
    S = S - X[i]
    rho = rho - p[i]
X[k] = S
Heuristically, each application of the binomial sample reduces the available number to sample from and the conditional probabilities are likewise updated to ensure logical consistency. Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt; References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\sum_{i=1}^k p_i = 1" }, { "math_id": 1, "text": "p_1, \\ldots, p_k" }, { "math_id": 2, "text": " \\begin{align}\nf(x_1,\\ldots,x_k;n,p_1,\\ldots,p_k) & {} = \\Pr(X_1 = x_1 \\text{ and } \\dots \\text{ and } X_k = x_k) \\\\\n& {} = \\begin{cases} { \\displaystyle {n! \\over x_1!\\cdots x_k!}p_1^{x_1}\\times\\cdots\\times p_k^{x_k}}, \\quad &\n\\text{when } \\sum_{i=1}^k x_i=n \\\\ \\\\\n0 & \\text{otherwise,} \\end{cases}\n\\end{align}\n" }, { "math_id": 3, "text": "f(x_1,\\dots, x_{k}; p_1,\\ldots, p_k) = \\frac{\\Gamma(\\sum_i x_i + 1)}{\\prod_i \\Gamma(x_i+1)} \\prod_{i=1}^k p_i^{x_i}." }, { "math_id": 4, "text": " \\Pr(A=1,B=2,C=3) = \\frac{6!}{1! 2! 3!}(0.2^1) (0.3^2) (0.5^3) = 0.135 " }, { "math_id": 5, "text": "\\sum_{\\sum_{j=1}^k x_j=n} f(x_1,...,x_k;n,p_1,...,p_k) = 1" }, { "math_id": 6, "text": "x_j" }, { "math_id": 7, "text": " \\sum_{j=1}^k x_j=n " }, { "math_id": 8, "text": "\\operatorname{E}(X_i) = n p_i.\\," }, { "math_id": 9, "text": "\\operatorname{Var}(X_i)=np_i(1-p_i).\\," }, { "math_id": 10, "text": "\\operatorname{Cov}(X_i,X_j)=-np_i p_j\\," }, { "math_id": 11, "text": "\\operatorname{cov} (X_i,X_j)," }, { "math_id": 12, "text": "\\rho(X_i,X_i) = 1." }, { "math_id": 13, "text": "\\rho(X_i,X_j) = \\frac{\\operatorname{Cov}(X_i,X_j)}{\\sqrt{\\operatorname{Var}(X_i)\\operatorname{Var}(X_j)}} = \\frac{-p_i p_j}{\\sqrt{p_i(1-p_i) p_j(1-p_j)}} = -\\sqrt{\\frac{p_i p_j}{(1-p_i)(1-p_j)}}." }, { "math_id": 14, "text": "\\{(n_1,\\dots,n_k)\\in \\mathbb{N}^k \\mid n_1+\\cdots+n_k=n\\}.\\," }, { "math_id": 15, "text": "{n+k-1 \\choose k-1}." }, { "math_id": 16, "text": "\\operatorname{E}(\\mathbf{X}) = n \\mathbf{p},\\," }, { "math_id": 17, "text": "\\operatorname{Var}(\\mathbf{X}) = n \\lbrace \\operatorname{diag}(\\mathbf{p}) - \\mathbf{p} \\mathbf{p}^{\\rm T} \\rbrace ,\\," }, { "math_id": 18, "text": "(p + q)^n" }, { "math_id": 19, "text": "(p_1 + p_2 + p_3 + \\cdots + p_k)^n" }, { "math_id": 20, "text": "n, x_1, ..., x_k \\to \\infty" }, { "math_id": 21, "text": "\\ln \\binom{n}{x_1, \\cdots, x_k} + \\sum_{i=1}^k x_i\\ln p_i = -n D_{KL}(\\hat p \\| p) - \\frac{k-1}{2} \\ln(2\\pi n) - \\frac 12 \\sum_{i=1}^k \\ln(\\hat p_i) + o(1)" }, { "math_id": 22, "text": "\\hat p_i = x_i/n" }, { "math_id": 23, "text": "\\hat p" }, { "math_id": 24, "text": "D_{KL}" }, { "math_id": 25, "text": "\\Delta_k" }, { "math_id": 26, "text": "\\{1, 2, ..., k\\}" }, { "math_id": 27, "text": "n" }, { "math_id": 28, "text": "p" }, { "math_id": 29, "text": " n D_{KL}(\\hat p \\| p)" }, { "math_id": 30, "text": "A" }, { "math_id": 31, "text": "Pr(\\hat p \\in A_\\epsilon)" }, { "math_id": 32, "text": "A_\\epsilon" }, { "math_id": 33, "text": "\\lim_{n \\to \\infty} \\frac 1n \\ln Pr(\\hat p \\in A) = - \\inf_{\\hat p \\in A} D_{KL}(\\hat p \\| p)" }, { "math_id": 34, "text": "\\ln \\binom{n}{x_1, \\cdots, x_k} p_1^{x_1} \\cdots p_k^{x_k} \\approx -\\frac n2 \\sum_{i=1}^k \\frac{(\\hat p_i - p_i)^2}{p_i} = -\\frac 12 \\sum_{i=1}^k \\frac{(x_i - n p_i)^2}{n p_i}" }, { "math_id": 35, "text": "n \\to \\infty" }, { "math_id": 36, "text": "n \\sum_{i=1}^k \\frac{(\\hat p_i - p_i)^2}{p_i} = \\sum_{i=1}^k \\frac{(x_i - n p_i)^2}{n p_i}" }, { "math_id": 37, "text": "\\chi^2(k-1)" }, { "math_id": 38, "text": "1/\\sqrt n" }, { "math_id": 39, "text": "1/n" }, { "math_id": 40, "text": "\\{1, 2, \\ldots, k\\}" }, { "math_id": 41, "text": "\\Delta_{k} = \\left\\{(y_1, \\ldots, y_k)\\colon y_1, \\ldots, y_k \\geq 0, \\sum_i y_i = 1\\right\\}" }, { "math_id": 42, "text": "\\Delta_{k, n} = 
\\left\\{(x_1/n, \\ldots, x_k/n)\\colon x_1, \\ldots, x_k \\in \\N, \\sum_i x_i = n\\right\\}" }, { "math_id": 43, "text": "(\\Z^k)/n" }, { "math_id": 44, "text": "\\Delta_{k, n}" }, { "math_id": 45, "text": "\\binom{n}{x_1, \\cdots, x_k} p_1^{x_1} \\cdots p_k^{x_k} \\approx e^{-\\frac n2 \\sum_i \\frac{(\\hat p_i - p_i)^2}{p_i}}" }, { "math_id": 46, "text": "\\rho(\\hat p) = C e^{-\\frac n2 \\sum_i \\frac{(\\hat p_i - p_i)^2}{p_i}}" }, { "math_id": 47, "text": "C" }, { "math_id": 48, "text": "\\R^k" }, { "math_id": 49, "text": "(k-1)" }, { "math_id": 50, "text": "x_i\\in\\mathbb N" }, { "math_id": 51, "text": "\\ell + 1" }, { "math_id": 52, "text": "\\begin{cases}\n\\sum_i \\hat p_i = 1, \\\\\n\\sum_i a_{1i} \\hat p_i = b_1, \\\\\n\\sum_i a_{2i} \\hat p_i = b_2, \\\\\n\\cdots, \\\\\n\\sum_i a_{\\ell i} \\hat p_i = b_{\\ell}\n\\end{cases} " }, { "math_id": 53, "text": "\\hat p_i=x_i/n" }, { "math_id": 54, "text": "q" }, { "math_id": 55, "text": "I" }, { "math_id": 56, "text": "n \\hat p_i" }, { "math_id": 57, "text": "2n D_{KL}(\\hat p \\vert\\vert q) \\approx n \\sum_i \\frac{(\\hat p_i - q_i)^2}{q_i}" }, { "math_id": 58, "text": "\\chi^2(k-1-\\ell)" }, { "math_id": 59, "text": "\\Delta_k " }, { "math_id": 60, "text": "\\ell " }, { "math_id": 61, "text": "\\rho(\\hat p)" }, { "math_id": 62, "text": "(k-\\ell-1)" }, { "math_id": 63, "text": "D_{KL}(\\hat p\\vert\\vert p)" }, { "math_id": 64, "text": "\\hat p_1, \\hat p_2, ..., \\hat p_k" }, { "math_id": 65, "text": "p_1, p_2, ..., p_k" }, { "math_id": 66, "text": "[0, 1]" }, { "math_id": 67, "text": "n\\rightarrow\\infty" }, { "math_id": 68, "text": "\\hat p_i" }, { "math_id": 69, "text": "b_1,\\ldots,b_\\ell" }, { "math_id": 70, "text": "f_1, ..., f_\\ell" }, { "math_id": 71, "text": "(1, 1, ..., 1), \\nabla f_1(p), ..., \\nabla f_\\ell(p)" }, { "math_id": 72, "text": "\\epsilon_1(n), ..., \\epsilon_\\ell(n)" }, { "math_id": 73, "text": "\\frac 1n \\ll \\epsilon_i(n) \\ll \\frac{1}{\\sqrt n}" }, { "math_id": 74, "text": "i \\in \\{1, ..., \\ell\\}" }, { "math_id": 75, "text": "f_1(\\hat p) \\in [f_1(p)- \\epsilon_1(n), f_1(p) + \\epsilon_1(n)], ..., f_\\ell(\\hat p) \\in [f_\\ell(p)- \\epsilon_\\ell(n), f_\\ell(p) + \\epsilon_\\ell(n)]" }, { "math_id": 76, "text": "n \\sum_i \\frac{(\\hat p_i - p_i)^2}{p_i} = \\sum_i \\frac{(x_i - n p_i)^2}{n p_i}" }, { "math_id": 77, "text": "1 \\dots k" }, { "math_id": 78, "text": "(\\theta^2, 2 \\theta (1-\\theta), (1-\\theta)^2) " }, { "math_id": 79, "text": "d(p,q)<\\varepsilon" }, { "math_id": 80, "text": "d" }, { "math_id": 81, "text": "\\varepsilon>0" }, { "math_id": 82, "text": "H_0=\\{d(p,q)\\geq\\varepsilon\\}" }, { "math_id": 83, "text": "H_1=\\{d(p,q)<\\varepsilon\\}" }, { "math_id": 84, "text": "p_n" }, { "math_id": 85, "text": "H_0" }, { "math_id": 86, "text": "\\mathcal{M}" }, { "math_id": 87, "text": "d(p, \\mathcal{M})=\\min_{h\\in\\mathcal{M}}d(p,h) " }, { "math_id": 88, "text": "H_0=\\{d(p,\\mathcal{M})\\geq \\varepsilon\\}" }, { "math_id": 89, "text": "H_1=\\{d(p,\\mathcal{M})< \\varepsilon\\}" }, { "math_id": 90, "text": "d(p,\\mathcal{M})" }, { "math_id": 91, "text": "p_i-p_j" }, { "math_id": 92, "text": "\\hat{p}_i = \\frac{X_i}{n} " }, { "math_id": 93, "text": "\\hat{p}_j = \\frac{X_j}{n}" }, { "math_id": 94, "text": "\n\\widehat{\\operatorname{SE}(\\hat{p}_i - \\hat{p}_j)} = \\sqrt{\\frac{(\\hat{p}_i + \\hat{p}_j) - (\\hat{p}_i - \\hat{p}_j)^2}{n}}\n" }, { "math_id": 95, "text": "100(1 - \\alpha)\\%" }, { "math_id": 96, "text": "(\\hat{p}_i - \\hat{p}_j) \\pm z_{\\alpha/2} \\cdot 
\\widehat{\\operatorname{SE}(\\hat{p}_i - \\hat{p}_j)}" }, { "math_id": 97, "text": "\\frac{(\\hat{p}_i - \\hat{p}_j) - (p_i - p_j)}{\\widehat{\\operatorname{SE}(\\hat{p}_i - \\hat{p}_j)}}" }, { "math_id": 98, "text": "\n\\begin{align}\n\\widehat{\\operatorname{SE}(\\hat{p}_i - \\hat{p}_j)} & = \\sqrt{\\frac{\\hat{p}_i (1 - \\hat{p}_i)}{n} + \\frac{\\hat{p}_j (1 - \\hat{p}_j)}{n} - 2\\left(-\\frac{\\hat{p}_i \\hat{p}_j}{n}\\right)} \\\\\n& = \\sqrt{\\frac{1}{n} \\left(\\hat{p}_i + \\hat{p}_j - \\hat{p}_i^2 - \\hat{p}_j^2 + 2\\hat{p}_i \\hat{p}_j\\right)} \\\\\n& = \\sqrt{\\frac{(\\hat{p}_i + \\hat{p}_j) - (\\hat{p}_i - \\hat{p}_j)^2}{n}}\n\\end{align}\n" }, { "math_id": 99, "text": "\\frac{1}{n}" }, { "math_id": 100, "text": "(\\hat{p}_i - \\hat{p}_j) \\pm \\left(z_{\\alpha/2} \\cdot \\widehat{\\operatorname{SE}(\\hat{p}_i - \\hat{p}_j)} + \\frac{1}{n}\\right)" }, { "math_id": 101, "text": "\\frac{k}{2}" }, { "math_id": 102, "text": "\n\\widehat{\\operatorname{SE}(\\hat{p}_i - \\hat{p}_j)}_{wald+\\frac{k}{2}} = \n\\sqrt{\\frac{\\left(\\hat{p}_i + \\hat{p}_j + \\frac{1}{n}\\right)\\frac{n}{n+\\frac{k}{2}} - \n\\left(\\hat{p}_i - \\hat{p}_j\\right)^2 \\left(\\frac{n}{n+\\frac{k}{2}}\\right)^2 }{n+\\frac{k}{2}}}\n\n" }, { "math_id": 103, "text": "\n\\begin{align}\n\\widehat{\\operatorname{SE}(\\hat{p}_i - \\hat{p}_j)}_{wald+\\frac{k}{2}} & = \\sqrt{\\frac{\\left(\\frac{x_i+1/2}{n+\\frac{k}{2}} + \\frac{x_j+1/2}{n+\\frac{k}{2}}\\right) - \\left(\\frac{x_i+1/2}{n+\\frac{k}{2}} - \\frac{x_j+1/2}{n+\\frac{k}{2}}\\right)^2}{n+\\frac{k}{2}}} \\\\\n & = \n\\sqrt{\\frac{\\left(\\frac{x_i}{n} + \\frac{x_j}{n} + \\frac{1}{n}\\right)\\frac{n}{n+\\frac{k}{2}} - \\left(\\frac{x_i}{n} - \\frac{x_j}{n}\\right)^2 \\left(\\frac{n}{n+\\frac{k}{2}}\\right)^2 }{n+\\frac{k}{2}}} \\\\\n& = \\sqrt{\\frac{\\left(\\hat{p}_i + \\hat{p}_j + \\frac{1}{n}\\right)\\frac{n}{n+\\frac{k}{2}} - \\left(\\hat{p}_i - \\hat{p}_j\\right)^2 \\left(\\frac{n}{n+\\frac{k}{2}}\\right)^2 }{n+\\frac{k}{2}}} \n\\end{align}\n" }, { "math_id": 104, "text": "(p_i - p_j)\\frac{n}{n+\\frac{k}{2}} \\pm z_{\\alpha/2} \\cdot \\widehat{\\operatorname{SE}(\\hat{p}_i - \\hat{p}_j)}_{wald+\\frac{k}{2}}" }, { "math_id": 105, "text": "f_{11}, f_{10}, f_{01}, f_{00}" }, { "math_id": 106, "text": "F_{11}, F_{10}, F_{01}, F_{00}" }, { "math_id": 107, "text": "p_{1*} = \\frac{F_{1*}}{N} = \\frac{F_{11} + F_{10}}{N}" }, { "math_id": 108, "text": "p_{*1} = \\frac{F_{*1}}{N} = \\frac{F_{11} + F_{01}}{N}" }, { "math_id": 109, "text": "p_{*1} - p_{1*} = \\frac{F_{11} + F_{01}}{N} - \\frac{F_{11} + F_{10}}{N} = \\frac{F_{01}}{N} - \\frac{F_{10}}{N} = p_{01} - p_{10}" }, { "math_id": 110, "text": "p_{*1} - p_{1*}" }, { "math_id": 111, "text": "p_{01} - p_{10}" }, { "math_id": 112, "text": "\n\\widehat{\\operatorname{SE}(p_{*1} - p_{1*})} = \\widehat{\\operatorname{SE}(p_{01} - p_{10})} = \\frac{\\sqrt{n(f_{10} + f_{01}) - (f_{10} - f_{01})^2}}{n\\sqrt{n}}\n" }, { "math_id": 113, "text": "\\frac{1}{2}" }, { "math_id": 114, "text": "\n\\widehat{\\operatorname{SE}(p_{*1} - p_{1*})} = \\frac{\\sqrt{(n+2)(f_{10} + f_{01} + 1) - (f_{10} - f_{01})^2}}{(n+2)\\sqrt{(n+2)}}\n" }, { "math_id": 115, "text": "(p_{*1} - p_{1*})\\frac{n}{n+2} \\pm z_{\\alpha/2} \\cdot \\widehat{\\operatorname{SE}(\\hat{p}_i - \\hat{p}_j)}_{wald+2}" }, { "math_id": 116, "text": "j = \\min \\left\\{ j' \\in \\{1,\\dots,k\\}\\colon \\left(\\sum_{i=1}^{j'} p_i\\right) - X \\geq 0 \\right\\}." 
}, { "math_id": 117, "text": "p_1, p_2, \\ldots, p_k" }, { "math_id": 118, "text": "\\sum_{i=1}^k X_i = n " }, { "math_id": 119, "text": "X_i " }, { "math_id": 120, "text": "i " } ]
https://en.wikipedia.org/wiki?curid=1045553
1045640
Covering problems
Type of computational problem In combinatorics and computer science, covering problems are computational problems that ask whether a certain combinatorial structure 'covers' another, or how large the structure has to be to do that. Covering problems are minimization problems and usually integer linear programs, whose dual problems are called packing problems. The most prominent examples of covering problems are the set cover problem, which is equivalent to the hitting set problem, and its special cases, the vertex cover problem and the edge cover problem. Covering problems allow the covering primitives to overlap; the process of covering something with non-overlapping primitives is called decomposition. General linear programming formulation. In the context of linear programming, one can think of any minimization linear program as a covering problem if the coefficients in the constraint matrix, the objective function, and right-hand side are nonnegative. More precisely, consider the following general integer linear program: Such an integer linear program is called a covering problem if formula_0 for all formula_1 and formula_2. Intuition: Assume there are formula_3 types of objects, and each object of type formula_4 has an associated cost of formula_5. The number formula_6 indicates how many objects of type formula_4 we buy. If the constraints formula_7 are satisfied, it is said that "formula_8 is a covering" (the structures that are covered depend on the combinatorial context). Finally, an optimal solution to the above integer linear program is a covering of minimal cost. Kinds of covering problems. There are various kinds of covering problems in graph theory, computational geometry and more. Stochastic versions of the problem have also been studied. Covering in Petri nets. For Petri nets, the covering problem is defined as the question whether, for a given marking, there exists a run of the net such that some larger (or equal) marking can be reached. "Larger" means here that all components are at least as large as the ones of the given marking and at least one is properly larger. Rainbow covering. In some covering problems, the covering should satisfy some additional requirements. In particular, in the rainbow covering problem, each of the original objects has a "color", and it is required that the covering contains exactly one (or at most one) object of each color. Rainbow covering was studied e.g. for covering points by intervals: The problem is NP-hard (by reduction from linear SAT). Conflict-free covering. A more general notion is conflict-free covering. In this problem, there is a set "P" of points, a set "O" of objects, and a "conflict graph" on "O"; a subset of "O" is called conflict-free if it is an independent set in the conflict graph. "Conflict-free set cover" is the problem of finding a conflict-free subset of "O" that is a covering of "P". Banik, Panolan, Raman, Sahlot and Saurabh prove the following for the special case in which the conflict-graph has bounded arboricity: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
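As an illustration of the covering formulation (this example is not from the article), the following Python sketch runs the classical greedy heuristic on a tiny weighted set-cover instance; it repeatedly buys the set with the lowest cost per newly covered element, which gives an approximation rather than an exact solution of the integer linear program.

def greedy_set_cover(universe, sets, costs):
    # returns indices of chosen sets; assumes the union of the sets covers the universe
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min((s for s in range(len(sets)) if sets[s] & uncovered),
                   key=lambda s: costs[s] / len(sets[s] & uncovered))
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

universe = {1, 2, 3, 4, 5}
sets = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {4, 5}]
costs = [3, 1, 2, 1]
print(greedy_set_cover(universe, sets, costs))   # [1, 2, 0], total cost 6 (not necessarily optimal)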
[ { "math_id": 0, "text": "a_{ji}, b_j, c_i \\geq 0" }, { "math_id": 1, "text": "i=1,\\dots,n" }, { "math_id": 2, "text": "j=1,\\dots,m" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "i" }, { "math_id": 5, "text": "c_i" }, { "math_id": 6, "text": "x_i" }, { "math_id": 7, "text": "A\\mathbf{x}\\geq \\mathbf{b}" }, { "math_id": 8, "text": "\\mathbf{x}" } ]
https://en.wikipedia.org/wiki?curid=1045640
10456725
Dunkerley's method
Dunkerley's method is used in mechanical engineering to determine the critical speed of a shaft-rotor system. Other methods include the Rayleigh–Ritz method. Whirling of a shaft. No shaft can ever be perfectly straight or perfectly balanced. When an element of mass is offset from the axis of rotation, centrifugal force will tend to pull the mass outward. The elastic properties of the shaft will act to restore the “straightness”. If the frequency of rotation is equal to one of the resonant frequencies of the shaft, "whirling" will occur. In order to save the machine from failure, operation at such whirling speeds must be avoided. Whirling is a complex phenomenon that can include harmonics but we are only going to consider "synchronous whirl", where the frequency of whirling is the same as the rotational speed. Dunkerley’s formula (approximation). The whirling frequency of a symmetric cross section of a given length between two points is given by: formula_0 where: E = Young's modulus, I = second moment of area, m = mass of the shaft, L = length of the shaft between points. A shaft with weights added will have an angular velocity of N (RPM) equivalent as follows: formula_1
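A small Python sketch of the two formulas above, using hypothetical shaft data (the material and geometry values below are illustrative and not from the article):

def whirl_speed_rpm(E, I, m, L):
    # whirling speed of a uniform shaft between two supports, SI inputs, result in RPM
    return 94.251 * (E * I / (m * L ** 3)) ** 0.5

def dunkerley_rpm(speeds):
    # combined critical speed from Dunkerley's formula: 1/N^2 = 1/N_A^2 + 1/N_B^2 + ...
    return sum(1.0 / N ** 2 for N in speeds) ** -0.5

N_shaft = whirl_speed_rpm(E=200e9, I=5e-8, m=20.0, L=1.0)   # a hypothetical steel shaft
print(N_shaft)                           # roughly 2.1e3 RPM
print(dunkerley_rpm([N_shaft, 3000.0]))  # adding one mounted weight lowers the critical speed

Because the reciprocals of the squared speeds add, the combined critical speed is always below the lowest individual one.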
[ { "math_id": 0, "text": " N = 94.251 \\sqrt{E I \\over m L^3} \\ \\text{RPM}" }, { "math_id": 1, "text": "\n\\frac{1}{N_N^2} = \\frac{1}{N_A^2} + \\frac{1}{N_B^2} + \\cdots + \\frac{1}{N_n^2}\n" } ]
https://en.wikipedia.org/wiki?curid=10456725
10456890
Euclid–Mullin sequence
Infinite sequence of prime numbers The Euclid–Mullin sequence is an infinite sequence of distinct prime numbers, in which each element is the least prime factor of one plus the product of all earlier elements. They are named after the ancient Greek mathematician Euclid, because their definition relies on an idea in Euclid's proof that there are infinitely many primes, and after Albert A. Mullin, who asked about the sequence in 1963. The first 51 elements of the sequence are 2, 3, 7, 43, 13, 53, 5, 6221671, 38709183810571, 139, 2801, 11, 17, 5471, 52662739, 23003, 30693651606209, 37, 1741, 1313797957, 887, 71, 7127, 109, 23, 97, 159227, 643679794963466223081509857, 103, 1079990819, 9539, 3143065813, 29, 3847, 89, 19, 577, 223, 139703, 457, 9649, 61, 4357, 87991098722552272708281251793312351581099392851768893748012603709343, 107, 127, 3313, 227432689108589532754984915075774848386671439568260420754414940780761245893, 59, 31, 211... (sequence in the OEIS) These are the only known elements as of  2012[ [update]]. Finding the next one requires finding the least prime factor of a 335-digit number (which is known to be composite). Definition. The formula_0th element of the sequence, formula_1, is the least prime factor of formula_2 The first element is therefore the least prime factor of the empty product plus one, which is 2. The third element is (2 × 3) + 1 = 7. A better illustration is the fifth element in the sequence, 13. This is calculated by (2 × 3 × 7 × 43) + 1 = 1806 + 1 = 1807, the product of two primes, 13 × 139. Of these two primes, 13 is the smallest and so included in the sequence. Similarly, the seventh element, 5, is the result of (2 × 3 × 7 × 43 × 13 × 53) + 1 = 1244335, the prime factors of which are 5 and 248867. These examples illustrate why the sequence can leap from very large to very small numbers. Properties. The sequence is infinitely long and does not contain repeated elements. This can be proved using the method of Euclid's proof that there are infinitely many primes. That proof is constructive, and the sequence is the result of performing a version of that construction. Conjecture. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: Does every prime number appear in the Euclid–Mullin sequence? asked whether every prime number appears in the Euclid–Mullin sequence and, if not, whether the problem of testing a given prime for membership in the sequence is computable. Daniel Shanks (1991) conjectured, on the basis of heuristic assumptions that the distribution of primes is random, that every prime does appear in the sequence. However, although similar recursive sequences over other domains do not contain all primes, these problems both remain open for the original Euclid–Mullin sequence. The least prime number not known to be an element of the sequence is 41. The positions of the prime numbers from 2 to 97 are: 2:1, 3:2, 5:7, 7:3, 11:12, 13:5, 17:13, 19:36, 23:25, 29:33, 31:50, 37:18, 41:?, 43:4, 47:?, 53:6, 59:49, 61:42, 67:?, 71:22, 73:?, 79:?, 83:?, 89:35, 97:26 (sequence in the OEIS) where ? indicates that the position (or whether it exists at all) is unknown as of 2012. Related sequences. A related sequence of numbers determined by the largest prime factor of one plus the product of the previous numbers (rather than the smallest prime factor) is also known as the Euclid–Mullin sequence. It grows more quickly, but is not monotonic. 
The numbers in this sequence are 2, 3, 7, 43, 139, 50207, 340999, 2365347734339, 4680225641471129, 1368845206580129, 889340324577880670089824574922371, … (sequence in the OEIS). Not every prime number appears in this sequence, and the sequence of missing primes, 5, 11, 13, 17, 19, 23, 29, 31, 37, 41, 47, 53, 59, 61, 67, 71, 73, ... (sequence in the OEIS) has been proven to be infinite. It is also possible to generate modified versions of the Euclid–Mullin sequence by using the same rule of choosing the smallest prime factor at each step, but beginning with a different prime than 2. Alternatively, taking each number to be one plus the product of the previous numbers (rather than factoring it) gives Sylvester's sequence. The sequence constructed by repeatedly appending all factors of one plus the product of the previous numbers is the same as the sequence of prime factors of Sylvester's sequence. Like the Euclid–Mullin sequence, this is a non-monotonic sequence of primes, but it is known not to include all primes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
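To make the definition concrete, here is a short Python sketch (not from the article) that reproduces the first several terms by trial division; it is only practical for roughly the first eight elements, since later steps require factoring very large numbers.

def least_prime_factor(n):
    # smallest prime factor of n by trial division (fine only for small illustrations)
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def euclid_mullin(count):
    # first `count` terms: each term is the least prime factor of one plus the product so far
    terms, product = [], 1
    for _ in range(count):
        a = least_prime_factor(product + 1)
        terms.append(a)
        product *= a
    return terms

print(euclid_mullin(8))   # [2, 3, 7, 43, 13, 53, 5, 6221671], matching the list above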
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "a_n" }, { "math_id": 2, "text": "\\Bigl(\\prod_{i < n} a_i\\Bigr)+1\\,." } ]
https://en.wikipedia.org/wiki?curid=10456890
10459464
Cerf theory
Study of smooth real-valued functions on manifold and their singularities In mathematics, at the junction of singularity theory and differential topology, Cerf theory is the study of families of smooth real-valued functions formula_0 on a smooth manifold formula_1, their generic singularities and the topology of the subspaces these singularities define, as subspaces of the function space. The theory is named after Jean Cerf, who initiated it in the late 1960s. An example. Marston Morse proved that, provided formula_1 is compact, any smooth function formula_0 can be approximated by a Morse function. Thus, for many purposes, one can replace arbitrary functions on formula_1 by Morse functions. As a next step, one could ask, 'if you have a one-parameter family of functions which start and end at Morse functions, can you assume the whole family is Morse?' In general, the answer is no. Consider, for example, the one-parameter family of functions on formula_2 given by formula_3 At time formula_4, it has no critical points, but at time formula_5, it is a Morse function with two critical points at formula_6. Cerf showed that a one-parameter family of functions between two Morse functions can be approximated by one that is Morse at all but finitely many degenerate times. The degeneracies involve a birth/death transition of critical points, as in the above example when, at formula_7, an index 0 and index 1 critical point are created as formula_8 increases. A "stratification" of an infinite-dimensional space. Returning to the general case where formula_1 is a compact manifold, let formula_9 denote the space of Morse functions on formula_1, and formula_10 the space of real-valued smooth functions on formula_1. Morse proved that formula_11 is an open and dense subset in the formula_12 topology. For the purposes of intuition, here is an analogy. Think of the Morse functions as the top-dimensional open stratum in a stratification of formula_10 (we make no claim that such a stratification exists, but suppose one does). Notice that in stratified spaces, the co-dimension 0 open stratum is open and dense. For notational purposes, reverse the conventions for indexing the stratifications in a stratified space, and index the open strata not by their dimension, but by their co-dimension. This is convenient since formula_10 is infinite-dimensional if formula_1 is not a finite set. By assumption, the open co-dimension 0 stratum of formula_10 is formula_9, i.e.: formula_13. In a stratified space formula_14, frequently formula_15 is disconnected. The essential property of the co-dimension 1 stratum formula_16 is that any path in formula_14 which starts and ends in formula_15 can be approximated by a path that intersects formula_16 transversely in finitely many points, and does not intersect formula_17 for any formula_18. Thus Cerf theory is the study of the positive co-dimensional strata of formula_10, i.e.: formula_19 for formula_20. In the case of formula_21, only for formula_7 is the function not Morse, and formula_22 has a cubic degenerate critical point corresponding to the birth/death transition. A single time parameter, statement of theorem. The Morse Theorem asserts that if formula_23 is a Morse function, then near a critical point formula_24 it is conjugate to a function formula_25 of the form formula_26 where formula_27. Cerf's one-parameter theorem asserts the essential property of the co-dimension one stratum. 
Precisely, if formula_28 is a one-parameter family of smooth functions on formula_1 with formula_29, and formula_30 Morse, then there exists a smooth one-parameter family formula_31 such that formula_32, formula_33 is uniformly close to formula_34 in the formula_35-topology on functions formula_36. Moreover, formula_37 is Morse at all but finitely many times. At a non-Morse time the function has only one degenerate critical point formula_24, and near that point the family formula_37 is conjugate to the family formula_38 where formula_39. If formula_40 this is a one-parameter family of functions where two critical points are created (as formula_8 increases), and for formula_41 it is a one-parameter family of functions where two critical points are destroyed. Origins. The PL-Schoenflies problem for formula_42 was solved by J. W. Alexander in 1924. His proof was adapted to the smooth case by Morse and Emilio Baiada. The essential property was used by Cerf in order to prove that every orientation-preserving diffeomorphism of formula_43 is isotopic to the identity, seen as a one-parameter extension of the Schoenflies theorem for formula_42. The corollary formula_44 at the time had wide implications in differential topology. The essential property was later used by Cerf to prove the pseudo-isotopy theorem for high-dimensional simply-connected manifolds. The proof is a one-parameter extension of Stephen Smale's proof of the h-cobordism theorem (the rewriting of Smale's proof into the functional framework was done by Morse, and also by John Milnor and by Cerf, André Gramain, and Bernard Morin following a suggestion of René Thom). Cerf's proof is built on the work of Thom and John Mather. A useful modern summary of Thom and Mather's work from that period is the book of Marty Golubitsky and Victor Guillemin. Applications. Beside the above-mentioned applications, Robion Kirby used Cerf Theory as a key step in justifying the Kirby calculus. Generalization. A stratification of the complement of an infinite co-dimension subspace of the space of smooth maps formula_45 was eventually developed by Francis Sergeraert. During the seventies, the classification problem for pseudo-isotopies of non-simply connected manifolds was solved by Allen Hatcher and John Wagoner, discovering algebraic formula_46-obstructions on formula_47 (formula_48) and formula_49 (formula_50) and by Kiyoshi Igusa, discovering obstructions of a similar nature on formula_47 (formula_51).
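The birth/death transition in the example family given earlier, formula_3, can be made concrete with a short Python sketch (added here for illustration): the critical points are the real roots of the derivative, and a pair of them is born as formula_8 passes through zero.

import math

def critical_points(t):
    # real roots of d/dx [x**3/3 - t*x] = x**2 - t
    if t < 0:
        return []            # no critical points: the function is monotone
    if t == 0:
        return [0.0]         # a single degenerate (birth/death) critical point
    r = math.sqrt(t)
    return [-r, r]           # a local maximum and a local minimum

for t in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(t, critical_points(t))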
[ { "math_id": 0, "text": "f\\colon M \\to \\R" }, { "math_id": 1, "text": "M" }, { "math_id": 2, "text": "M=\\mathbb R" }, { "math_id": 3, "text": "f_t(x)=(1/3)x^3-tx." }, { "math_id": 4, "text": "t=-1" }, { "math_id": 5, "text": "t=1" }, { "math_id": 6, "text": "x=\\pm 1" }, { "math_id": 7, "text": "t=0" }, { "math_id": 8, "text": "t" }, { "math_id": 9, "text": "\\operatorname{Morse}(M)" }, { "math_id": 10, "text": "\\operatorname{Func}(M)" }, { "math_id": 11, "text": "\\operatorname{Morse}(M) \\subset \\operatorname{Func}(M)" }, { "math_id": 12, "text": "C^\\infty" }, { "math_id": 13, "text": "\\operatorname{Func}(M)^0=\\operatorname{Morse}(M)" }, { "math_id": 14, "text": "X" }, { "math_id": 15, "text": "X^0" }, { "math_id": 16, "text": "X^1" }, { "math_id": 17, "text": "X^i" }, { "math_id": 18, "text": "i>1" }, { "math_id": 19, "text": "\\operatorname{Func}(M)^i" }, { "math_id": 20, "text": "i>0" }, { "math_id": 21, "text": "f_t(x)=x^3-tx" }, { "math_id": 22, "text": "f_0(x)=x^3" }, { "math_id": 23, "text": "f \\colon M \\to \\mathbb R" }, { "math_id": 24, "text": "p" }, { "math_id": 25, "text": "g \\colon \\mathbb R^n \\to \\mathbb R" }, { "math_id": 26, "text": "g(x_1,x_2,\\dotsc,x_n) = f(p) + \\epsilon_1 x_1^2 + \\epsilon_2 x_2^2 + \\dotsb + \\epsilon_n x_n^2" }, { "math_id": 27, "text": "\\epsilon_i \\in \\{\\pm 1\\}" }, { "math_id": 28, "text": "f_t \\colon M \\to \\mathbb R" }, { "math_id": 29, "text": "t \\in [0,1]" }, { "math_id": 30, "text": "f_0, f_1" }, { "math_id": 31, "text": "F_t \\colon M \\to \\mathbb R" }, { "math_id": 32, "text": "F_0 = f_0, F_1 = f_1" }, { "math_id": 33, "text": "F" }, { "math_id": 34, "text": "f" }, { "math_id": 35, "text": "C^k" }, { "math_id": 36, "text": "M \\times [0,1] \\to \\mathbb R" }, { "math_id": 37, "text": "F_t" }, { "math_id": 38, "text": "g_t(x_1,x_2,\\dotsc,x_n) = f(p) + x_1^3+\\epsilon_1 tx_1 + \\epsilon_2 x_2^2 + \\dotsb + \\epsilon_n x_n^2" }, { "math_id": 39, "text": " \\epsilon_i \\in \\{\\pm 1\\}, t \\in [-1,1]" }, { "math_id": 40, "text": "\\epsilon_1 = -1" }, { "math_id": 41, "text": "\\epsilon_1 = 1" }, { "math_id": 42, "text": "S^2 \\subset \\R^3" }, { "math_id": 43, "text": "S^3" }, { "math_id": 44, "text": "\\Gamma_4 = 0" }, { "math_id": 45, "text": "\\{ f \\colon M \\to \\R \\}" }, { "math_id": 46, "text": "K_i" }, { "math_id": 47, "text": "\\pi_1 M" }, { "math_id": 48, "text": "i=2" }, { "math_id": 49, "text": "\\pi_2 M" }, { "math_id": 50, "text": "i=1" }, { "math_id": 51, "text": "i=3" } ]
https://en.wikipedia.org/wiki?curid=10459464
1045999
1,000,000
Natural number 1,000,000 (one million), or one thousand thousand, is the natural number following 999,999 and preceding 1,000,001. The word is derived from the early Italian "millione" ("milione" in modern Italian), from "mille", "thousand", plus the augmentative suffix "-one". It is commonly abbreviated, e.g. as "m", "M", "MM", or "mn". In scientific notation, it is written as 1×10⁶ or 10⁶. Physical quantities can also be expressed using the SI prefix mega (M), when dealing with SI units; for example, 1 megawatt (1 MW) equals 1,000,000 watts. The meaning of the word "million" is common to the short scale and long scale numbering systems, unlike the larger numbers, which have different names in the two systems. The million is sometimes used in the English language as a metaphor for a very large number, as in "Not in a million years" and "You're one in a million", or a hyperbole, as in "I've walked a million miles" and "You've asked a million-dollar question". 1,000,000 is the square of 1000 and the cube of 100. Visualizing one million. Even though it is often stressed that counting to precisely a million would be an exceedingly tedious task due to the time and concentration required, there are many ways to bring the number "down to size" in approximate quantities, ignoring irregularities or packing effects. In Indian English and Pakistani English, it is also expressed as 10 lakh. Lakh is derived from the Sanskrit word for 100,000. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n\\le10^{10}" }, { "math_id": 1, "text": "\\sum_{k=0}^{22}\\omega(n+k)\\le57" }, { "math_id": 2, "text": "\\omega(n)" } ]
https://en.wikipedia.org/wiki?curid=1045999
10460511
Viviani's theorem
On the sum of the distances from an interior point to the sides of an equilateral triangle Viviani's theorem, named after Vincenzo Viviani, states that the sum of the shortest distances from "any" interior point to the sides of an equilateral triangle equals the length of the triangle's altitude. It is a theorem commonly employed in various math competitions, secondary school mathematics examinations, and has wide applicability to many problems in the real world. Proof. This proof depends on the readily-proved proposition that the area of a triangle is half its base times its height—that is, half the product of one side with the altitude from that side. Let ABC be an equilateral triangle whose height is "h" and whose side is "a". Let P be any point inside the triangle, and "s, t, u" the perpendicular distances of P from the sides. Draw a line from P to each of A, B, and C, forming three triangles PAB, PBC, and PCA. Now, the areas of these triangles are formula_0, formula_1, and formula_2. They exactly fill the enclosing triangle, so the sum of these areas is equal to the area of the enclosing triangle. So we can write: formula_3 and thus formula_4 Q.E.D. Converse. The converse also holds: If the sum of the distances from an interior point of a triangle to the sides is independent of the location of the point, the triangle is equilateral. Applications. Viviani's theorem means that lines parallel to the sides of an equilateral triangle give coordinates for making ternary plots, such as flammability diagrams. More generally, they allow one to give coordinates on a regular simplex in the same way. Extensions. Parallelogram. The sum of the distances from any interior point of a parallelogram to the sides is independent of the location of the point. The converse also holds: If the sum of the distances from a point in the interior of a quadrilateral to the sides is independent of the location of the point, then the quadrilateral is a parallelogram. The result generalizes to any 2"n"-gon with opposite sides parallel. Since the sum of distances between any pair of opposite parallel sides is constant, it follows that the sum of all pairwise sums between the pairs of parallel sides, is also constant. The converse in general is not true, as the result holds for an "equilateral" hexagon, which does not necessarily have opposite sides parallel. Regular polygon. If a polygon is regular (both equiangular and equilateral), the sum of the distances to the sides from an interior point is independent of the location of the point. Specifically, it equals "n" times the apothem, where "n" is the number of sides and the apothem is the distance from the center to a side. However, the converse does not hold; the non-square parallelogram is a counterexample. Equiangular polygon. The sum of the distances from an interior point to the sides of an equiangular polygon does not depend on the location of the point. Convex polygon. A necessary and sufficient condition for a convex polygon to have a constant sum of distances from any interior point to the sides is that there exist three non-collinear interior points with equal sums of distances. Regular polyhedron. The sum of the distances from any point in the interior of a regular polyhedron to the sides is independent of the location of the point. However, the converse does not hold, not even for tetrahedra. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{u \\cdot a}{2}" }, { "math_id": 1, "text": "\\frac{s \\cdot a}{2}" }, { "math_id": 2, "text": "\\frac{t \\cdot a}{2}" }, { "math_id": 3, "text": "\\frac{u \\cdot a}{2} + \\frac{s \\cdot a}{2} + \\frac{t \\cdot a}{2} = \\frac{h \\cdot a}{2}" }, { "math_id": 4, "text": "u + s + t = h" } ]
https://en.wikipedia.org/wiki?curid=10460511
10461151
Keystone effect
Image distortion caused by projection The keystone effect is the apparent distortion of an image caused by projecting it onto an angled surface. It is the distortion of the image dimensions, such as making a square look like a trapezoid, the shape of an architectural keystone, hence the name of the feature. In the typical case of a projector sitting on a table, and looking upwards to the screen, the image is larger at the top than on the bottom. Some areas of the screen may not be focused correctly as the projector lens is focused at the average distance only. In photography, the term is used to describe the apparent leaning of buildings towards the vertical centerline of the photo when shooting upwards, a common effect in architectural photography. Likewise, when taking photos looking down, e.g., from a skyscraper, buildings appear to get broader towards the top. The effect is usually corrected by either using special lenses in tilt–shift photography or in post-processing using modern image editing software. Theory. The distortion suffered by the image depends on the angle of the projector to the screen, and the beam angle. The distortion (on a two-dimensional model, and for small focus angles) is best approximated by: formula_0 where formula_1 is the angle between the screen axis and the central ray from the projector, and formula_2 is the width of the focus. From the formula, it is clear that there will be no distortion when formula_1 is zero, or perpendicular to the screen. In stereo imaging. In stereoscopy, two lenses are used to view the same subject image, each from a slightly different perspective, allowing a three-dimensional view of the subject. If the two images are not exactly parallel, this causes a keystone effect. This is particularly noticeable when the lenses are close to the subject, as with a stereo microscope, but is also a common problem with many 3D stereo camera lenses. Solving the problem. The problem arises for screen projectors that don't have the depth of focus necessary to keep all lines (from top to bottom) focused at the same time. Common solutions to this problem are: Correction. Keystone correction, colloquially also called keystoning, is a function that allows multimedia projectors that are not placed perpendicular to the horizontal centerline of the screen (too high or too low) to skew the output image, thereby making it rectangular. It is often necessary for a projector to be placed in a position outside the line perpendicular to the screen and going through the screen's center, for example, when the projector is mounted to a ceiling or placed on a table top that is lower or higher than the projection screen. Most ceiling-mounted projectors have to be mounted upside down to accommodate for the throw of the image from the lens, with the image rotated right-side-up with software. Keystone correction is a feature included with many projectors that provides the ability to intentionally "distort" the output image to recreate the original rectangular image provided by the video or computer source, thus eliminating the skewed output that would otherwise result due to angled projection. The ability to correct horizontal keystone distortion is generally only available on larger or professional level projectors. In most consumer units, this is easily corrected by moving the projector left or right as necessary, or less often by lens shifting, with similar principles as tilt–shift photography. Functionality. 
In modern projectors keystone correction technology is performed digitally (rather than optically) via the internal (LCD) panels or (DLP) mirrors of the projector, depending on the technology used. Thus, when applying keystone correction to an image, the number of individual pixels used is reduced, lowering the resolution and thus degrading the quality of the image projected. Home theater enthusiasts would argue that keystoning should not be used because of the impact it has on image quality. However, it is a useful technology in cases where the projector cannot be mounted directly in front of the screen, or on projectors utilizing lens shift technology where the projector must be mounted outside the frame of the screen.
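A minimal sketch of the distortion formula from the Theory section (angles assumed to be given in degrees; the example values are arbitrary):

import math

def keystone_ratio(epsilon_deg, alpha_deg):
    # cos(epsilon - alpha/2) / cos(epsilon + alpha/2): relative stretching of the far edge
    e, a = math.radians(epsilon_deg), math.radians(alpha_deg)
    return math.cos(e - a / 2) / math.cos(e + a / 2)

print(keystone_ratio(0, 30))    # 1.0: no distortion when the projector axis is perpendicular to the screen
print(keystone_ratio(15, 30))   # about 1.15: the image is noticeably larger at the far edge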
[ { "math_id": 0, "text": "\\frac{\\cos\\left(\\varepsilon - \\frac{\\alpha}{2}\\right)}{\\cos\\left(\\varepsilon + \\frac{\\alpha}{2}\\right)}" }, { "math_id": 1, "text": "\\epsilon" }, { "math_id": 2, "text": "\\alpha" } ]
https://en.wikipedia.org/wiki?curid=10461151
1046155
Projection-valued measure
Mathematical operator-valued measure of interest in quantum mechanics and functional analysis In mathematics, particularly in functional analysis, a projection-valued measure (or spectral measure) is a function defined on certain subsets of a fixed set and whose values are self-adjoint projections on a fixed Hilbert space. A projection-valued measure (PVM) is formally similar to a real-valued measure, except that its values are self-adjoint projections rather than real numbers. As in the case of ordinary measures, it is possible to integrate complex-valued functions with respect to a PVM; the result of such an integration is a linear operator on the given Hilbert space. Projection-valued measures are used to express results in spectral theory, such as the important spectral theorem for self-adjoint operators, in which case the PVM is sometimes referred to as the spectral measure. The Borel functional calculus for self-adjoint operators is constructed using integrals with respect to PVMs. In quantum mechanics, PVMs are the mathematical description of projective measurements. They are generalized by positive operator valued measures (POVMs) in the same sense that a mixed state or density matrix generalizes the notion of a pure state. Definition. Let formula_0 denote a separable complex Hilbert space and formula_1 a measurable space consisting of a set formula_2 and a Borel σ-algebra formula_3 on formula_2. A projection-valued measure formula_4 is a map from formula_3 to the set of bounded self-adjoint operators on formula_0 satisfying the following properties: (1) formula_5 is an orthogonal projection for every formula_6 (2) formula_7 and formula_8, where formula_9 is the empty set and formula_10 the identity operator; (3) if formula_11 are disjoint sets in formula_3, then for every formula_12 formula_13 (4) formula_14 for all formula_15 The second and fourth property show that if formula_16 and formula_17 are disjoint, i.e., formula_18, the images formula_19 and formula_20 are orthogonal to each other. Let formula_21 and its orthogonal complement formula_22 denote the image and kernel, respectively, of formula_5. If formula_23 is a closed subspace of formula_0 then formula_0 can be written as the "orthogonal decomposition" formula_24 and formula_25 is the unique identity operator on formula_23 satisfying all four properties. For every formula_26 and formula_27 the projection-valued measure forms a complex-valued measure on formula_0 defined as formula_28 with total variation at most formula_29. It reduces to a real-valued measure when formula_30 and a probability measure when formula_31 is a unit vector. Example. Let formula_32 be a "σ"-finite measure space and, for all formula_33, let formula_34 be defined as formula_35 i.e., as multiplication by the indicator function formula_36 on "L"2("X"). Then formula_37 defines a projection-valued measure. For example, if formula_38, formula_39, and formula_40, there is then the associated complex measure formula_41 which takes a measurable function formula_42 and gives the integral formula_43 Extensions of projection-valued measures. If π is a projection-valued measure on a measurable space ("X", "M"), then the map formula_44 extends to a linear map on the vector space of step functions on "X". In fact, it is easy to check that this map is a ring homomorphism. This map extends in a canonical way to all bounded complex-valued measurable functions on "X", and we have the following. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — For any bounded Borel function formula_45 on formula_2, there exists a unique bounded operator formula_46 such that formula_47 where formula_48 is a finite Borel measure given by formula_49 Hence, formula_50 is a finite measure space. 
The theorem is also correct for unbounded measurable functions formula_45 but then formula_51 will be an unbounded linear operator on the Hilbert space formula_0. This allows one to define the Borel functional calculus for such operators and then pass to measurable functions via the Riesz–Markov–Kakutani representation theorem. That is, if formula_52 is a measurable function, then a unique measure exists such that formula_53 Spectral theorem. Let formula_0 be a separable complex Hilbert space, formula_54 be a bounded self-adjoint operator and formula_55 the spectrum of formula_56. Then the spectral theorem says that there exists a unique projection-valued measure formula_57, defined on a Borel subset formula_58, such that formula_59 where the integral extends to an unbounded function formula_60 when the spectrum of formula_56 is unbounded. Direct integrals. First we provide a general example of a projection-valued measure based on direct integrals. Suppose ("X", "M", μ) is a measure space and let {"H""x"}"x" ∈ "X" be a μ-measurable family of separable Hilbert spaces. For every "E" ∈ "M", let π("E") be the operator of multiplication by 1"E" on the Hilbert space formula_61 Then π is a projection-valued measure on ("X", "M"). Suppose π, ρ are projection-valued measures on ("X", "M") with values in the projections of "H", "K". π, ρ are unitarily equivalent if and only if there is a unitary operator "U":"H" → "K" such that formula_62 for every "E" ∈ "M". Theorem. If ("X", "M") is a standard Borel space, then for every projection-valued measure π on ("X", "M") taking values in the projections of a "separable" Hilbert space, there is a Borel measure μ and a μ-measurable family of Hilbert spaces {"H""x"}"x" ∈ "X", such that π is unitarily equivalent to multiplication by 1"E" on the Hilbert space formula_61 The measure class of μ and the measure equivalence class of the multiplicity function "x" → dim "H""x" completely characterize the projection-valued measure up to unitary equivalence. A projection-valued measure π is "homogeneous of multiplicity" "n" if and only if the multiplicity function has constant value "n". The following is then clear. Theorem. Any projection-valued measure π taking values in the projections of a separable Hilbert space is an orthogonal direct sum of homogeneous projection-valued measures: formula_63 where formula_64 and formula_65 Application in quantum mechanics. In quantum mechanics, given a projection-valued measure of a measurable space formula_2 to the space of continuous endomorphisms upon a Hilbert space formula_0, the projective Hilbert space formula_66 of formula_0 is interpreted as the set of possible (normalizable) states formula_67 of a quantum system, the measurable space formula_2 is the value space for some quantum property of the system (an "observable"), and the projection-valued measure formula_4 expresses the probability that the observable takes on various values. A common choice for formula_2 is the real line, but it may also be formula_68 (for position or momentum in three dimensions), a discrete set (for angular momentum, energy of a bound state, etc.), or the two-element set "true" and "false" for the truth-value of an arbitrary proposition about formula_67. Let formula_69 be a measurable subset of formula_2 and formula_67 a normalized vector quantum state in formula_0, so that its Hilbert norm is unitary, formula_70. The probability that the observable takes its value in formula_69, given the system in state formula_67, is formula_71 We can parse this in two ways. First, for each fixed formula_69, the projection formula_5 is a self-adjoint operator on formula_0 whose 1-eigenspace consists of the states formula_67 for which the value of the observable always lies in formula_69, and whose 0-eigenspace consists of the states formula_67 for which the value of the observable never lies in formula_69. Second, for each fixed normalized vector state formula_67, the association formula_72 is a probability measure on formula_2 making the values of the observable into a random variable. 
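A finite-dimensional sketch of these ideas (NumPy assumed; the matrix and state are arbitrary examples): the spectral theorem for a real symmetric matrix yields a projection-valued measure supported on its eigenvalues, and the probability formula_71 becomes a sum of squared overlaps.

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])        # a toy self-adjoint operator on a 3-dimensional Hilbert space
evals, evecs = np.linalg.eigh(A)

def pvm(in_E):
    # pi(E): sum of the eigenprojections whose eigenvalue lies in the Borel set E
    P = np.zeros_like(A)
    for lam, v in zip(evals, evecs.T):
        if in_E(lam):
            P += np.outer(v, v)
    return P

phi = np.array([1.0, 0.0, 0.0])                        # a normalized state
print(phi @ pvm(lambda lam: lam > 2.5) @ phi)          # 0.5: probability that the observable exceeds 2.5
print(np.allclose(pvm(lambda lam: True), np.eye(3)))   # True: pi(X) is the identity

Summing eigenvalue times eigenprojection reconstructs A, which is the integral in the spectral theorem reduced to a finite sum.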
A measurement that can be performed by a projection-valued measure formula_4 is called a projective measurement. If formula_2 is the real number line, there exists, associated to formula_4, a self-adjoint operator formula_56 defined on formula_0 by formula_73 which reduces to formula_74 if the support of formula_4 is a discrete subset of formula_2. The above operator formula_56 is called the observable associated with the spectral measure. Generalizations. The idea of a projection-valued measure is generalized by the positive operator-valued measure (POVM), where the need for the orthogonality implied by projection operators is replaced by the idea of a set of operators that are a non-orthogonal partition of unity. This generalization is motivated by applications to quantum information theory. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "H" }, { "math_id": 1, "text": "(X, M)" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "M" }, { "math_id": 4, "text": "\\pi" }, { "math_id": 5, "text": "\\pi(E)" }, { "math_id": 6, "text": "E \\in M." }, { "math_id": 7, "text": "\\pi(\\emptyset) = 0" }, { "math_id": 8, "text": "\\pi(X) = I" }, { "math_id": 9, "text": "\\emptyset" }, { "math_id": 10, "text": "I" }, { "math_id": 11, "text": "E_1, E_2, E_3,\\dotsc" }, { "math_id": 12, "text": "v \\in H" }, { "math_id": 13, "text": "\\pi\\left(\\bigcup_{j=1}^{\\infty} E_j \\right)v = \\sum_{j=1}^{\\infty} \\pi(E_j) v." }, { "math_id": 14, "text": "\\pi(E_1 \\cap E_2)= \\pi(E_1)\\pi(E_2)" }, { "math_id": 15, "text": "E_1, E_2 \\in M." }, { "math_id": 16, "text": "\nE_1 " }, { "math_id": 17, "text": "E_2" }, { "math_id": 18, "text": "E_1 \\cap E_2 = \\emptyset" }, { "math_id": 19, "text": "\\pi(E_1)" }, { "math_id": 20, "text": "\\pi(E_2)" }, { "math_id": 21, "text": "V_E = \\operatorname{im}(\\pi(E))" }, { "math_id": 22, "text": "V^\\perp_E=\\ker(\\pi(E))" }, { "math_id": 23, "text": "V_E " }, { "math_id": 24, "text": "H=V_E \\oplus V^\\perp_E" }, { "math_id": 25, "text": "\\pi(E)=I_E" }, { "math_id": 26, "text": "\\xi,\\eta\\in H" }, { "math_id": 27, "text": "E\\in M" }, { "math_id": 28, "text": " \\mu_{\\xi,\\eta}(E) := \\langle \\pi(E)\\xi \\mid \\eta \\rangle \n" }, { "math_id": 29, "text": "\\|\\xi\\|\\|\\eta\\|" }, { "math_id": 30, "text": " \\mu_{\\xi}(E) := \\langle \\pi(E)\\xi \\mid \\xi \\rangle \n" }, { "math_id": 31, "text": "\\xi" }, { "math_id": 32, "text": "(X, M, \\mu)" }, { "math_id": 33, "text": "E \\in M" }, { "math_id": 34, "text": "\n\\pi(E) : L^2(X) \\to L^2 (X)\n" }, { "math_id": 35, "text": "\\psi \\mapsto \\pi(E)\\psi=1_E \\psi," }, { "math_id": 36, "text": "1_E" }, { "math_id": 37, "text": "\\pi(E)=1_E" }, { "math_id": 38, "text": "X = \\mathbb{R}" }, { "math_id": 39, "text": "E = (0,1)" }, { "math_id": 40, "text": "\\phi,\\psi \\in L^2(\\mathbb{R})" }, { "math_id": 41, "text": "\\mu_{\\phi,\\psi}" }, { "math_id": 42, "text": "f: \\mathbb{R} \\to \\mathbb{R}" }, { "math_id": 43, "text": "\\int_E f\\,d\\mu_{\\phi,\\psi} = \\int_0^1 f(x)\\psi(x)\\overline{\\phi}(x)\\,dx" }, { "math_id": 44, "text": "\n \\chi_E \\mapsto \\pi(E)\n" }, { "math_id": 45, "text": "f" }, { "math_id": 46, "text": " T : H \\to H " }, { "math_id": 47, "text": "\\langle T \\xi \\mid \\xi \\rangle = \\int_X f(\\lambda) \\,d\\mu_{\\xi}(\\lambda), \\quad \\forall \\xi \\in H." }, { "math_id": 48, "text": "\\mu_{\\xi}" }, { "math_id": 49, "text": "\\mu_{\\xi}(E) := \\langle \\pi(E)\\xi \\mid \\xi \\rangle, \\quad \\forall E \\in M." }, { "math_id": 50, "text": "(X,M,\\mu)" }, { "math_id": 51, "text": "T" }, { "math_id": 52, "text": "g:\\mathbb{R}\\to\\mathbb{C}" }, { "math_id": 53, "text": "g(T) :=\\int_\\mathbb{R} g(x) \\, d\\pi(x)." }, { "math_id": 54, "text": "A:H\\to H" }, { "math_id": 55, "text": "\\sigma(A)" }, { "math_id": 56, "text": "A" }, { "math_id": 57, "text": "\\pi^A" }, { "math_id": 58, "text": " E \\subset \\sigma(A)" }, { "math_id": 59, "text": "A =\\int_{\\sigma(A)} \\lambda \\, d\\pi^A(\\lambda)," }, { "math_id": 60, "text": "\\lambda" }, { "math_id": 61, "text": " \\int_X^\\oplus H_x \\ d \\mu(x). 
" }, { "math_id": 62, "text": " \\pi(E) = U^* \\rho(E) U \\quad " }, { "math_id": 63, "text": " \\pi = \\bigoplus_{1 \\leq n \\leq \\omega} (\\pi \\mid H_n) " }, { "math_id": 64, "text": " H_n = \\int_{X_n}^\\oplus H_x \\ d (\\mu \\mid X_n) (x) " }, { "math_id": 65, "text": " X_n = \\{x \\in X: \\dim H_x = n\\}. " }, { "math_id": 66, "text": "\\mathbf{P}(H)" }, { "math_id": 67, "text": "\\varphi" }, { "math_id": 68, "text": "\\mathbb{R}^3" }, { "math_id": 69, "text": "E" }, { "math_id": 70, "text": "\\|\\varphi\\|=1" }, { "math_id": 71, "text": "\n P_\\pi(\\varphi)(E) = \\langle \\varphi\\mid\\pi(E)(\\varphi)\\rangle = \\langle \\varphi|\\pi(E)|\\varphi\\rangle." }, { "math_id": 72, "text": "\nP_\\pi(\\varphi) :\nE \\mapsto \\langle\\varphi\\mid\\pi(E)\\varphi\\rangle\n" }, { "math_id": 73, "text": "A(\\varphi) = \\int_{\\mathbb{R}} \\lambda \\,d\\pi(\\lambda)(\\varphi)," }, { "math_id": 74, "text": "A(\\varphi) = \\sum_i \\lambda_i \\pi({\\lambda_i})(\\varphi)" } ]
https://en.wikipedia.org/wiki?curid=1046155
1046247
Reciprocity (photography)
A photography term In photography, reciprocity is the inverse relationship between the intensity and duration of light that determines the reaction of light-sensitive material. Within a normal exposure range for film stock, for example, the reciprocity law states that the film response will be determined by the total exposure, defined as intensity × time. Therefore, the same response (for example, the optical density of the developed film) can result from reducing duration and increasing light intensity, and vice versa. The reciprocal relationship is assumed in most sensitometry, for example when measuring a Hurter and Driffield curve (optical density versus logarithm of total exposure) for a photographic emulsion. Total exposure of the film or sensor, the product of focal-plane illuminance times exposure time, is measured in lux seconds. History. The idea of reciprocity, once known as Bunsen–Roscoe reciprocity, originated from the work of Robert Bunsen and Henry Roscoe in 1862. Deviations from the reciprocity law were reported by Captain William de Wiveleslie Abney in 1893, and extensively studied by Karl Schwarzschild in 1899. Schwarzschild's model was found wanting by Abney and by Englisch, and better models have been proposed in subsequent decades of the early twentieth century. In 1913, Kron formulated an equation to describe the effect in terms of curves of constant density, which J. Halm adopted and modified, leading to the "Kron–Halm catenary equation" or "Kron–Halm–Webb formula" to describe departures from reciprocity. In chemical photography. In photography, "reciprocity" refers to the relationship whereby the total light energy – proportional to the total exposure, the product of the light intensity and exposure time, controlled by aperture and shutter speed, respectively – determines the effect of the light on the film. That is, an increase of brightness by a certain factor is exactly compensated by a decrease of exposure time by the same factor, and vice versa. In other words, there is under normal circumstances a reciprocal proportion between aperture area and shutter speed for a given photographic result, with a wider aperture requiring a faster shutter speed for the same effect. For example, an EV of 10 may be achieved with an aperture (f-number) of f/2.8 and a shutter speed of 1/125 s. The same exposure is achieved by doubling the aperture area to f/2 and halving the exposure time to 1/250 s, or by halving the aperture area to f/4 and doubling the exposure time to 1/60 s; in each case the response of the film is expected to be the same. Reciprocity failure. For most photographic materials, reciprocity is valid with good accuracy over a range of values of exposure duration, but becomes increasingly inaccurate as this range is departed from: this is reciprocity failure (reciprocity law failure, or the Schwarzschild effect). As the light level decreases out of the reciprocity range, the increase in duration, and hence of total exposure, required to produce an equivalent response becomes higher than the formula states; for instance, at half of the light required for a normal exposure, the duration must be more than doubled for the same result. Multipliers used to correct for this effect are called "reciprocity factors" (see model below). At very low light levels, film is less responsive. 
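Before turning to the failure mechanism, the normal-range trade-off quoted above is easy to check numerically. The short Python sketch below is purely illustrative: it computes the exposure value EV = log2(N²/t) for an f-number N and a shutter time t in seconds; equal EV means equal total exposure as long as reciprocity holds.

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """EV = log2(N^2 / t); combinations with equal EV give the same total exposure."""
    return math.log2(f_number ** 2 / shutter_s)

for n, t in [(2.8, 1 / 125), (2.0, 1 / 250), (4.0, 1 / 60)]:
    print(f"f/{n} at {t:.4f} s -> EV {exposure_value(n, t):.2f}")
# All three settings land near EV 10, so within the reciprocity range the film
# response is expected to be the same, exactly as described above.
```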
Light can be considered to be a stream of discrete photons, and a light-sensitive emulsion is composed of discrete light-sensitive grains, usually silver halide crystals. Each grain must absorb a certain number of photons in order for the light-driven reaction to occur and the latent image to form. In particular, if the surface of the silver halide crystal has a cluster of approximately four or more reduced silver atoms, resulting from absorption of a sufficient number of photons (usually a few dozen photons are required), it is rendered developable. At low light levels, "i.e." few photons per unit time, photons impinge upon each grain relatively infrequently; if the four photons required arrive over a long enough interval, the partial change due to the first one or two is not stable enough to survive before enough photons arrive to make a permanent latent image center. This breakdown in the usual tradeoff between aperture and shutter speed is known as reciprocity failure. Each different film type has a different response at low light levels. Some films are very susceptible to reciprocity failure, and others much less so. Some films that are very light sensitive at normal illumination levels and normal exposure times lose much of their sensitivity at low light levels, becoming effectively "slow" films for long exposures. Conversely some films that are "slow" under normal exposure duration retain their light sensitivity better at low light levels. For example, for a given film, if a light meter indicates a required EV of 5 and the photographer sets the aperture to f/11, then ordinarily a 4-second exposure would be required; a reciprocity correction factor of 1.5 would require the exposure to be extended to 6 seconds for the same result. Reciprocity failure generally becomes significant at exposures of longer than about 1 sec for film, and above 30 sec for paper. Reciprocity also breaks down at extremely high levels of illumination with very short exposures. This is a concern for scientific and technical photography, but rarely to general photographers, as exposures significantly shorter than a millisecond are only required for subjects such as explosions and in particle physics, or when taking high-speed motion pictures with very high shutter speeds (1/10,000 sec or faster). Schwarzschild law. In response to astronomical observations of low intensity reciprocity failure, Karl Schwarzschild wrote (circa 1900): "In determinations of stellar brightness by the photographic method I have recently been able to confirm once more the existence of such deviations, and to follow them up in a quantitative way, and to express them in the following rule, which should replace the law of reciprocity: Sources of light of different intensity "I" cause the same degree of blackening under different exposures "t" if the products formula_0 are equal." Unfortunately, Schwarzschild's empirically determined "0.86" coefficient turned out to be of limited usefulness. A modern formulation of Schwarzschild's law is given as formula_1 where "E" is a measure of the "effect of the exposure" that leads to changes in the opacity of the photosensitive material (in the same degree that an equal value of exposure "H = It" does in the reciprocity region), "I" is illuminance, "t" is exposure duration and "p" is the "Schwarzschild coefficient". However, a constant value for "p" remains elusive, and has not replaced the need for more realistic models or empirical sensitometric data in critical applications. 
When reciprocity holds, Schwarzschild's law uses "p" = 1.0. Since the Schwarzschild law formula gives unreasonable values for times in the region where reciprocity holds, a modified formula has been found that fits better across a wider range of exposure times. The modification is in terms of a factor that multiplies the ISO film speed: Relative film speed formula_2 where the "t" + 1 term implies a breakpoint near 1 second separating the region where reciprocity holds from the region where it fails. Simple model for "t" &gt; 1 second. Some microscope models use automatic electronic compensation for reciprocity failure, generally of a form in which the corrected time, "Tc", is expressed as a power law of the metered time, "Tm", that is, "Tc=(Tm)p", for times in seconds. Typical values of "p" are 1.25 to 1.45, but some are as low as 1.1 and as high as 1.8. The Kron–Halm catenary equation. Kron's equation as modified by Halm states that the response of the film is a function of formula_3, with the factor defined by a catenary (hyperbolic cosine) equation accounting for reciprocity failure at both very high and very low intensities: formula_4 where "I"0 is the photographic material's optimum intensity level and "a" is a constant that characterizes the material's reciprocity failure. Quantum reciprocity-failure model. Modern models of reciprocity failure incorporate an exponential function, as opposed to power law, dependence on time or intensity at long exposure times or low intensities, based on the distribution of "interquantic times" (times between photon absorptions in a grain) and the temperature-dependent "lifetimes" of the intermediate states of the partially exposed grains. Baines and Bomback explain the "low intensity inefficiency" this way: "Electrons are released at a very low rate. They are trapped and neutralised and must remain as isolated silver atoms for much longer than in normal latent image formation. It has already been observed that such extreme sub-latent image is unstable, and it is postulated that inefficiency is caused by many isolated atoms of silver losing their acquired electrons during the period of instability." Astrophotography. Reciprocity failure is an important effect in the field of film-based astrophotography. Deep-sky objects such as galaxies and nebulae are often so faint that they are not visible to the unaided eye. To make matters worse, many objects' spectra do not line up with the film emulsion's sensitivity curves. Many of these targets are small and require long focal lengths, which can push the focal ratio far above f/5. Combined, these parameters make these targets extremely difficult to capture with film; exposures from 30 minutes to well over an hour are typical. As a typical example, capturing an image of the Andromeda Galaxy at f/4 will take about 30 minutes; to get the same density at f/8 would require an exposure of about 200 minutes. When a telescope is tracking an object, every additional minute of exposure is difficult; therefore, reciprocity failure is one of the biggest motivations for astronomers to switch to digital imaging. Electronic image sensors have their own limitation at long exposure time and low illuminance levels, not usually referred to as reciprocity failure, namely noise from dark current, but this effect can be controlled by cooling the sensor. 
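The practical effect of the simple power-law compensation model above can be sketched in a few lines. In the following Python fragment the exponent p = 1.3 is a made-up but typical value, not a calibration for any particular film; it converts a metered exposure time into a corrected one using Tc = Tm^p above the roughly one-second breakpoint.

```python
def corrected_exposure(metered_s: float, p: float = 1.3) -> float:
    """Reciprocity-failure compensation Tc = Tm**p (times in seconds).

    Below the ~1 s breakpoint reciprocity is assumed to hold, so the metered
    time is returned unchanged.
    """
    return metered_s if metered_s <= 1.0 else metered_s ** p

for tm in (0.5, 4, 60, 600):
    print(f"metered {tm:>6} s -> corrected {corrected_exposure(tm):8.1f} s")
# With p = 1.3 a metered 60 s exposure grows to roughly 205 s and a metered
# 10-minute exposure to over an hour, which is why film astrophotographers
# budget far more time than a light meter alone would suggest.
```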
Holography. A similar problem exists in holography. The total energy required when exposing holographic film using a continuous wave laser (i.e. for several seconds) is significantly less than the total energy required when exposing holographic film using a pulsed laser (i.e. around 20–40 nanoseconds) due to a reciprocity failure. It can also be caused by very long or very short exposures with a continuous wave laser. To try to offset the reduced brightness of the film due to reciprocity failure, a method called latensification can be used. This is usually done directly after the holographic exposure and using an incoherent light source (such as a 25–40 W light bulb). Exposing the holographic film to the light for a few seconds can increase the brightness of the hologram by an order of magnitude. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "I \\times t^{0.86}" }, { "math_id": 1, "text": " E = It^p \\ " }, { "math_id": 2, "text": " = (t + 1)^{(p - 1)} \\ " }, { "math_id": 3, "text": "I t / \\psi \\ " }, { "math_id": 4, "text": " \\psi = \\frac{1}{2}[(I/I_0)^a + (I/I_0)^{-a}] " } ]
https://en.wikipedia.org/wiki?curid=1046247
10462678
Freund–Rubin compactification
Freund–Rubin compactification is a form of dimensional reduction in which a field theory in "d"-dimensional spacetime, containing gravity and some field whose field strength formula_0 is a rank "s" antisymmetric tensor, 'prefers' to be reduced down to a spacetime with a dimension of either "s" or "d-s". Derivation. Consider general relativity in "d" spacetime dimensions. In the presence of an antisymmetric tensor field (without external sources), the Einstein field equations and the equations of motion for the antisymmetric tensor are formula_1 where the stress–energy tensor takes the form formula_2 Being a rank "s" antisymmetric tensor, the field strength formula_0 has a natural ansatz for its solution, proportional to the Levi-Civita tensor on some "s"-dimensional manifold: formula_3 Here, the indices formula_4 run over "s" of the dimensions of the ambient "d"-dimensional spacetime, formula_5 is the determinant of the metric of this "s"-dimensional subspace, and formula_6 is some constant with dimensions of mass-squared (in natural units). Since the field strength is nonzero only on an "s"-dimensional submanifold, the metric formula_7 is naturally separated into two parts, of block-diagonal form formula_8 with formula_9, formula_10, and formula_11 extending over the same "s" dimensions as the field strength formula_0, and formula_12, formula_13, and formula_14 covering the remaining "d-s" dimensions. Having separated our "d"-dimensional space into the product of two subspaces, Einstein's field equations allow us to solve for the curvature of these two sub-manifolds, and we find formula_15 The Ricci curvatures of the "s"- and "(d-s)"-dimensional sub-manifolds are necessarily opposite in sign: one must have positive curvature and the other negative curvature, and so one of these manifolds must be compact. Consequently, at scales significantly larger than that of the compact manifold, the universe appears to have either "s" or "(d-s)" dimensions, as opposed to the underlying "d". As an important example of this, eleven-dimensional supergravity contains a 3-form antisymmetric tensor field with a 4-form field strength, and consequently prefers to compactify 7 or 4 of its space-like dimensions, so the large-scale spacetime must be either 4 or 7 dimensional; the former is attractive from a phenomenological perspective. Perspective from string theory. Some important examples of Freund–Rubin compactification come from looking at the behavior of branes in string theory. Similar to the way that coupling to the electromagnetic field stabilizes electrically charged particles, the presence of antisymmetric tensor fields of various ranks in a string theory stabilizes branes of various dimensions. In turn, the geometry of the spacetime near stacks of branes becomes warped in such a way that Freund–Rubin compactification is realized. In Type IIB string theory, which requires ten spacetime dimensions, there is a five-form field strength formula_16 that allows for three-dimensional D-branes, and the near-horizon geometry of a stack of D3-branes is five-dimensional anti-de Sitter space times a five-dimensional sphere, formula_17, which is compact in five dimensions. This geometry is an important part of the AdS/CFT correspondence. Similarly, M-theory and its low energy limit of eleven dimensional supergravity contain a 4-form field strength that stabilizes M2 and M5 branes. The near-horizon geometries of stacks of these branes are formula_18 and formula_19, respectively. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "F" }, { "math_id": 1, "text": "\\begin{align}\nR^{\\mu \\nu}-\\frac{1}{2}R g^{\\mu \\nu} = 8 \\pi T^{\\mu \\nu}, ~~~~\\nabla_\\mu F^{\\mu \\, \\alpha_2 ... \\alpha_s} = 0\n\\end{align}" }, { "math_id": 2, "text": "\nT^{\\mu \\nu} = F_{\\alpha_1 ... \\alpha_{s-1}}~^\\mu F^{\\alpha_1 ... \\alpha_{s-1} \\nu}-\\frac{1}{2s} F_{\\alpha_1 ... \\alpha_{s}} F^{\\alpha_1 ... \\alpha_{s}}g^{\\mu \\nu}\n" }, { "math_id": 3, "text": "F^{\\mu_1 ... \\mu_{s}}=(\\epsilon^{\\mu_1 ... \\mu_{s}}/\\sqrt{g_s}) f " }, { "math_id": 4, "text": "\\mu_i" }, { "math_id": 5, "text": "g_s" }, { "math_id": 6, "text": "f" }, { "math_id": 7, "text": "g" }, { "math_id": 8, "text": "\ng_{\\mu \\nu}=\\begin{bmatrix}\n g_{ m n}(x^p) & 0 \\\\\n 0 & g_{\\bar{m} \\bar{n}}(x^{\\bar{p}})\n\\end{bmatrix}\n" }, { "math_id": 9, "text": "m" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "p" }, { "math_id": 12, "text": "\\bar{m}" }, { "math_id": 13, "text": "\\bar{n}" }, { "math_id": 14, "text": "\\bar{p}" }, { "math_id": 15, "text": "\\begin{align}\nR_{d-s} &= \\frac{(s-1)(d-s)}{(d-2)}\\lambda, ~ ~ R_{s}=-\\frac{s(d-s-1)}{(d-2)}\\lambda \\\\\n\\lambda &= 8 \\pi G \\sgn(g_s) f^2\n\\end{align}" }, { "math_id": 16, "text": "F_5" }, { "math_id": 17, "text": "AdS_5 \\times S^5" }, { "math_id": 18, "text": "AdS_4 \\times S^7" }, { "math_id": 19, "text": "AdS_7 \\times S^4" } ]
https://en.wikipedia.org/wiki?curid=10462678
10464507
Monoidal t-norm logic
In mathematical logic, monoidal t-norm based logic (or shortly MTL), the logic of left-continuous t-norms, is one of the t-norm fuzzy logics. It belongs to the broader class of substructural logics, or logics of residuated lattices; it extends the logic of commutative bounded integral residuated lattices (known as Höhle's monoidal logic, Ono's FLew, or intuitionistic logic without contraction) by the axiom of prelinearity. Motivation. In fuzzy logic, rather than regarding statements as being either true or false, we associate each statement with a numerical "confidence" in that statement. By convention the confidences range over the unit interval formula_0, where the maximal confidence formula_1 corresponds to the classical concept of true and the minimal confidence formula_2 corresponds to the classical concept of false. T-norms are binary functions on the real unit interval [0, 1], which in fuzzy logic are often used to represent a conjunction connective; if formula_3 are the confidences we ascribe to the statements formula_4 and formula_5 respectively, then one uses a t-norm formula_6 to calculate the confidence formula_7 ascribed to the compound statement ‘formula_4 and formula_5’. A t-norm formula_6 has to satisfy the properties of commutativity formula_8, associativity formula_9, monotonicity — if formula_10 and formula_11 then formula_12, and having 1 as identity element formula_13. Notably absent from this list is the property of "idempotence" formula_14; the closest one gets is that formula_15. It may seem strange to be less confident in ‘formula_4 and formula_4’ than in just formula_4, but we generally do want to allow for letting the confidence formula_7 in a combined ‘formula_4 and formula_5’ be less than both the confidence formula_16 in formula_4 and the confidence formula_17 in formula_5, and then the ordering formula_18 by monotonicity requires formula_19. Another way of putting it is that the t-norm can only take into account the confidences as numbers, not the reasons that may be behind ascribing those confidences; thus it cannot treat ‘formula_4 and formula_4’ differently from ‘formula_4 and formula_5, where we are equally confident in both’. Because the symbol formula_20 via its use in lattice theory is very closely associated with the idempotence property, it can be useful to switch to a different symbol for conjunction that is not necessarily idempotent. In the fuzzy logic tradition one sometimes uses formula_21 for this "strong" conjunction, but this article follows the substructural logic tradition of using formula_22 for the strong conjunction; thus formula_7 is the confidence we ascribe to the statement formula_23 (still read ‘formula_4 and formula_5’, perhaps with ‘strong’ or ‘multiplicative’ as qualification of the ‘and’). Having formalised conjunction formula_22, one wishes to continue with the other connectives. One approach to doing that is to introduce negation as an order-reversing map formula_24, then defining remaining connectives using De Morgan's laws, material implication, and the like. A problem with doing so is that the resulting logics may have undesirable properties: they may be too close to classical logic, or if not conversely not support expected inference rules. 
An alternative that makes the consequences of different choices more predictable is to instead continue with implication formula_25 as the second connective: this is overall the most common connective in axiomatisations of logic, and it has closer ties to the deductive aspects of logic than most other connectives. A confidence counterpart formula_26 of the implication connective may in fact be defined directly as the residuum of the t-norm. The logical link between conjunction and implication is provided by something as fundamental as the inference rule modus ponens formula_27: from formula_4 and formula_28 follows formula_5. In the fuzzy logic case that is more rigorously written as formula_29, because this makes explicit that our confidence for the premise(s) here is that in formula_30, not those in formula_4 and formula_28 separately. So if formula_16 and formula_17 are our confidences in formula_4 and formula_5 respectively, then formula_31 is the sought confidence in formula_28, and formula_32 is the combined confidence in formula_30. We require that formula_33 since our confidence formula_17 for formula_5 should not be less than our confidence formula_32 in the statement formula_30 from which formula_5 logically follows. This bounds the sought confidence formula_31, and one approach for turning formula_26 into a binary operation like formula_6 would be to make it as large as possible while respecting this bound: formula_34. Taking formula_35 gives formula_36, so the supremum is always of a nonempty bounded set and thus well-defined. For a general t-norm there remains the possibility that formula_37 has a jump discontinuity at formula_38, in which case formula_39 could come out strictly larger than formula_17 even though formula_40 is defined as the least upper bound of formula_41s satisfying formula_42; to prevent that and have the construction work as expected, we require that the t-norm formula_6 is left-continuous. The residuum of a left-continuous t-norm thus can be characterized as the weakest function that makes the fuzzy modus ponens valid, which makes it a suitable truth function for implication in fuzzy logic. More algebraically, we say that an operation formula_26 is a residuum of a t-norm formula_6 if for all formula_16, formula_17, and formula_43 it satisfies formula_44 if and only if formula_45. This equivalence of numerical comparisons mirrors the equivalence of entailments formula_46 if and only if formula_47 that exists because any proof of formula_48 from the premise formula_23 can be converted into a proof of formula_49 from the premise formula_4 by doing an extra implication introduction step, and conversely any proof of formula_49 from the premise formula_4 can be converted into a proof of formula_48 from the premise formula_23 by doing an extra implication elimination step. Left-continuity of the t-norm is the necessary and sufficient condition for this relationship between a t-norm conjunction and its residual implication to hold. Truth functions of further propositional connectives can be defined by means of the t-norm and its residuum, for instance the residual negation formula_50 In this way, the left-continuous t-norm, its residuum, and the truth functions of additional propositional connectives (see the section "Standard semantics" below) determine the truth values of complex propositional formulae in [0, 1]. 
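A concrete left-continuous (indeed continuous) example is the Łukasiewicz t-norm. The Python sketch below is only an illustration: it implements the t-norm and its residuum and spot-checks, on a grid of truth values, the commutativity and identity axioms, the residuation equivalence formula_44 if and only if formula_45, and the fuzzy modus ponens inequality formula_33.

```python
import itertools

def luk_tnorm(a, b):
    """Lukasiewicz t-norm: max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def luk_residuum(a, b):
    """Residuum of the Lukasiewicz t-norm: min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

eps = 1e-12                                  # tolerance for floating-point noise
grid = [i / 20 for i in range(21)]
for a, b, c in itertools.product(grid, repeat=3):
    assert luk_tnorm(a, b) == luk_tnorm(b, a)             # commutativity
    assert abs(luk_tnorm(a, 1.0) - a) < eps               # 1 is the identity
    # residuation: a*b <= c  iff  a <= (b => c)
    assert (luk_tnorm(a, b) <= c + eps) == (a <= luk_residuum(b, c) + eps)
    # fuzzy modus ponens: a * (a => b) <= b
    assert luk_tnorm(a, luk_residuum(a, b)) <= b + eps
```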
Formulae that always evaluate to 1 are then called "tautologies" with respect to the given left-continuous t-norm formula_51 or "formula_52tautologies." The set of all formula_52tautologies is called the "logic" of the t-norm formula_51 since these formulae represent the laws of fuzzy logic (determined by the t-norm) that hold (to degree 1) regardless of the truth degrees of atomic formulae. Some formulae are tautologies with respect to "all" left-continuous t-norms: they represent general laws of propositional fuzzy logic that are independent of the choice of a particular left-continuous t-norm. These formulae form the logic MTL, which can thus be characterized as the "logic of left-continuous t-norms." Syntax. Language. The language of the propositional logic MTL consists of countably many propositional variables and the following primitive logical connectives: implication formula_53 (binary), strong conjunction formula_22 (binary; the sign formula_21 is a more traditional notation in the literature on fuzzy logic, while formula_22 follows the tradition of substructural logics), weak conjunction formula_20 (binary, also called lattice conjunction), and bottom formula_54 (nullary, a propositional constant; formula_55 is a common alternative notation and "zero" a common alternative name). The following are the most common defined logical connectives: negation formula_56, defined as formula_57; equivalence formula_58, defined as formula_59 (in MTL, this definition is equivalent to formula_60); (weak) disjunction formula_61, defined as formula_62; and top formula_63 (also denoted formula_64), defined as formula_65 Well-formed formulae of MTL are defined as usual in propositional logics. In order to save parentheses, it is common to use the following order of precedence: unary connectives bind most closely, then binary connectives other than implication and equivalence, while implication and equivalence bind most loosely. Axioms. A Hilbert-style deduction system for MTL has been introduced by Esteva and Godo (2001). Its single derivation rule is modus ponens: from formula_4 and formula_66 derive formula_67 The following are its axiom schemata: formula_68 The traditional numbering of axioms, given in the left column, is derived from the numbering of axioms of Hájek's basic fuzzy logic BL. The axioms (MTL4a)–(MTL4c) replace the axiom of "divisibility" (BL4) of BL. The axioms (MTL5a) and (MTL5b) express the law of residuation and the axiom (MTL6) corresponds to the condition of prelinearity. The axioms (MTL2) and (MTL3) of the original axiomatic system were shown to be redundant (Chvalovský, 2012) and (Cintula, 2005). All the other axioms were shown to be independent (Chvalovský, 2012). Semantics. As in other propositional t-norm fuzzy logics, algebraic semantics is predominantly used for MTL, with three main classes of algebras with respect to which the logic is complete: the general class of all MTL-algebras, linearly ordered MTL-algebras, and standard MTL-algebras on the real unit interval [0, 1]. General semantics. MTL-algebras. Algebras for which the logic MTL is sound are called "MTL-algebras." They can be characterized as "prelinear commutative bounded integral residuated lattices." In more detail, an algebraic structure formula_69 is an MTL-algebra if formula_70 is a bounded lattice with top element 1 and bottom element 0, formula_71 is a commutative monoid, formula_72 and its residuum form an adjoint pair, that is, formula_73 if and only if formula_74 where formula_75 is the lattice order of formula_76 for all "x", "y", and "z" in formula_77, and the condition of prelinearity holds: formula_78 for all "x" and "y" in formula_77. Important examples of MTL algebras are "standard" MTL-algebras on the real unit interval [0, 1]. Further examples include all Boolean algebras, all linear Heyting algebras (both with formula_79), all MV-algebras, all BL-algebras, etc. Since the residuation condition can equivalently be expressed by identities, MTL-algebras form a variety. Interpretation of the logic MTL in MTL-algebras. The connectives of MTL are interpreted in MTL-algebras as follows: strong conjunction by the monoidal operation formula_72, implication by its residuum, weak conjunction and weak disjunction by the lattice meet and join operations, the truth constants by the elements 0 and 1, and the equivalence connective by the operation formula_81 defined as formula_82 Due to the prelinearity condition, this definition is equivalent to one that uses formula_72 instead of formula_83 thus formula_84 The residual negation is interpreted by the definable operation formula_85. With this interpretation of connectives, any evaluation "e"v of propositional variables in "L" uniquely extends to an evaluation "e" of all well-formed formulae of MTL, by the following inductive definition (which generalizes Tarski's truth conditions), for any formulae "A", "B", and any propositional variable "p": formula_86 Informally, the truth value 1 represents full truth and the truth value 0 represents full falsity; intermediate truth values represent intermediate degrees of truth. 
Thus a formula is considered fully true under an evaluation "e" if "e"("A") = 1. A formula "A" is said to be "valid" in an MTL-algebra "L" if it is fully true under all evaluations in "L", that is, if "e"("A") = 1 for all evaluations "e" in "L". Some formulae (for instance, "p" → "p") are valid in any MTL-algebra; these are called "tautologies" of MTL. The notion of global entailment (or: global consequence) is defined for MTL as follows: a set of formulae Γ entails a formula "A" (or: "A" is a global consequence of Γ), in symbols formula_87 if for any evaluation "e" in any MTL-algebra, whenever "e"("B") = 1 for all formulae "B" in Γ, then also "e"("A") = 1. Informally, the global consequence relation represents the transmission of full truth in any MTL-algebra of truth values. General soundness and completeness theorems. The logic MTL is sound and complete with respect to the class of all MTL-algebras (Esteva &amp; Godo, 2001): A formula is provable in MTL if and only if it is valid in all MTL-algebras. The notion of MTL-algebra is in fact so defined that MTL-algebras form the class of "all" algebras for which the logic MTL is sound. Furthermore, the "strong completeness theorem" holds: A formula "A" is a global consequence in MTL of a set of formulae Γ if and only if "A" is derivable from Γ in MTL. Linear semantics. Like algebras for other fuzzy logics, MTL-algebras enjoy the following "linear subdirect decomposition property": Every MTL-algebra is a subdirect product of linearly ordered MTL-algebras. In consequence of the linear subdirect decomposition property of all MTL-algebras, the "completeness theorem with respect to linear MTL-algebras" (Esteva &amp; Godo, 2001) holds: A formula is provable in MTL if and only if it is valid in all linearly ordered MTL-algebras. Standard semantics. Those MTL-algebras whose lattice reduct is the real unit interval [0, 1] are called "standard". They are uniquely determined by the real-valued function that interprets strong conjunction, which can be any left-continuous t-norm formula_72. The standard MTL-algebra determined by a left-continuous t-norm formula_72 is usually denoted by formula_88 In formula_89 implication is represented by the residuum of formula_90 weak conjunction and disjunction respectively by the minimum and maximum, and the truth constants zero and one respectively by the real numbers 0 and 1. The logic MTL is complete with respect to standard MTL-algebras; this fact is expressed by the "standard completeness theorem" (Jenei &amp; Montagna, 2002): A formula is provable in MTL if and only if it is valid in all standard MTL-algebras. Since MTL is complete with respect to standard MTL-algebras, which are determined by left-continuous t-norms, MTL is often referred to as the "logic of left-continuous t-norms" (just as BL is the logic of continuous t-norms).
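The standard semantics can be made concrete with another small, purely illustrative Python sketch, this time for the standard MTL-algebra of the Gödel (minimum) t-norm, which is left-continuous: it evaluates the residuum and the residual negation and spot-checks the prelinearity condition and the soundness of modus ponens on a grid of truth values.

```python
import itertools

def tnorm(x, y):                 # Goedel (minimum) t-norm, left-continuous
    return min(x, y)

def residuum(x, y):              # its residuum: 1 if x <= y, otherwise y
    return 1.0 if x <= y else y

def neg(x):                      # residual negation: x => 0
    return residuum(x, 0.0)

grid = [i / 10 for i in range(11)]
for x, y in itertools.product(grid, repeat=2):
    assert max(residuum(x, y), residuum(y, x)) == 1.0   # prelinearity
    assert tnorm(x, residuum(x, y)) <= y                # modus ponens is sound

print(neg(0.0), neg(0.3))        # 1.0 and 0.0: the Goedel negation is two-valued
```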
[ { "math_id": 0, "text": "[0,1]" }, { "math_id": 1, "text": "1" }, { "math_id": 2, "text": "0" }, { "math_id": 3, "text": "a,b \\in [0,1]" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "B" }, { "math_id": 6, "text": "*" }, { "math_id": 7, "text": "a*b" }, { "math_id": 8, "text": " a*b = b*a " }, { "math_id": 9, "text": " (a*b)*c = a*(b*c) " }, { "math_id": 10, "text": " a \\leqslant b " }, { "math_id": 11, "text": " c \\leqslant d " }, { "math_id": 12, "text": " a*c \\leqslant b*d " }, { "math_id": 13, "text": " 1*a = a " }, { "math_id": 14, "text": " a*a = a " }, { "math_id": 15, "text": " a*a \\leqslant 1*a = a " }, { "math_id": 16, "text": "a" }, { "math_id": 17, "text": "b" }, { "math_id": 18, "text": " a*b < a \\leqslant b " }, { "math_id": 19, "text": " a*a \\leqslant a*b < a " }, { "math_id": 20, "text": "\\wedge" }, { "math_id": 21, "text": "\\&" }, { "math_id": 22, "text": "\\otimes" }, { "math_id": 23, "text": "A \\otimes B" }, { "math_id": 24, "text": "[0,1] \\longrightarrow [0,1]" }, { "math_id": 25, "text": "\\to" }, { "math_id": 26, "text": "\\Rightarrow" }, { "math_id": 27, "text": "A, A \\to B \\vdash B" }, { "math_id": 28, "text": "A \\to B" }, { "math_id": 29, "text": "A \\otimes (A \\to B) \\vdash B" }, { "math_id": 30, "text": "A \\otimes (A \\to B)" }, { "math_id": 31, "text": "a \\Rightarrow b" }, { "math_id": 32, "text": " a * (a \\Rightarrow b) " }, { "math_id": 33, "text": " a * (a \\mathbin{\\Rightarrow} b) \\leqslant b " }, { "math_id": 34, "text": " a \\mathbin{\\Rightarrow} b \\equiv \\sup \\left\\{ x \\in [0,1] \\;\\big|\\; a*x \\leqslant b \\right\\} " }, { "math_id": 35, "text": "x=0" }, { "math_id": 36, "text": " a*x = a*0 \\leqslant 1*0 = 0 \\leqslant b " }, { "math_id": 37, "text": " f_a(x) = a*x " }, { "math_id": 38, "text": " x = a \\mathbin{\\Rightarrow} b " }, { "math_id": 39, "text": " a * (a \\mathbin{\\Rightarrow} b) " }, { "math_id": 40, "text": " a \\mathbin{\\Rightarrow} b " }, { "math_id": 41, "text": "x" }, { "math_id": 42, "text": " a*x \\leqslant b " }, { "math_id": 43, "text": "c" }, { "math_id": 44, "text": "a*b\\le c" }, { "math_id": 45, "text": "a\\le (b \\mathbin{\\Rightarrow} c)" }, { "math_id": 46, "text": " A \\otimes B \\vdash C " }, { "math_id": 47, "text": " A \\vdash B \\to C " }, { "math_id": 48, "text": "C" }, { "math_id": 49, "text": "B \\to C" }, { "math_id": 50, "text": "\\neg x=(x\\mathbin{\\Rightarrow} 0)." }, { "math_id": 51, "text": "*," }, { "math_id": 52, "text": "*\\mbox{-}" }, { "math_id": 53, "text": "\\rightarrow" }, { "math_id": 54, "text": "\\bot" }, { "math_id": 55, "text": "\\overline{0}" }, { "math_id": 56, "text": "\\neg" }, { "math_id": 57, "text": "\\neg A \\equiv A \\rightarrow \\bot" }, { "math_id": 58, "text": "\\leftrightarrow" }, { "math_id": 59, "text": "A \\leftrightarrow B \\equiv (A \\rightarrow B) \\wedge (B \\rightarrow A)" }, { "math_id": 60, "text": "(A \\rightarrow B) \\otimes (B \\rightarrow A)." }, { "math_id": 61, "text": "\\vee" }, { "math_id": 62, "text": "A \\vee B \\equiv ((A \\rightarrow B) \\rightarrow B) \\wedge ((B \\rightarrow A) \\rightarrow A)" }, { "math_id": 63, "text": "\\top" }, { "math_id": 64, "text": "\\overline{1}" }, { "math_id": 65, "text": "\\top \\equiv \\bot \\rightarrow \\bot" }, { "math_id": 66, "text": "A \\rightarrow B" }, { "math_id": 67, "text": "B." 
}, { "math_id": 68, "text": "\\begin{array}{ll}\n {\\rm (MTL1)}\\colon & (A \\rightarrow B) \\rightarrow ((B \\rightarrow C) \\rightarrow (A \\rightarrow C)) \\\\\n {\\rm (MTL2)}\\colon & A \\otimes B \\rightarrow A\\\\\n {\\rm (MTL3)}\\colon & A \\otimes B \\rightarrow B \\otimes A\\\\\n {\\rm (MTL4a)}\\colon & A \\wedge B \\rightarrow A\\\\\n {\\rm (MTL4b)}\\colon & A \\wedge B \\rightarrow B \\wedge A\\\\\n {\\rm (MTL4c)}\\colon & A \\otimes (A \\rightarrow B) \\rightarrow A \\wedge B\\\\\n {\\rm (MTL5a)}\\colon & (A \\rightarrow (B \\rightarrow C)) \\rightarrow (A \\otimes B \\rightarrow C)\\\\\n {\\rm (MTL5b)}\\colon & (A \\otimes B \\rightarrow C) \\rightarrow (A \\rightarrow (B \\rightarrow C))\\\\\n {\\rm (MTL6)}\\colon & ((A \\rightarrow B) \\rightarrow C) \\rightarrow (((B \\rightarrow A) \\rightarrow C) \\rightarrow C)\\\\\n {\\rm (MTL7)}\\colon & \\bot \\rightarrow A\n\\end{array}" }, { "math_id": 69, "text": "(L,\\wedge,\\vee,\\ast,\\Rightarrow,0,1)" }, { "math_id": 70, "text": "(L,\\wedge,\\vee,0,1)" }, { "math_id": 71, "text": "(L,\\ast,1)" }, { "math_id": 72, "text": "\\ast" }, { "math_id": 73, "text": "z*x\\le y" }, { "math_id": 74, "text": "z\\le x\\Rightarrow y," }, { "math_id": 75, "text": "\\le" }, { "math_id": 76, "text": "(L,\\wedge,\\vee)," }, { "math_id": 77, "text": "L" }, { "math_id": 78, "text": "(x\\Rightarrow y)\\vee(y\\Rightarrow x)=1" }, { "math_id": 79, "text": "\\ast=\\wedge" }, { "math_id": 80, "text": "\\vee," }, { "math_id": 81, "text": "\\Leftrightarrow" }, { "math_id": 82, "text": "x\\Leftrightarrow y \\equiv (x\\Rightarrow y)\\wedge(y\\Rightarrow x)" }, { "math_id": 83, "text": "\\wedge," }, { "math_id": 84, "text": "x\\Leftrightarrow y \\equiv (x\\Rightarrow y)\\ast(y\\Rightarrow x)" }, { "math_id": 85, "text": "-x \\equiv x\\Rightarrow 0" }, { "math_id": 86, "text": "\\begin{array}{rcl}\n e(p) &=& e_{\\mathrm v}(p)\n\\\\ e(\\bot) &=& 0\n\\\\ e(\\top) &=& 1\n\\\\ e(A\\otimes B) &=& e(A) \\ast e(B)\n\\\\ e(A\\rightarrow B) &=& e(A) \\Rightarrow e(B)\n\\\\ e(A\\wedge B) &=& e(A) \\wedge e(B)\n\\\\ e(A\\vee B) &=& e(A) \\vee e(B)\n\\\\ e(A\\leftrightarrow B) &=& e(A) \\Leftrightarrow e(B)\n\\\\ e(\\neg A) &=& e(A) \\Rightarrow 0\n\\end{array}" }, { "math_id": 87, "text": "\\Gamma\\models A," }, { "math_id": 88, "text": "[0,1]_{\\ast}." }, { "math_id": 89, "text": "[0,1]_{\\ast}," }, { "math_id": 90, "text": "\\ast," } ]
https://en.wikipedia.org/wiki?curid=10464507
104646
Gravitational binding energy
Minimum energy to remove a system from a gravitationally bound state The gravitational binding energy of a system is the minimum energy that must be added to it in order for the system to cease being in a gravitationally bound state. A gravitationally bound system has a lower ("i.e.", more negative) gravitational potential energy than the sum of the energies of its parts when these are completely separated—this is what keeps the system in accordance with the minimum total potential energy principle. The gravitational binding energy can be conceptually different within the theories of Newtonian gravity and Albert Einstein's theory of gravity called General Relativity. In Newtonian gravity, the binding energy can be considered to be the linear sum of the interactions between all pairs of microscopic components of the system, while in General Relativity, this is only approximately true if the gravitational fields are all weak. When stronger fields are present within a system, the binding energy is a nonlinear property of the entire system, and it cannot be conceptually attributed among the elements of the system. In this case the binding energy can be considered to be the (negative) difference between the ADM mass of the system, as it is manifest in its gravitational interaction with other distant systems, and the sum of the energies of all the atoms and other elementary particles of the system if disassembled. For a spherical body of uniform density, the gravitational binding energy "U" is given in Newtonian gravity by the formula formula_0 where "G" is the gravitational constant, "M" is the mass of the sphere, and "R" is its radius. Assuming that the Earth is a sphere of uniform density (which it is not, but is close enough to get an order-of-magnitude estimate) with "M" = 5.97 × 10^24 kg and "r" = 6.37 × 10^6 m, then "U" = 2.24 × 10^32 J. This is roughly equal to one week of the Sun's total energy output. It is 37.5 MJ/kg, 60% of the absolute value of the potential energy per kilogram at the surface. The actual depth-dependence of density, inferred from seismic travel times (see Adams–Williamson equation), is given in the Preliminary Reference Earth Model (PREM). Using this, the real gravitational binding energy of Earth can be calculated numerically as "U" = 2.49 × 10^32 J. According to the virial theorem, the gravitational binding energy of a star is about two times its internal thermal energy in order for hydrostatic equilibrium to be maintained. As the gas in a star becomes more relativistic, the gravitational binding energy required for hydrostatic equilibrium approaches zero and the star becomes unstable (highly sensitive to perturbations), which may lead to a supernova in the case of a high-mass star due to strong radiation pressure or to a black hole in the case of a neutron star. Derivation within Newtonian gravity for a uniform sphere. The gravitational binding energy of a sphere with radius formula_1 is found by imagining that it is pulled apart by successively moving spherical shells to infinity, the outermost first, and finding the total energy needed for that. 
Assuming a constant density formula_2, the masses of a shell and the sphere inside it are: formula_3 and formula_4 The required energy for a shell is the negative of the gravitational potential energy: formula_5 Integrating over all shells yields: formula_6 Since formula_2 is simply equal to the mass of the whole divided by its volume for objects with uniform density, formula_7 And finally, plugging this into our result leads to formula_8 Gravitational binding energy: formula_9 Negative mass component. Two bodies, placed at a distance "R" from each other and not moving relative to one another, exert a slightly smaller gravitational force on a distant third body when "R" is small. This can be seen as a negative mass component of the system, equal, for uniformly spherical solutions, to: formula_10 For example, the fact that Earth is a gravitationally-bound sphere of its current size "costs" about 2.49 × 10^15 kg of mass (roughly one fourth the mass of Phobos – see above for the same value in Joules), and if its atoms were sparse over an arbitrarily large volume the Earth would weigh its current mass plus 2.49 × 10^15 kilograms (and its gravitational pull on a third body would be accordingly stronger). It can be easily demonstrated that this negative component can never exceed the positive component of a system. A negative binding energy greater than the mass of the system itself would indeed require that the radius of the system be smaller than: formula_11 which is smaller than formula_12 of its Schwarzschild radius: formula_13 and therefore never visible to an external observer. However this is only a Newtonian approximation and in relativistic conditions other factors must be taken into account as well. Non-uniform spheres. Planets and stars have radial density gradients from their lower density surfaces to their much denser compressed cores. Degenerate matter objects (white dwarfs; neutron star pulsars) have radial density gradients plus relativistic corrections. Neutron star relativistic equations of state include a graph of radius vs. mass for various models. The most likely radii for a given neutron star mass are bracketed by models AP4 (smallest radius) and MS2 (largest radius). BE is the ratio of the gravitational binding energy mass equivalent to the observed neutron star gravitational mass "M" with radius "R", formula_14 formula_15 Given current values formula_16, formula_17, and formula_18, and the star mass "M" expressed relative to the solar mass, formula_19 then the relativistic fractional binding energy of a neutron star is formula_20
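The headline numbers above are easy to reproduce. The following Python sketch evaluates the magnitude of the uniform-density estimate formula_0 for the Earth and the relativistic fractional binding energy of a neutron star; the 1.4-solar-mass, 12 km star used here is only an illustrative choice, not a measured object.

```python
G = 6.6743e-11        # gravitational constant, m^3 kg^-1 s^-2
c2 = 8.98755e16       # speed of light squared, m^2 s^-2
M_sun = 1.98844e30    # solar mass, kg

# Magnitude of the uniform-density binding energy, 3 G M^2 / (5 R), for Earth.
M_earth, R_earth = 5.97e24, 6.37e6
U = 3 * G * M_earth**2 / (5 * R_earth)
print(f"|U_earth| ~ {U:.2e} J, mass equivalent ~ {U / c2:.2e} kg")
# ~2.24e32 J, i.e. ~2.5e15 kg -- the negative mass component quoted above.

# Neutron star fractional binding energy: BE = 0.60*beta / (1 - beta/2),
# with beta = G M / (R c^2).  Illustrative star: 1.4 solar masses, R = 12 km.
M_ns, R_ns = 1.4 * M_sun, 12e3
beta = G * M_ns / (R_ns * c2)
BE = 0.60 * beta / (1 - beta / 2)
print(f"beta ~ {beta:.3f}, BE ~ {BE:.3f}")   # roughly 0.17 and 0.11
```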
[ { "math_id": 0, "text": "U = -\\frac{3GM^2}{5R}" }, { "math_id": 1, "text": "R" }, { "math_id": 2, "text": "\\rho" }, { "math_id": 3, "text": "m_\\mathrm{shell} = 4\\pi r^{2}\\rho\\,dr" }, { "math_id": 4, "text": "m_\\mathrm{interior} = \\frac{4}{3}\\pi r^3 \\rho" }, { "math_id": 5, "text": "dU = -G\\frac{m_\\mathrm{shell} m_\\mathrm{interior}}{r}" }, { "math_id": 6, "text": "U = -G\\int_0^R {\\frac{\\left(4\\pi r^2\\rho\\right)\\left(\\tfrac{4}{3}\\pi r^{3}\\rho\\right)}{r}} dr = -G{\\frac{16}{3}}\\pi^2 \\rho^2 \\int_0^R {r^4} dr = -G{\\frac{16}{15}}{\\pi}^2{\\rho}^2 R^5" }, { "math_id": 7, "text": "\\rho=\\frac{M}{\\frac{4}{3}\\pi R^3}" }, { "math_id": 8, "text": "U = -G\\frac{16}{15} \\pi^2 R^5 \\left(\\frac{M}{\\frac{4}{3}\\pi R^3}\\right)^2 = -\\frac{3GM^2}{5R}" }, { "math_id": 9, "text": "U = -\\frac{3GM^2}{5R}" }, { "math_id": 10, "text": "M_\\mathrm{binding}=-\\frac{3GM^2}{5Rc^2}" }, { "math_id": 11, "text": "R\\leq\\frac{3GM}{5c^2}" }, { "math_id": 12, "text": "\\frac{3}{10}" }, { "math_id": 13, "text": "R\\leq\\frac{3}{10} r_\\mathrm{s}" }, { "math_id": 14, "text": "BE = \\frac{0.60\\,\\beta}{1 - \\frac{\\beta}{2}}" }, { "math_id": 15, "text": "\\beta = \\frac{G M}{R c^2} ." }, { "math_id": 16, "text": "G = 6.6743\\times10^{-11}\\, \\mathrm{m^3 \\cdot kg^{-1} \\cdot s^{-2}}" }, { "math_id": 17, "text": "c^2 = 8.98755\\times10^{16}\\, \\mathrm{m^2 \\cdot s^{-2}}" }, { "math_id": 18, "text": "M_\\odot = 1.98844\\times10^{30}\\, \\mathrm{kg}" }, { "math_id": 19, "text": "M_x = \\frac{M}{M_\\odot} ," }, { "math_id": 20, "text": "BE = \\frac{885.975\\,M_x}{R - 738.313\\,M_x}" } ]
https://en.wikipedia.org/wiki?curid=104646
10465001
Eigenvalue perturbation
In mathematics, an eigenvalue perturbation problem is that of finding the eigenvectors and eigenvalues of a system formula_0 that is perturbed from one with known eigenvectors and eigenvalues formula_1. This is useful for studying how sensitive the original system's eigenvectors and eigenvalues formula_2 are to changes in the system. This type of analysis was popularized by Lord Rayleigh, in his investigation of harmonic vibrations of a string perturbed by small inhomogeneities. The derivations in this article are essentially self-contained and can be found in many texts on numerical linear algebra or numerical functional analysis. This article is focused on the case of the perturbation of a simple eigenvalue (see in multiplicity of eigenvalues). Why generalized eigenvalues? In the entry applications of eigenvalues and eigenvectors we find numerous scientific fields in which eigenvalues are used to obtain solutions. Generalized eigenvalue problems are less widespread but are a key in the study of vibrations. They are useful when we use the Galerkin method or Rayleigh-Ritz method to find approximate solutions of partial differential equations modeling vibrations of structures such as strings and plates; the paper of Courant (1943) is fundamental. The Finite element method is a widespread particular case. In classical mechanics, we may find generalized eigenvalues when we look for vibrations of multiple degrees of freedom systems close to equilibrium; the kinetic energy provides the mass matrix formula_3, the potential strain energy provides the rigidity matrix formula_4. To get details, for example see the first section of this article of Weinstein (1941, in French) With both methods, we obtain a system of differential equations or Matrix differential equation formula_5 with the mass matrix formula_3 , the damping matrix formula_6 and the rigidity matrix formula_4. If we neglect the damping effect, we use formula_7, we can look for a solution of the following form formula_8; we obtain that formula_9 and formula_10are solution of the generalized eigenvalue problem formula_11 Setting of perturbation for a generalized eigenvalue problem. Suppose we have solutions to the generalized eigenvalue problem, formula_12 where formula_13 and formula_14 are matrices. That is, we know the eigenvalues "λ"0"i" and eigenvectors x0"i" for "i" 1, ..., "N". It is also required that "the eigenvalues are distinct." Now suppose we want to change the matrices by a small amount. That is, we want to find the eigenvalues and eigenvectors of formula_15 where formula_16 with the perturbations formula_17 and formula_18 much smaller than formula_19 and formula_20 respectively. Then we expect the new eigenvalues and eigenvectors to be similar to the original, plus small perturbations: formula_21 Steps. We assume that the matrices are symmetric and positive definite, and assume we have scaled the eigenvectors such that formula_22formula_23 where "δij" is the Kronecker delta. Now we want to solve the equation formula_24 In this article we restrict the study to first order perturbation. First order expansion of the equation. Substituting in (1), we get formula_25 which expands to formula_26 Canceling from (0) (formula_27) leaves formula_28 Removing the higher-order terms, this simplifies to formula_29 In other words, formula_30 no longer denotes the exact variation of the eigenvalue but its first order approximation. 
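As a purely numerical illustration of this perturbation setting (the matrices below are random and made up for the example), the following Python sketch builds a small symmetric positive definite pencil, applies small symmetric perturbations, and compares the exact eigenvalues of the perturbed problem with the standard first-order estimates computed from the unperturbed eigenpairs; SciPy's generalized symmetric eigensolver returns eigenvectors scaled so that x' M0 x = 1, matching the normalization assumed in the derivation.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 5

# Made-up symmetric positive definite "stiffness" and "mass" matrices.
A = rng.standard_normal((n, n)); K0 = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); M0 = B @ B.T + n * np.eye(n)

# Small symmetric perturbations.
S = rng.standard_normal((n, n)); dK = 1e-3 * (S + S.T)
S = rng.standard_normal((n, n)); dM = 1e-3 * (S + S.T)

# Unperturbed generalized eigenpairs; eigh normalizes x0i so that x0i' M0 x0i = 1.
lam0, X0 = eigh(K0, M0)

# First-order estimates: lam_i ~ lam0_i + x0i' (dK - lam0_i dM) x0i.
dlam = np.array([X0[:, i] @ (dK - lam0[i] * dM) @ X0[:, i] for i in range(n)])

lam_exact = eigh(K0 + dK, M0 + dM, eigvals_only=True)
print(np.max(np.abs(lam_exact - lam0)))           # size of the eigenvalue shifts
print(np.max(np.abs(lam_exact - (lam0 + dlam))))  # first-order error, O(perturbation^2)
```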
As the matrix is symmetric, the unperturbed eigenvectors are formula_3 orthogonal and so we use them as a basis for the perturbed eigenvectors. That is, we want to construct formula_31 with formula_32, where the "εij" are small constants that are to be determined. In the same way, substituting in (2), and removing higher order terms, we get formula_33 The derivation can go on with two forks. We start with (3)formula_34 First fork: get first eigenvalue perturbation. Eigenvalue perturbation. we left multiply with formula_35 and use (2) as well as its first order variation (5); we get formula_36 or formula_37 We notice that it is the first order perturbation of the generalized Rayleigh quotient with fixed formula_38: formula_39 Moreover, for formula_40, the formula formula_41 should be compared with Bauer-Fike theorem which provides a bound for eigenvalue perturbation. Eigenvector perturbation. We left multiply (3) with formula_42 for formula_43 and get formula_44 We use formula_45 for formula_43. formula_46 or formula_47 As the eigenvalues are assumed to be simple, for formula_43 formula_48 Moreover (5) (the first order variation of (2) ) yields formula_49 We have obtained all the components of formula_50 . Second fork: Straightforward manipulations. Substituting (4) into (3) and rearranging gives formula_51 Because the eigenvectors are M0-orthogonal when M0 is positive definite, we can remove the summations by left-multiplying by formula_52: formula_53 By use of equation (1) again: formula_54 The two terms containing "εii" are equal because left-multiplying (1) by formula_52 gives formula_55 Canceling those terms in (6) leaves formula_56 Rearranging gives formula_57 But by (2), this denominator is equal to 1. Thus formula_58 Then, as formula_59 for formula_60 (assumption simple eigenvalues) by left-multiplying equation (5) by formula_61: formula_62 Or by changing the name of the indices: formula_63 To find "εii", use the fact that: formula_64 implies: formula_65 Summary of the first order perturbation result. In the case where all the matrices are Hermitian positive definite and all the eigenvalues are distinct, formula_66 for infinitesimal formula_17 and formula_18 (the higher order terms in (3) being neglected). So far, we have not proved that these higher order terms may be neglected. This point may be derived using the implicit function theorem; in next section, we summarize the use of this theorem in order to obtain a first order expansion. Theoretical derivation. Perturbation of an implicit function.. In the next paragraph, we shall use the Implicit function theorem (Statement of the theorem ); we notice that for a continuously differentiable function formula_67, with an invertible Jacobian matrix formula_68, from a point formula_69 solution of formula_70, we get solutions of formula_71 with formula_72 close to formula_73 in the form formula_74 where formula_75 is a continuously differentiable function ; moreover the Jacobian marix of formula_76 is provided by the linear system formula_77. As soon as the hypothesis of the theorem is satisfied, the Jacobian matrix of formula_76 may be computed with a first order expansion of formula_78, we get formula_79; as formula_80, it is equivalent to equation formula_81. Eigenvalue perturbation: a theoretical basis.. We use the previous paragraph (Perturbation of an implicit function) with somewhat different notations suited to eigenvalue perturbation; we introduce formula_82, with formula_84. 
In order to use the Implicit function theorem, we study the invertibility of the Jacobian formula_85 with formula_86. Indeed, the solution of formula_87formula_88 may be derived with computations similar to the derivation of the expansion. formula_89 formula_90 When formula_91 is a simple eigenvalue, as the eigenvectors formula_92 form an orthonormal basis, for any right-hand side, we have obtained one solution therefore, the Jacobian is invertible. The implicit function theorem provides a continuously differentiable function formula_93 hence the expansion with little o notation: formula_94 formula_95. with formula_96 formula_97formula_98 This is the first order expansion of the perturbed eigenvalues and eigenvectors. which is proved. Results of sensitivity analysis with respect to the entries of the matrices. The results. This means it is possible to efficiently do a sensitivity analysis on "λi" as a function of changes in the entries of the matrices. (Recall that the matrices are symmetric and so changing K"k"ℓ will also change Kℓ"k", hence the (2 − "δ""k"ℓ) term.) formula_99 Similarly formula_100 Eigenvalue sensitivity, a small example. A simple case is formula_101; however you can compute eigenvalues and eigenvectors with the help of online tools such as (see introduction in Wikipedia WIMS) or using Sage SageMath. You get the smallest eigenvalue formula_102 and an explicit computation formula_103; more over, an associated eigenvector is formula_104; it is not an unitary vector; so formula_105; we get formula_106 and formula_107 ; hence formula_108; for this example , we have checked that formula_109 or formula_110. Existence of eigenvectors. Note that in the above example we assumed that both the unperturbed and the perturbed systems involved symmetric matrices, which guaranteed the existence of formula_111 linearly independent eigenvectors. An eigenvalue problem involving non-symmetric matrices is not guaranteed to have formula_111 linearly independent eigenvectors, though a sufficient condition is that formula_19 and formula_20 be simultaneously diagonalizable. The case of repeated eigenvalues. A technical report of Rellich for perturbation of eigenvalue problems provides several examples. The elementary examples are in chapter 2. The report may be downloaded from archive.org. We draw an example in which the eigenvectors have a nasty behavior. Example 1. Consider the following matrix formula_112 and formula_113 formula_114 For formula_115, the matrix formula_116 has eigenvectors formula_117 belonging to eigenvalues formula_118. Since formula_119 for formula_120 if formula_121 are any normalized eigenvectors belonging to formula_122 respectively then formula_123 where formula_124 are real for formula_125 It is obviously impossible to define formula_126 , say, in such a way that formula_127 tends to a limit as formula_128 because formula_129 has no limit as formula_130 Note in this example that formula_131 is not only continuous but also has continuous derivatives of all orders. Rellich draws the following important consequence. « Since in general the individual eigenvectors do not depend continuously on the perturbation parameter even though the operator formula_132 does, it is necessary to work, not with an eigenvector, but rather with the space spanned by all the eigenvectors belonging to the same eigenvalue. » Example 2. This example is less nasty that the previous one. 
Suppose formula_133 is the 2x2 identity matrix; then any vector is an eigenvector, and formula_134 is one possible eigenvector. But if one makes a small perturbation, such as formula_135 then the eigenvectors become formula_136 and formula_137; they are constant with respect to formula_138, so that formula_139 is constant and does not go to zero.
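As a quick numerical illustration of the first order results above (which, as the examples just given show, require the eigenvalues to stay simple), the sketch below in Python with NumPy and SciPy checks that the predicted shift x_0i^T (delta K - lambda_0i delta M) x_0i agrees with the exact eigenvalue shift up to second order terms, and then repeats the small 2-by-2 sensitivity check, comparing a finite-difference derivative of the smallest eigenvalue with the product 2 x_01 x_02 formed from its unit eigenvector. The matrix size, random seed, perturbation scale and the value of b are arbitrary choices made for this illustration, not values taken from the text.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 5

def random_spd(n):
    # random symmetric positive definite matrix
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

K0, M0 = random_spd(n), random_spd(n)
lam0, X0 = eigh(K0, M0)   # columns satisfy K0 x = lambda M0 x with x^T M0 x = 1

eps = 1e-6
dK, dM = eps * random_spd(n), eps * random_spd(n)   # small symmetric perturbations

# first-order prediction of each eigenvalue shift
dlam_pred = np.array([X0[:, i] @ (dK - lam0[i] * dM) @ X0[:, i] for i in range(n)])
dlam_exact = eigh(K0 + dK, M0 + dM, eigvals_only=True) - lam0
print(abs(dlam_exact - dlam_pred).max())   # second-order small, far below eps

# 2-by-2 sensitivity check: K = [[2, b], [b, 0]], M = I; perturbing the
# off-diagonal entry gives d(lambda)/db = 2*x1*x2 for the unit eigenvector
b, db = 0.7, 1e-7
lam_b, V = np.linalg.eigh(np.array([[2.0, b], [b, 0.0]]))
lam_db = np.linalg.eigvalsh(np.array([[2.0, b + db], [b + db, 0.0]]))
print((lam_db[0] - lam_b[0]) / db, 2 * V[0, 0] * V[1, 0])   # both approximately equal

Both printed comparisons should agree up to higher-order terms in the perturbation size, which is the content of the first order expansion.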
[ { "math_id": 0, "text": " Ax=\\lambda x " }, { "math_id": 1, "text": " A_0 x_0=\\lambda_0x_0 " }, { "math_id": 2, "text": " x_{0i}, \\lambda_{0i}, i=1, \\dots n " }, { "math_id": 3, "text": " M " }, { "math_id": 4, "text": " K " }, { "math_id": 5, "text": " M \\ddot x+B \\dot x +Kx=0 " }, { "math_id": 6, "text": " B " }, { "math_id": 7, "text": " B=0" }, { "math_id": 8, "text": " x=e^{i \\omega t} u" }, { "math_id": 9, "text": "u " }, { "math_id": 10, "text": "\\omega^2 " }, { "math_id": 11, "text": " -\\omega^2 M u+Ku =0 " }, { "math_id": 12, "text": "\\mathbf{K}_0 \\mathbf{x}_{0i} = \\lambda_{0i} \\mathbf{M}_0 \\mathbf{x}_{0i}. \\qquad (0)" }, { "math_id": 13, "text": "\\mathbf{K}_0" }, { "math_id": 14, "text": "\\mathbf{M}_0" }, { "math_id": 15, "text": "\\mathbf{K} \\mathbf{x}_i = \\lambda_i \\mathbf{M} \\mathbf{x}_i \\qquad (1)" }, { "math_id": 16, "text": "\\begin{align}\n\\mathbf{K} &= \\mathbf{K}_0 + \\delta \\mathbf{K}\\\\\n\\mathbf{M} &= \\mathbf{M}_0 + \\delta \\mathbf{M}\n\\end{align}" }, { "math_id": 17, "text": "\\delta\\mathbf{K}" }, { "math_id": 18, "text": "\\delta\\mathbf{M}" }, { "math_id": 19, "text": "\\mathbf{K}" }, { "math_id": 20, "text": "\\mathbf{M}" }, { "math_id": 21, "text": "\\begin{align}\n\\lambda_i &= \\lambda_{0i}+\\delta\\lambda_{i} \\\\\n\\mathbf{x}_i &= \\mathbf{x}_{0i} + \\delta\\mathbf{x}_{i} \n\\end{align}" }, { "math_id": 22, "text": "\\mathbf{x}_{0j}^\\top \\mathbf{M}_0\\mathbf{x}_{0i} = \\delta_{ij}, \\quad \n" }, { "math_id": 23, "text": "\\mathbf{x}_{i}^T \\mathbf{M} \\mathbf{x}_{j}= \\delta_{ij} \\qquad(2)\n \n" }, { "math_id": 24, "text": "\\mathbf{K}\\mathbf{x}_i - \\lambda_i \\mathbf{M} \\mathbf{x}_i=0. " }, { "math_id": 25, "text": "(\\mathbf{K}_0+\\delta \\mathbf{K})(\\mathbf{x}_{0i} + \\delta \\mathbf{x}_{i}) = \\left (\\lambda_{0i}+\\delta\\lambda_{i} \\right ) \\left (\\mathbf{M}_0+ \\delta \\mathbf{M} \\right ) \\left (\\mathbf{x}_{0i}+\\delta\\mathbf{x}_{i} \\right )," }, { "math_id": 26, "text": "\\begin{align}\n\\mathbf{K}_0\\mathbf{x}_{0i} &+ \\delta \\mathbf{K}\\mathbf{x}_{0i} + \\mathbf{K}_0\\delta \\mathbf{x}_i + \\delta \\mathbf{K}\\delta \\mathbf{x}_i = \\\\[6pt]\n&\\lambda_{0i}\\mathbf{M}_0\\mathbf{x}_{0i}+\\lambda_{0i}\\mathbf{M}_0\\delta\\mathbf{x}_i + \\lambda_{0i} \\delta \\mathbf{M} \\mathbf{x}_{0i} +\\delta\\lambda_i\\mathbf{M}_0\\mathbf{x}_{0i} + \\\\\n& \\quad \\lambda_{0i} \\delta \\mathbf{M} \\delta\\mathbf{x}_i + \\delta\\lambda_i \\delta \\mathbf{M}\\mathbf{x}_{0i} + \\delta\\lambda_i\\mathbf{M}_0\\delta\\mathbf{x}_i + \\delta\\lambda_i \\delta \\mathbf{M} \\delta\\mathbf{x}_i.\n\\end{align}" }, { "math_id": 27, "text": "\\mathbf{K}_0 \\mathbf{x}_{0i} = \\lambda_{0i} \\mathbf{M}_0 \\mathbf{x}_{0i}" }, { "math_id": 28, "text": "\\begin{align}\n\\delta \\mathbf{K} \\mathbf{x}_{0i} + & \\mathbf{K}_0\\delta \\mathbf{x}_i + \\delta \\mathbf{K}\\delta \\mathbf{x}_i = \\lambda_{0i}\\mathbf{M}_0\\delta\\mathbf{x}_i + \\lambda_{0i} \\delta \\mathbf{M} \\mathbf{x}_{0i} + \\delta\\lambda_i\\mathbf{M}_0\\mathbf{x}_{0i} + \\\\ & \\lambda_{0i} \\delta \\mathbf{M} \\delta\\mathbf{x}_i + \\delta\\lambda_i \\delta \\mathbf{M} \\mathbf{x}_{0i} + \\delta\\lambda_i\\mathbf{M}_0\\delta\\mathbf{x}_i + \\delta\\lambda_i \\delta \\mathbf{M} \\delta\\mathbf{x}_i.\n\\end{align}" }, { "math_id": 29, "text": "\\mathbf{K}_0 \\delta\\mathbf{x}_i+ \\delta \\mathbf{K} \\mathbf{x}_{0i} = \\lambda_{0i}\\mathbf{M}_0 \\delta \\mathbf{x}_i + \\lambda_{0i}\\delta \\mathbf{M} \\mathrm{x}_{0i} + \\delta \\lambda_i \\mathbf{M}_0\\mathbf{x}_{0i}. 
\\qquad(3)" }, { "math_id": 30, "text": "\\delta \\lambda_i" }, { "math_id": 31, "text": "\\delta \\mathbf{x}_i = \\sum_{j=1}^N \\varepsilon_{ij} \\mathbf{x}_{0j} \\qquad (4) \\quad " }, { "math_id": 32, "text": " \\varepsilon_{ij}=\\mathbf{x}_{0j}^T M \\delta \\mathbf{x}_i " }, { "math_id": 33, "text": "\\delta\\mathbf{x}_j \\mathbf{M}_0 \\mathbf{x}_{0i} + \\mathbf{x}_{0j} \\mathbf{M}_0 \\delta \\mathbf{x}_{i} \n+ \\mathbf{x}_{0j} \\delta \\mathbf{M}_0 \\mathbf{x}_{0i}=0 \\quad{(5)}" }, { "math_id": 34, "text": "\\quad \\mathbf{K}_0 \\delta\\mathbf{x}_i+ \\delta \\mathbf{K} \\mathbf{x}_{0i} = \\lambda_{0i}\\mathbf{M}_0 \\delta \\mathbf{x}_i + \\lambda_{0i}\\delta \\mathbf{M} \\mathrm{x}_{0i} + \\delta \\lambda_i \\mathbf{M}_0\\mathbf{x}_{0i}; " }, { "math_id": 35, "text": "\\mathbf{x}_{0i}^T " }, { "math_id": 36, "text": " \\mathbf{x}_{0i}^T \\delta \\mathbf{K} \\mathbf{x}_{0i} = \\lambda_{0i} \\mathbf{x}_{0i}^T\\delta \\mathbf{M} \\mathrm{x}_{0i} + \\delta \\lambda_i\n" }, { "math_id": 37, "text": " \\delta \\lambda_i=\\mathbf{x}_{0i}^T \\delta \\mathbf{K} \\mathbf{x}_{0i} -\\lambda_{0i} \\mathbf{x}_{0i}^T\\delta \\mathbf{M} \\mathrm{x}_{0i}\n" }, { "math_id": 38, "text": "x_{0i}" }, { "math_id": 39, "text": " R(K,M;x_{0i})=x_{0i}^T K x_{0i}/x_{0i}^TMx_{0i}, \\text{ with }x_{0i}^TMx_{0i}=1 " }, { "math_id": 40, "text": "M=I" }, { "math_id": 41, "text": "\\delta \\lambda_i = x_{0i} ^T \\delta K x_{0i}" }, { "math_id": 42, "text": " x_{0j}^T " }, { "math_id": 43, "text": " j \\neq i " }, { "math_id": 44, "text": "\\mathbf{x}_{0j}^T\\mathbf{K}_0 \\delta\\mathbf{x}_i+ \\mathbf{x}_{0j}^T \\delta \\mathbf{K} \\mathbf{x}_{0i} = \\lambda_{0i} \\mathbf{x}_{0j}^T \\mathbf{M}_0 \\delta \\mathbf{x}_i + \\lambda_{0i} \\mathbf{x}_{0j}^T\\delta \\mathbf{M} \\mathrm{x}_{0i} + \\delta \\lambda_i \\mathbf{x}_{0j}^T\\mathbf{M}_0\\mathbf{x}_{0i}. " }, { "math_id": 45, "text": " \\mathbf{x}_{0j}^T K=\\lambda_{0j} \\mathbf{x}_{0j}^TM \\text{ and } \\mathbf{x}_{0j}^T\\mathbf{M}_0\\mathbf{x}_{0i}=0,\n" }, { "math_id": 46, "text": "\\lambda_{0j} \\mathbf{x}_{0j}^T\\mathbf{M}_0 \\delta\\mathbf{x}_i+ \\mathbf{x}_{0j}^T \\delta \\mathbf{K} \\mathbf{x}_{0i} = \\lambda_{0i} \\mathbf{x}_{0j}^T \\mathbf{M}_0 \\delta \\mathbf{x}_i + \\lambda_{0i} \\mathbf{x}_{0j}^T\\delta \\mathbf{M} \\mathrm{x}_{0i} . " }, { "math_id": 47, "text": "(\\lambda_{0j}-\\lambda_{0i}) \\mathbf{x}_{0j}^T\\mathbf{M}_0 \\delta\\mathbf{x}_i+ \\mathbf{x}_{0j}^T \\delta \\mathbf{K} \\mathbf{x}_{0i} = \\lambda_{0i} \\mathbf{x}_{0j}^T\\delta \\mathbf{M} \\mathrm{x}_{0i} . " }, { "math_id": 48, "text": " \\epsilon_{ij}=\\mathbf{x}_{0j}^T\\mathbf{M}_0 \\delta\\mathbf{x}_i =\\frac{-\\mathbf{x}_{0j}^T \\delta \\mathbf{K} \\mathbf{x}_{0i} + \\lambda_{0i} \\mathbf{x}_{0j}^T\\delta \\mathbf{M} \\mathrm{x}_{0i}}{ (\\lambda_{0j}-\\lambda_{0i})} , i=1, \\dots N; j=1, \\dots N; j \\neq i. " }, { "math_id": 49, "text": " 2 \\epsilon_{ii}=2 \\mathbf{x}_{0i}^T \\mathbf{M}_0 \\delta x_i=-\\mathbf{x}_{0i}^T \\delta M \\mathbf{x}_{0i} ." 
}, { "math_id": 50, "text": " \\delta x_i " }, { "math_id": 51, "text": "\\begin{align}\n\\mathbf{K}_0 \\sum_{j=1}^N \\varepsilon_{ij} \\mathbf{x}_{0j} + \\delta \\mathbf{K} \\mathbf{x}_{0i} &= \\lambda_{0i} \\mathbf{M}_0 \\sum_{j=1}^N \\varepsilon_{ij} \\mathbf{x}_{0j} + \\lambda_{0i} \\delta \\mathbf{M} \\mathbf{x}_{0i} + \\delta\\lambda_i \\mathbf{M}_0\\mathbf{x}_{0i} && (5) \\\\\n\\sum_{j=1}^N \\varepsilon_{ij} \\mathbf{K}_0 \\mathbf{x}_{0j} + \\delta \\mathbf{K} \\mathbf{x}_{0i} &= \\lambda_{0i} \\mathbf{M}_0 \\sum_{j=1}^N \\varepsilon_{ij} \\mathbf{x}_{0j} + \\lambda_{0i} \\delta \\mathbf{M} \\mathbf{x}_{0i} + \\delta\\lambda_i \\mathbf{M}_0 \\mathbf{x}_{0i} && \\\\ (\\text{applying } \\mathbf{K}_0 \\text{ to the sum} )\\\\\n\\sum_{j=1}^N \\varepsilon_{ij} \\lambda_{0j} \\mathbf{M}_0 \\mathbf{x}_{0j} + \\delta \\mathbf{K} \\mathbf{x}_{0i} &= \\lambda_{0i} \\mathbf{M}_0 \\sum_{j=1}^N \\varepsilon_{ij} \\mathbf{x}_{0j} + \\lambda_{0i} \\delta \\mathbf{M} \\mathbf{x}_{0i} + \\delta\\lambda_i \\mathbf{M}_0 \\mathbf{x}_{0i} && (\\text{using Eq. } (1) )\n\\end{align}" }, { "math_id": 52, "text": "\\mathbf{x}_{0i}^\\top" }, { "math_id": 53, "text": "\\mathbf{x}_{0i}^\\top \\varepsilon_{ii} \\lambda_{0i} \\mathbf{M}_0 \\mathbf{x}_{0i} + \\mathbf{x}_{0i}^\\top \\delta \\mathbf{K} \\mathbf{x}_{0i} = \\lambda_{0i} \\mathbf{x}_{0i}^\\top \\mathbf{M}_0 \\varepsilon_{ii} \\mathbf{x}_{0i} + \\lambda_{0i}\\mathbf{x}_{0i}^\\top \\delta \\mathbf{M} \\mathbf{x}_{0i} + \\delta\\lambda_i\\mathbf{x}_{0i}^\\top \\mathbf{M}_0 \\mathbf{x}_{0i}. " }, { "math_id": 54, "text": "\\mathbf{x}_{0i}^\\top \\mathbf{K}_0 \\varepsilon_{ii} \\mathbf{x}_{0i} + \\mathbf{x}_{0i}^\\top \\delta \\mathbf{K} \\mathbf{x}_{0i} = \\lambda_{0i} \\mathbf{x}_{0i}^\\top \\mathbf{M}_0\\varepsilon_{ii} \\mathbf{x}_{0i} + \\lambda_{0i}\\mathbf{x}_{0i}^\\top \\delta \\mathbf{M}\\mathbf{x}_{0i} + \\delta\\lambda_i\\mathbf{x}_{0i}^\\top \\mathbf{M}_0 \\mathbf{x}_{0i}. \\qquad (6) " }, { "math_id": 55, "text": "\\mathbf{x}_{0i}^\\top\\mathbf{K}_0\\mathbf{x}_{0i} = \\lambda_{0i}\\mathbf{x}_{0i}^\\top \\mathbf{M}_0 \\mathbf{x}_{0i}." }, { "math_id": 56, "text": "\\mathbf{x}_{0i}^\\top \\delta \\mathbf{K} \\mathbf{x}_{0i} = \\lambda_{0i} \\mathbf{x}_{0i}^\\top \\delta \\mathbf{M} \\mathbf{x}_{0i} + \\delta\\lambda_i \\mathbf{x}_{0i}^\\top \\mathbf{M}_0\\mathbf{x}_{0i}." }, { "math_id": 57, "text": "\\delta\\lambda_i = \\frac{\\mathbf{x}^\\top_{0i} \\left (\\delta \\mathbf{K}- \\lambda_{0i} \\delta \\mathbf{M} \\right )\\mathbf{x}_{0i}}{\\mathbf{x}_{0i}^\\top\\mathbf{M}_0 \\mathbf{x}_{0i}}" }, { "math_id": 58, "text": "\\delta\\lambda_i = \\mathbf{x}^\\top_{0i} \\left (\\delta \\mathbf{K} - \\lambda_{0i} \\delta \\mathbf{M} \\right )\\mathbf{x}_{0i}." }, { "math_id": 59, "text": "\\lambda_i \\neq \\lambda_k " }, { "math_id": 60, "text": "i \\neq k" }, { "math_id": 61, "text": "\\mathbf{x}_{0k}^\\top" }, { "math_id": 62, "text": "\\varepsilon_{ik} = \\frac{\\mathbf{x}^\\top_{0k} \\left (\\delta \\mathbf{K} - \\lambda_{0i}\\delta \\mathbf{M} \\right )\\mathbf{x}_{0i}}{\\lambda_{0i}-\\lambda_{0k}}, \\qquad i\\neq k." }, { "math_id": 63, "text": "\\varepsilon_{ij} = \\frac{\\mathbf{x}^\\top_{0j} \\left (\\delta \\mathbf{K} - \\lambda_{0i} \\delta \\mathbf{M} \\right )\\mathbf{x}_{0i}}{\\lambda_{0i}-\\lambda_{0j}}, \\qquad i\\neq j." }, { "math_id": 64, "text": "\\mathbf{x}^\\top_i \\mathbf{M} \\mathbf{x}_i = 1" }, { "math_id": 65, "text": "\\varepsilon_{ii}=-\\tfrac{1}{2}\\mathbf{x}^\\top_{0i} \\delta \\mathbf{M} \\mathbf{x}_{0i}." 
}, { "math_id": 66, "text": "\\begin{align}\n\\lambda_i &= \\lambda_{0i} + \\mathbf{x}^\\top_{0i} \\left (\\delta \\mathbf{K} - \\lambda_{0i}\\delta \\mathbf{M} \\right ) \\mathbf{x}_{0i} \\\\ \n\\mathbf{x}_i &= \\mathbf{x}_{0i} \\left (1 - \\tfrac{1}{2} \\mathbf{x}^\\top_{0i} \\delta \\mathbf{M} \\mathbf{x}_{0i} \\right ) + \\sum_{j=1\\atop j\\neq i}^N \\frac{\\mathbf{x}^\\top_{0j}\\left (\\delta \\mathbf{K} - \\lambda_{0i}\\delta \\mathbf{M} \\right ) \\mathbf{x}_{0i}}{\\lambda_{0i}-\\lambda_{0j}} \\mathbf{x}_{0j}\n\\end{align}" }, { "math_id": 67, "text": "f:\\R^{n+m} \\to \\R^m, \\; f: (x,y) \\mapsto f(x,y)" }, { "math_id": 68, "text": " J_{f,b}(x_0,y_0) " }, { "math_id": 69, "text": " (x_0,y_0) " }, { "math_id": 70, "text": "f(x_0,y_0)=0 " }, { "math_id": 71, "text": "f(x,y)=0 " }, { "math_id": 72, "text": " x" }, { "math_id": 73, "text": " x_0" }, { "math_id": 74, "text": " y=g(x)" }, { "math_id": 75, "text": " g" }, { "math_id": 76, "text": " g " }, { "math_id": 77, "text": " J_{f,y}(x,g(x)) J_{g,x}(x)+J_{f,x}(x,g(x))=0 \\quad (6) " }, { "math_id": 78, "text": " f(x_0+ \\delta x, y_0+\\delta y)=0 " }, { "math_id": 79, "text": " J_{f,x}(x,g(x)) \\delta x+ J_{f,y}(x,g(x))\\delta y=0 " }, { "math_id": 80, "text": "\\delta y=J_{g,x}(x) \\delta x " }, { "math_id": 81, "text": " (6) " }, { "math_id": 82, "text": " \\tilde{f}: \\R^{2n^2} \\times \\R^{n+1} \\to \\R^{n+1}" }, { "math_id": 83, "text": " \\tilde{f} (K,M, \\lambda,x)= \\binom{f(K,M,\\lambda,x)}{f_{n+1}(x)}" }, { "math_id": 84, "text": "f(K,M, \\lambda,x) =Kx -\\lambda x, f_{n+1}(M,x)=x^T Mx -1" }, { "math_id": 85, "text": " J_{\\tilde{f};\\lambda,x} (K,M;\\lambda_{0i},x_{0i})" }, { "math_id": 86, "text": " J_{\\tilde{f};\\lambda,x} (K,M;\\lambda_i,x_i)(\\delta \\lambda,\\delta x)=\\binom{-Mx_i}{0} \\delta \\lambda +\\binom{K-\\lambda M}{2 x_i^T M} \\delta x_i" }, { "math_id": 87, "text": "J_{\\tilde{f};\\lambda_{0i},x_{0i} } (K,M;\\lambda_{0i},x_{0i})(\\delta \\lambda_i,\\delta x_i)=" }, { "math_id": 88, "text": "\\binom{y}{y_{n+1}} " }, { "math_id": 89, "text": " \\delta \\lambda_i= -x_{0i}^T y, \\; \\text{ and } (\\lambda_{0i}-\\lambda_{0j})x_{0j}^T M \\delta x_i=x_j^T y, j=1, \\dots, n, j \\neq i\\;;\n" }, { "math_id": 90, "text": "\n \\text{ or }x_{0j}^T M \\delta x_i=x_j^T y/(\\lambda_{0i}-\\lambda_{0j}), \\text{ and }\n\\; 2x_{0i}^TM \\delta x_i=y_{n+1} " }, { "math_id": 91, "text": " \\lambda_i" }, { "math_id": 92, "text": "x_{0j}, j=1, \\dots,n " }, { "math_id": 93, "text": " (K,M) \\mapsto (\\lambda_i(K,M), x_i(K,M))" }, { "math_id": 94, "text": " \\lambda_i=\\lambda_{0i}+ \\delta \\lambda_i +o(\\| \\delta K \\|+\\|\\delta M \\|)" }, { "math_id": 95, "text": " x_i=x_{0i}+ \\delta x_i +o(\\| \\delta K \\|+\\|\\delta M \\|)" }, { "math_id": 96, "text": " \\delta \\lambda_i=\\mathbf{x}_{0i}^T \\delta \\mathbf{K} \\mathbf{x}_{0i} -\\lambda_{0i} \\mathbf{x}_{0i}^T\\delta \\mathbf{M} \\mathrm{x}_{0i};" }, { "math_id": 97, "text": " \\delta x_i=\\mathbf{x}_{0j}^T\\mathbf{M}_0 \\delta\\mathbf{x}_i \\mathbf{x}_{0j} \n\\text{ with}\n" }, { "math_id": 98, "text": " \\mathbf{x}_{0j}^T\\mathbf{M}_0 \\delta\\mathbf{x}_i =\\frac{-\\mathbf{x}_{0j}^T \\delta \\mathbf{K} \\mathbf{x}_{0i} + \\lambda_{0i} \\mathbf{x}_{0j}^T\\delta \\mathbf{M} \\mathrm{x}_{0i}}{ (\\lambda_{0j}-\\lambda_{0i})} , i=1, \\dots n; j=1, \\dots n; j \\neq i." 
}, { "math_id": 99, "text": "\\begin{align}\n\\frac{\\partial \\lambda_i}{\\partial \\mathbf{K}_{(k\\ell)}} &= \\frac{\\partial}{\\partial \\mathbf{K}_{(k\\ell)}}\\left(\\lambda_{0i} + \\mathbf{x}^\\top_{0i} \\left (\\delta \\mathbf{K} - \\lambda_{0i} \\delta \\mathbf{M} \\right ) \\mathbf{x}_{0i} \\right) = x_{0i(k)} x_{0i(\\ell)} \\left (2 - \\delta_{k\\ell} \\right ) \\\\\n\\frac{\\partial \\lambda_i}{\\partial \\mathbf{M}_{(k\\ell)}} &= \\frac{\\partial}{\\partial \\mathbf{M}_{(k\\ell)}}\\left(\\lambda_{0i} + \\mathbf{x}^\\top_{0i} \\left (\\delta \\mathbf{K} - \\lambda_{0i} \\delta \\mathbf{M} \\right ) \\mathbf{x}_{0i}\\right) = - \\lambda_i x_{0i(k)} x_{0i(\\ell)} \\left (2- \\delta_{k\\ell} \\right ).\n\\end{align}" }, { "math_id": 100, "text": "\\begin{align}\n\\frac{\\partial\\mathbf{x}_i}{\\partial \\mathbf{K}_{(k\\ell)}} &= \\sum_{j=1\\atop j\\neq i}^N \\frac{x_{0j(k)} x_{0i(\\ell)} \\left (2-\\delta_{k\\ell} \\right )}{\\lambda_{0i}-\\lambda_{0j}}\\mathbf{x}_{0j} \\\\\n\\frac{\\partial \\mathbf{x}_i}{\\partial \\mathbf{M}_{(k\\ell)}} &= -\\mathbf{x}_{0i}\\frac{x_{0i(k)}x_{0i(\\ell)}}{2}(2-\\delta_{k\\ell}) - \\sum_{j=1\\atop j\\neq i}^N \\frac{\\lambda_{0i}x_{0j(k)} x_{0i(\\ell)}}{\\lambda_{0i}-\\lambda_{0j}}\\mathbf{x}_{0j} \\left (2-\\delta_{k\\ell} \\right ).\n\\end{align}" }, { "math_id": 101, "text": "K=\\begin{bmatrix} 2 & b \\\\ b & 0 \\end{bmatrix}" }, { "math_id": 102, "text": "\\lambda=- \\left [\\sqrt{ b^2+1} +1 \\right]" }, { "math_id": 103, "text": "\\frac{\\partial \\lambda}{\\partial b}=\\frac{-x}{\\sqrt{x^2+1}}" }, { "math_id": 104, "text": "\\tilde x_0=[x,-(\\sqrt{x^2+1}+1))]^T" }, { "math_id": 105, "text": "x_{01}x_{02} = \\tilde x_{01} \\tilde x_{02}/\\| \\tilde x_0 \\|^2" }, { "math_id": 106, "text": "\\| \\tilde x_0 \\|^2=2 \\sqrt{x^2+1}(\\sqrt{x^2+1}+1)" }, { "math_id": 107, "text": "\\tilde x_{01} \\tilde x_{02} =-x (\\sqrt{x^2+1}+1)" }, { "math_id": 108, "text": "x_{01} x_{02}=-\\frac{x}{2 \\sqrt{x^2+1}}" }, { "math_id": 109, "text": "\\frac{\\partial \\lambda}{\\partial b}= 2x_{01} x_{02}" }, { "math_id": 110, "text": " \\delta \\lambda=2x_{01} x_{02} \\delta b" }, { "math_id": 111, "text": "N" }, { "math_id": 112, "text": " B(\\epsilon)= \\epsilon \\begin{bmatrix}\n \\cos(2/\\epsilon) &, \\sin(2/\\epsilon) \\\\\n \\sin(2/\\epsilon) &,s \\cos(2/\\epsilon) \n\\end{bmatrix} " }, { "math_id": 113, "text": "A(\\epsilon)=I- \ne^{-1/\\epsilon^2} B; " }, { "math_id": 114, "text": " A(0)=I. " }, { "math_id": 115, "text": "\\epsilon \\neq 0" }, { "math_id": 116, "text": "A(\\epsilon)" }, { "math_id": 117, "text": "\\Phi^1=[\\cos(1/\\epsilon), -\\sin(1/\\epsilon)]^T; \\Phi^2=[\\sin(1/\\epsilon), -\\cos(1/\\epsilon)]^T " }, { "math_id": 118, "text": " \\lambda_1= 1-e^{-1/\\epsilon^2)} , \\lambda_2= 1+e^{-1/\\epsilon^2)} " }, { "math_id": 119, "text": " \\lambda_1 \\neq \\lambda_2 " }, { "math_id": 120, "text": " \\epsilon \\neq 0 " }, { "math_id": 121, "text": " u^j (\\epsilon),\nj= 1,2, " }, { "math_id": 122, "text": " \\lambda_j(\\epsilon),j=1,2 " }, { "math_id": 123, "text": " u^j=e^{\\alpha_j(\\epsilon)} \\Phi^j(\\epsilon) " }, { "math_id": 124, "text": " \\alpha_j , j=1,2 " }, { "math_id": 125, "text": " \\epsilon \\neq 0 ." }, { "math_id": 126, "text": " \\alpha_1(\\epsilon) \n" }, { "math_id": 127, "text": " u^1 (\\epsilon) " }, { "math_id": 128, "text": " \\epsilon \\rightarrow 0 ," }, { "math_id": 129, "text": " |u^1(\\epsilon)|=|\\cos(1/\\epsilon)|" }, { "math_id": 130, "text": " \\epsilon \\rightarrow 0 ." 
}, { "math_id": 131, "text": " A_{jk} (\\epsilon) " }, { "math_id": 132, "text": " A(\\epsilon) " }, { "math_id": 133, "text": "[K_0]" }, { "math_id": 134, "text": "u_0=[1, 1]^T/\\sqrt{2}" }, { "math_id": 135, "text": "[K] = [K_0] + \\begin{bmatrix}\\epsilon & 0 \\\\0 & 0 \\end{bmatrix} " }, { "math_id": 136, "text": "v_1=[1, 0]^T" }, { "math_id": 137, "text": "v_2=[0, 1]^T" }, { "math_id": 138, "text": " \\epsilon " }, { "math_id": 139, "text": " \\|u_0-v_1 \\| " } ]
https://en.wikipedia.org/wiki?curid=10465001
1046736
Partial least squares regression
Statistical method Partial least squares (PLS) regression is a statistical method that bears some relation to principal components regression; instead of finding hyperplanes of maximum variance between the response and independent variables, it finds a linear regression model by projecting the predicted variables and the observable variables to a new space. Because both the "X" and "Y" data are projected to new spaces, the PLS family of methods are known as bilinear factor models. Partial least squares discriminant analysis (PLS-DA) is a variant used when the "Y" is categorical. PLS is used to find the fundamental relations between two matrices ("X" and "Y"), i.e. a latent variable approach to modeling the covariance structures in these two spaces. A PLS model will try to find the multidimensional direction in the "X" space that explains the maximum multidimensional variance direction in the "Y" space. PLS regression is particularly suited when the matrix of predictors has more variables than observations, and when there is multicollinearity among "X" values. By contrast, standard regression will fail in these cases (unless it is regularized). Partial least squares was introduced by the Swedish statistician Herman O. A. Wold, who then developed it with his son, Svante Wold. An alternative term for PLS is projection to latent structures, but the term "partial least squares" is still dominant in many areas. Although the original applications were in the social sciences, PLS regression is today most widely used in chemometrics and related areas. It is also used in bioinformatics, sensometrics, neuroscience, and anthropology. Core idea. We are given a sample of formula_0 paired observations formula_1. In the first step formula_2, the partial least squares regression searches for the normalized direction formula_3, formula_4 that maximizes the covariance formula_5 Note below, the algorithm is denoted in matrix notation. Underlying model. The general underlying model of multivariate PLS with formula_6 components is formula_7 formula_8 where The decompositions of X and Y are made so as to maximise the covariance between T and U. Note that this covariance is defined pair by pair: the covariance of column "i" of T (length "n") with the column "i" of U (length "n") is maximized. Additionally, the covariance of the column i of T with the column "j" of U (with formula_14) is zero. In PLSR, the loadings are thus chosen so that the scores form an orthogonal basis. This is a major difference with PCA where orthogonality is imposed onto loadings (and not the scores). Algorithms. A number of variants of PLS exist for estimating the factor and loading matrices T, U, P and Q. Most of them construct estimates of the linear regression between X and Y as formula_15. Some PLS algorithms are only appropriate for the case where Y is a column vector, while others deal with the general case of a matrix Y. Algorithms also differ on whether they estimate the factor matrix T as an orthogonal (that is, orthonormal) matrix or not. The final prediction will be the same for all these varieties of PLS, but the components will differ. PLS is composed of iteratively repeating the following steps "k" times (for "k" components): PLS1. PLS1 is a widely used algorithm appropriate for the vector Y case. It estimates T as an orthonormal matrix. In pseudocode it is expressed below (capital letters are matrices, lower case letters are vectors if they are superscripted and scalars if they are subscripted). 
1 function PLS1(X, y, ℓ) 2 formula_18 3 formula_19, an initial estimate of w. 4 for formula_20 to formula_21 5 formula_22 6 formula_23 (note this is a scalar) 7 formula_24 8 formula_25 9 formula_26 (note this is a scalar) 10 if formula_27 11 formula_28, break the for loop 12 if formula_29 13 formula_30 14 formula_31 15 end for 16 define W to be the matrix with columns formula_32. Do the same to form the P matrix and q vector. 17 formula_33 18 formula_34 19 return formula_35 This form of the algorithm does not require centering of the input X and Y, as this is performed implicitly by the algorithm. This algorithm features 'deflation' of the matrix X (subtraction of formula_36), but deflation of the vector y is not performed, as it is not necessary (it can be proved that deflating y yields the same results as not deflating). The user-supplied variable l is the limit on the number of latent factors in the regression; if it equals the rank of the matrix X, the algorithm will yield the least squares regression estimates for B and formula_37 Extensions. OPLS. In 2002 a new method was published called orthogonal projections to latent structures (OPLS). In OPLS, continuous variable data is separated into predictive and uncorrelated (orthogonal) information. This leads to improved diagnostics, as well as more easily interpreted visualization. However, these changes only improve the interpretability, not the predictivity, of the PLS models. Similarly, OPLS-DA (Discriminant Analysis) may be applied when working with discrete variables, as in classification and biomarker studies. The general underlying model of OPLS is formula_38 formula_8 or in O2-PLS formula_38 formula_39 L-PLS. Another extension of PLS regression, named L-PLS for its L-shaped matrices, connects 3 related data blocks to improve predictability. In brief, a new "Z" matrix, with the same number of columns as the "X" matrix, is added to the PLS regression analysis and may be suitable for including additional background information on the interdependence of the predictor variables. 3PRF. In 2015 partial least squares was related to a procedure called the three-pass regression filter (3PRF). Supposing the number of observations and variables are large, the 3PRF (and hence PLS) is asymptotically normal for the "best" forecast implied by a linear latent factor model. In stock market data, PLS has been shown to provide accurate out-of-sample forecasts of returns and cash-flow growth. Partial least squares SVD. A PLS version based on singular value decomposition (SVD) provides a memory efficient implementation that can be used to address high-dimensional problems, such as relating millions of genetic markers to thousands of imaging features in imaging genetics, on consumer-grade hardware. PLS correlation. PLS correlation (PLSC) is another methodology related to PLS regression, which has been used in neuroimaging and sport science, to quantify the strength of the relationship between data sets. Typically, PLSC divides the data into two blocks (sub-groups) each containing one or more variables, and then uses singular value decomposition (SVD) to establish the strength of any relationship (i.e. the amount of shared information) that might exist between the two component sub-groups. It does this by using SVD to determine the inertia (i.e. the sum of the singular values) of the covariance matrix of the sub-groups under consideration. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
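For readers who prefer runnable code, the following is one possible NumPy transcription of the PLS1 pseudocode above for a single response vector y. It follows the numbered steps as literally as possible; the one interpretive choice is the intercept step, where P^(0) is read as the first loading vector p^(0), since the pseudocode does not define that symbol explicitly. The function and variable names (pls1, n_components) are illustrative, not taken from the text.

import numpy as np

def pls1(X, y, n_components):
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    m = X.shape[1]
    W = np.zeros((m, n_components))   # weight vectors w^(k)
    P = np.zeros((m, n_components))   # loading vectors p^(k)
    q = np.zeros(n_components)        # scalars q_k
    Xk = X.copy()
    w = X.T @ y
    w = w / np.linalg.norm(w)         # step 3: w^(0)
    l = n_components
    for k in range(n_components):
        t = Xk @ w                    # step 5
        tk = t @ t                    # step 6 (scalar)
        t = t / tk                    # step 7 (divided by t^T t, as in the pseudocode)
        p = Xk.T @ t                  # step 8
        qk = y @ t                    # step 9 (scalar)
        if qk == 0:                   # steps 10-11
            l = k
            break
        W[:, k], P[:, k], q[k] = w, p, qk
        if k < n_components - 1:      # steps 12-14: deflate X, update w
            Xk = Xk - tk * np.outer(t, p)
            w = Xk.T @ y
    W, P, q = W[:, :l], P[:, :l], q[:l]          # step 16
    B = W @ np.linalg.solve(P.T @ W, q)          # step 17: B = W (P^T W)^{-1} q
    B0 = q[0] - P[:, 0] @ B                      # step 18, with P^(0) read as p^(0)
    return B, B0

Predictions from a fitted model are then formed as y ≈ X B + B_0, matching the regression estimate described earlier.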
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "(\\vec{x}_i, \\vec{y}_i), i \\in {1,\\ldots,n}" }, { "math_id": 2, "text": "j=1" }, { "math_id": 3, "text": "\\vec{p}_j" }, { "math_id": 4, "text": "\\vec{q}_j" }, { "math_id": 5, "text": "\\max_{\\vec{p}_j, \\vec{q}_j} \\operatorname E [\\underbrace{(\\vec{p}_j\\cdot \\vec{X})}_{t_j} \\underbrace{(\\vec{q}_j\\cdot \\vec{Y})}_{u_j} ]. " }, { "math_id": 6, "text": "l" }, { "math_id": 7, "text": "X = T P^\\mathrm{T} + E" }, { "math_id": 8, "text": "Y = U Q^\\mathrm{T} + F" }, { "math_id": 9, "text": "n \\times m" }, { "math_id": 10, "text": "n \\times p" }, { "math_id": 11, "text": "n \\times \\ell" }, { "math_id": 12, "text": "m \\times \\ell" }, { "math_id": 13, "text": "p \\times \\ell" }, { "math_id": 14, "text": "i \\ne j" }, { "math_id": 15, "text": "Y = X \\tilde{B} + \\tilde{B}_0" }, { "math_id": 16, "text": "X" }, { "math_id": 17, "text": "Y" }, { "math_id": 18, "text": "X^{(0)} \\gets X" }, { "math_id": 19, "text": "w^{(0)} \\gets X^\\mathrm{T} y/\\|X^\\mathrm{T}y\\|" }, { "math_id": 20, "text": "k = 0" }, { "math_id": 21, "text": "\\ell-1" }, { "math_id": 22, "text": "t^{(k)} \\gets X^{(k)}w^{(k)}" }, { "math_id": 23, "text": "t_k \\gets {t^{(k)}}^\\mathrm{T} t^{(k)}" }, { "math_id": 24, "text": "t^{(k)} \\gets t^{(k)} / t_k" }, { "math_id": 25, "text": "p^{(k)} \\gets {X^{(k)}}^\\mathrm{T} t^{(k)}" }, { "math_id": 26, "text": "q_k \\gets {y}^\\mathrm{T} t^{(k)}" }, { "math_id": 27, "text": "q_k = 0" }, { "math_id": 28, "text": "\\ell \\gets k" }, { "math_id": 29, "text": "k < (\\ell-1)" }, { "math_id": 30, "text": "X^{(k+1)} \\gets X^{(k)} - t_k t^{(k)} {p^{(k)}}^\\mathrm{T}" }, { "math_id": 31, "text": "w^{(k+1)} \\gets {X^{(k+1)}}^\\mathrm{T} y " }, { "math_id": 32, "text": "w^{(0)},w^{(1)},\\ldots,w^{(\\ell-1)}" }, { "math_id": 33, "text": "B \\gets W {(P^\\mathrm{T} W)}^{-1} q" }, { "math_id": 34, "text": "B_0 \\gets q_0 - {P^{(0)}}^\\mathrm{T} B" }, { "math_id": 35, "text": "B, B_0" }, { "math_id": 36, "text": "t_k t^{(k)} {p^{(k)}}^\\mathrm{T}" }, { "math_id": 37, "text": "B_0" }, { "math_id": 38, "text": "X = T P^\\mathrm{T} +T_\\text{Y-orth} P^\\mathrm{T}_\\text{Y-orth} + E" }, { "math_id": 39, "text": "Y = U Q^\\mathrm{T} +U_\\text{X-orth} Q^\\mathrm{T}_\\text{X-orth} + F" } ]
https://en.wikipedia.org/wiki?curid=1046736
10468961
Elliptic boundary value problem
In mathematics, an elliptic boundary value problem is a special kind of boundary value problem which can be thought of as the stable state of an evolution problem. For example, the Dirichlet problem for the Laplacian gives the eventual distribution of heat in a room several hours after the heating is turned on. Differential equations describe a large class of natural phenomena, from the heat equation describing the evolution of heat in (for instance) a metal plate, to the Navier-Stokes equation describing the movement of fluids, including Einstein's equations describing the physical universe in a relativistic way. Although all these equations are boundary value problems, they are further subdivided into categories. This is necessary because each category must be analyzed using different techniques. The present article deals with the category of boundary value problems known as linear elliptic problems. Boundary value problems and partial differential equations specify relations between two or more quantities. For instance, in the heat equation, the rate of change of temperature at a point is related to the difference of temperature between that point and the nearby points so that, over time, the heat flows from hotter points to cooler points. Boundary value problems can involve space, time and other quantities such as temperature, velocity, pressure, magnetic field, etc. Some problems do not involve time. For instance, if one hangs a clothesline between the house and a tree, then in the absence of wind, the clothesline will not move and will adopt a gentle hanging curved shape known as the catenary. This curved shape can be computed as the solution of a differential equation relating position, tension, angle and gravity, but since the shape does not change over time, there is no time variable. Elliptic boundary value problems are a class of problems which do not involve the time variable, and instead only depend on space variables. The main example. In two dimensions, let formula_0 be the coordinates. We will use the notation formula_1 for the first and second partial derivatives of formula_2 with respect to formula_3, and a similar notation for formula_4. We will use the symbols formula_5 and formula_6 for the partial differential operators in formula_3 and formula_4. The second partial derivatives will be denoted formula_7 and formula_8. We also define the gradient formula_9, the Laplace operator formula_10 and the divergence formula_11. Note from the definitions that formula_12. The main example for boundary value problems is the Laplace operator, formula_13 formula_14 where formula_15 is a region in the plane and formula_16 is the boundary of that region. The function formula_17 is known data and the solution formula_2 is what must be computed. This example has the same essential properties as all other elliptic boundary value problems. The solution formula_2 can be interpreted as the stationary or limit distribution of heat in a metal plate shaped like formula_15, if this metal plate has its boundary adjacent to ice (which is kept at zero degrees, thus the Dirichlet boundary condition.) The function formula_17 represents the intensity of heat generation at each point in the plate (perhaps there is an electric heater resting on the metal plate, pumping heat into the plate at rate formula_18, which does not vary over time, but may be nonuniform in space on the metal plate.) After waiting for a long time, the temperature distribution in the metal plate will approach formula_2. 
Nomenclature. Let formula_19 where formula_20 and formula_21 are constants. formula_22 is called a second order differential operator. If we formally replace the derivatives formula_5 by formula_3 and formula_6 by formula_4, we obtain the expression formula_23. If we set this expression equal to some constant formula_24, then we obtain either an ellipse (if formula_25 are all the same sign) or a hyperbola (if formula_20 and formula_21 are of opposite signs.) For that reason, formula_26 is said to be elliptic when formula_27 and hyperbolic if formula_28. Similarly, the operator formula_29 leads to a parabola, and so this formula_26 is said to be parabolic. We now generalize the notion of ellipticity. While it may not be obvious that our generalization is the right one, it turns out that it does preserve most of the necessary properties for the purpose of analysis. General linear elliptic boundary value problems of the second degree. Let formula_30 be the space variables. Let formula_31 be real valued functions of formula_32. Let formula_26 be a second degree linear operator. That is, formula_33 (divergence form). formula_34 (nondivergence form) We have used the subscript formula_35 to denote the partial derivative with respect to the space variable formula_36. The two formulae are equivalent, provided that formula_37. In matrix notation, we can let formula_38 be an formula_39 matrix valued function of formula_3 and formula_40 be a formula_41-dimensional column vector-valued function of formula_3, and then we may write formula_42 (divergence form). One may assume, without loss of generality, that the matrix formula_20 is symmetric (that is, for all formula_43, formula_44. We make that assumption in the rest of this article. We say that the operator formula_26 is "elliptic" if, for some constant formula_45, any of the following equivalent conditions hold: An elliptic boundary value problem is then a system of equations like formula_49 (the PDE) and formula_50 (the boundary value). This particular example is the Dirichlet problem. The Neumann problem is formula_49 and formula_51 where formula_52 is the derivative of formula_2 in the direction of the outwards pointing normal of formula_16. In general, if formula_53 is any trace operator, one can construct the boundary value problem formula_49 and formula_54. In the rest of this article, we assume that formula_26 is elliptic and that the boundary condition is the Dirichlet condition formula_55. Sobolev spaces. The analysis of elliptic boundary value problems requires some fairly sophisticated tools of functional analysis. We require the space formula_56, the Sobolev space of "once-differentiable" functions on formula_15, such that both the function formula_2 and its partial derivatives formula_57, formula_58 are all square integrable. There is a subtlety here in that the partial derivatives must be defined "in the weak sense" (see the article on Sobolev spaces for details.) The space formula_59 is a Hilbert space, which accounts for much of the ease with which these problems are analyzed. The discussion in details of Sobolev spaces is beyond the scope of this article, but we will quote required results as they arise. Unless otherwise noted, all derivatives in this article are to be interpreted in the weak, Sobolev sense. We use the term "strong derivative" to refer to the classical derivative of calculus. 
We also specify that the spaces formula_60, formula_61 consist of functions that are formula_24 times strongly differentiable, and that the formula_24th derivative is continuous. Weak or variational formulation. The first step to cast the boundary value problem as in the language of Sobolev spaces is to rephrase it in its weak form. Consider the Laplace problem formula_62. Multiply each side of the equation by a "test function" formula_63 and integrate by parts using Green's theorem to obtain formula_64. We will be solving the Dirichlet problem, so that formula_65. For technical reasons, it is useful to assume that formula_63 is taken from the same space of functions as formula_2 is so we also assume that formula_66. This gets rid of the formula_67 term, yielding formula_68 (*) where formula_69 and formula_70. If formula_26 is a general elliptic operator, the same reasoning leads to the bilinear form formula_71. We do not discuss the Neumann problem but note that it is analyzed in a similar way. Continuous and coercive bilinear forms. The map formula_72 is defined on the Sobolev space formula_73 of functions which are once differentiable and zero on the boundary formula_16, provided we impose some conditions on formula_74 and formula_15. There are many possible choices, but for the purpose of this article, we will assume that The reader may verify that the map formula_72 is furthermore bilinear and continuous, and that the map formula_81 is linear in formula_63, and continuous if (for instance) formula_17 is square integrable. We say that the map formula_82 is coercive if there is an formula_45 for all formula_83, formula_84 This is trivially true for the Laplacian (with formula_85) and is also true for an elliptic operator if we assume formula_86 and formula_87. (Recall that formula_88 when formula_26 is elliptic.) Existence and uniqueness of the weak solution. One may show, via the Lax–Milgram lemma, that whenever formula_72 is coercive and formula_81 is continuous, then there exists a unique solution formula_89 to the weak problem (*). If further formula_72 is symmetric (i.e., formula_90), one can show the same result using the Riesz representation theorem instead. This relies on the fact that formula_72 forms an inner product on formula_91, which itself depends on Poincaré's inequality. Strong solutions. We have shown that there is a formula_89 which solves the weak system, but we do not know if this formula_2 solves the strong system formula_92 formula_93 Even more vexing is that we are not even sure that formula_2 is twice differentiable, rendering the expressions formula_94 in formula_95 apparently meaningless. There are many ways to remedy the situation, the main one being regularity. Regularity. A regularity theorem for a linear elliptic boundary value problem of the second order takes the form Theorem "If (some condition), then the solution formula_2 is in formula_96, the space of "twice differentiable" functions whose second derivatives are square integrable." There is no known simple condition necessary and sufficient for the conclusion of the theorem to hold, but the following conditions are known to be sufficient: It may be tempting to infer that if formula_16 is piecewise formula_97 then formula_2 is indeed in formula_98, but that is unfortunately false. Almost everywhere solutions. In the case that formula_99 then the second derivatives of formula_2 are defined almost everywhere, and in that case formula_100 almost everywhere. Strong solutions. 
One may further prove that if the boundary of formula_101 is a smooth manifold and formula_17 is infinitely differentiable in the strong sense, then formula_2 is also infinitely differentiable in the strong sense. In this case, formula_100 with the strong definition of the derivative. The proof of this relies upon an improved regularity theorem that says that if formula_16 is formula_60 and formula_102, formula_103, then formula_104, together with a Sobolev imbedding theorem saying that functions in formula_105 are also in formula_106 whenever formula_107. Numerical solutions. While in exceptional circumstances, it is possible to solve elliptic problems explicitly, in general it is an impossible task. The natural solution is to approximate the elliptic problem with a simpler one and to solve this simpler problem on a computer. Because of the good properties we have enumerated (as well as many we have not), there are extremely efficient numerical solvers for linear elliptic boundary value problems (see finite element method, finite difference method and spectral method for examples.) Eigenvalues and eigensolutions. Another Sobolev imbedding theorem states that the inclusion formula_108 is a compact linear map. Equipped with the spectral theorem for compact linear operators, one obtains the following result. Theorem "Assume that formula_72 is coercive, continuous and symmetric. The map formula_109 from formula_110 to formula_110 is a compact linear map. It has a basis of eigenvectors formula_111 and matching eigenvalues formula_112 such that" Series solutions and the importance of eigensolutions. If one has computed the eigenvalues and eigenvectors, then one may find the "explicit" solution of formula_100, formula_121 via the formula formula_122 where formula_123 The series converges in formula_124. Implemented on a computer using numerical approximations, this is known as the spectral method. An example. Consider the problem formula_125 on formula_126 formula_127 (Dirichlet conditions). The reader may verify that the eigenvectors are exactly formula_128, formula_129 with eigenvalues formula_130 The Fourier coefficients of formula_131 can be looked up in a table, getting formula_132. Therefore, formula_133 yielding the solution formula_134 Maximum principle. There are many variants of the maximum principle. We give a simple one. Theorem. "(Weak maximum principle.) Let formula_135, and assume that formula_136. Say that formula_137 in formula_15. Then formula_138. In other words, the maximum is attained on the boundary." A strong maximum principle would conclude that formula_139 for all formula_140 unless formula_2 is constant. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
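To make the spectral recipe concrete, the sketch below solves the example problem u - u_xx - u_yy = xy on the unit square with zero boundary data by truncating the eigenfunction expansion, and cross-checks the result against a plain finite-difference solve. The cross-check, the SciPy sparse calls and the truncation parameters J and N are choices made for this illustration only; the sketch works with the L2-normalised eigenfunctions 2 sin(pi j x) sin(pi k y), so the constant appearing in front of each term of the series depends on that normalisation convention.

import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

J = 40                      # number of sine modes kept in each direction
N = 99                      # interior grid points per direction, h = 1/(N+1)
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)
X, Y = np.meshgrid(x, x, indexing="ij")

# Truncated eigenfunction expansion: with phi_jk = 2 sin(pi j x) sin(pi k y),
# u_hat(j,k) = <f, phi_jk> / (1 + pi^2 j^2 + pi^2 k^2) for f(x,y) = x*y.
j = np.arange(1, J + 1)
g = (-1.0) ** (j + 1) / (np.pi * j)                    # integral of x sin(pi j x) over (0,1)
denom = 1.0 + np.pi ** 2 * (j[:, None] ** 2 + j[None, :] ** 2)
C = 4.0 * np.outer(g, g) / denom                       # coefficient of sin(pi j x) sin(pi k y)
S = np.sin(np.pi * np.outer(x, j))                     # sine values on the grid
u_series = S @ C @ S.T

# Finite-difference cross-check of (I - Laplacian) u = x*y with the 5-point stencil.
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(N, N)) / h ** 2
A = identity(N * N) + kron(identity(N), T) + kron(T, identity(N))
u_fd = spsolve(A.tocsc(), (X * Y).ravel()).reshape(N, N)

print(np.abs(u_series - u_fd).max())   # small; limited by the O(h^2) grid error and the truncation

Increasing J and refining the grid shrinks the printed difference, which is the usual way to sanity-check an implementation of the spectral method mentioned above.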
[ { "math_id": 0, "text": "x,y" }, { "math_id": 1, "text": "u_x, u_{xx}" }, { "math_id": 2, "text": "u" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "y" }, { "math_id": 5, "text": "D_x" }, { "math_id": 6, "text": "D_y" }, { "math_id": 7, "text": "D_x^2" }, { "math_id": 8, "text": "D_y^2" }, { "math_id": 9, "text": "\\nabla u = (u_x,u_y)" }, { "math_id": 10, "text": "\\Delta u = u_{xx}+u_{yy}" }, { "math_id": 11, "text": "\\nabla \\cdot (u,v) = u_x + v_y" }, { "math_id": 12, "text": "\\Delta u = \\nabla \\cdot (\\nabla u)" }, { "math_id": 13, "text": "\\Delta u = f \\text{ in }\\Omega," }, { "math_id": 14, "text": "u = 0 \\text { on }\\partial \\Omega;" }, { "math_id": 15, "text": "\\Omega" }, { "math_id": 16, "text": "\\partial \\Omega" }, { "math_id": 17, "text": "f" }, { "math_id": 18, "text": "f(x)" }, { "math_id": 19, "text": "Lu=a u_{xx} + b u_{yy}" }, { "math_id": 20, "text": "a" }, { "math_id": 21, "text": "b" }, { "math_id": 22, "text": "L=aD_x^2+bD_y^2" }, { "math_id": 23, "text": "a x^2 + b y^2" }, { "math_id": 24, "text": "k" }, { "math_id": 25, "text": "a,b,k" }, { "math_id": 26, "text": "L" }, { "math_id": 27, "text": "ab>0" }, { "math_id": 28, "text": "ab<0" }, { "math_id": 29, "text": "L=D_x+D_y^2" }, { "math_id": 30, "text": "x_1,...,x_n" }, { "math_id": 31, "text": "a_{ij}(x), b_i(x), c(x)" }, { "math_id": 32, "text": "x=(x_1,...,x_n)" }, { "math_id": 33, "text": "Lu(x)=\\sum_{i,j=1}^n (a_{ij} (x) u_{x_i})_{x_j} + \\sum_{i=1}^n b_i(x) u_{x_i}(x) + c(x) u(x)" }, { "math_id": 34, "text": "Lu(x)=\\sum_{i,j=1}^n a_{ij} (x) u_{x_i x_j} + \\sum_{i=1}^n \\tilde b_i(x) u_{x_i}(x) + c(x) u(x)" }, { "math_id": 35, "text": "\\cdot_{x_i}" }, { "math_id": 36, "text": "x_i" }, { "math_id": 37, "text": "\\tilde b_i(x) = b_i(x) + \\sum_j a_{ij,x_j}(x)" }, { "math_id": 38, "text": "a(x)" }, { "math_id": 39, "text": "n \\times n" }, { "math_id": 40, "text": "b(x)" }, { "math_id": 41, "text": "n" }, { "math_id": 42, "text": "Lu = \\nabla \\cdot (a \\nabla u) + b^T \\nabla u + c u" }, { "math_id": 43, "text": "i,j,x" }, { "math_id": 44, "text": "a_{ij}(x)=a_{ji}(x)" }, { "math_id": 45, "text": "\\alpha>0" }, { "math_id": 46, "text": "\\lambda_{\\min} (a(x)) > \\alpha \\;\\;\\; \\forall x" }, { "math_id": 47, "text": "u^T a(x) u > \\alpha u^T u \\;\\;\\; \\forall u \\in \\mathbb{R}^n" }, { "math_id": 48, "text": "\\sum_{i,j=1}^n a_{ij} u_i u_j > \\alpha \\sum_{i=1}^n u_i^2 \\;\\;\\; \\forall u \\in \\mathbb{R}^n" }, { "math_id": 49, "text": "Lu=f \\text{ in } \\Omega" }, { "math_id": 50, "text": "u=0 \\text{ on } \\partial \\Omega" }, { "math_id": 51, "text": "u_\\nu = g \\text{ on } \\partial \\Omega" }, { "math_id": 52, "text": "u_\\nu" }, { "math_id": 53, "text": "B" }, { "math_id": 54, "text": "Bu=g \\text{ on } \\partial \\Omega" }, { "math_id": 55, "text": "u=0 \\text{ on }\\partial \\Omega" }, { "math_id": 56, "text": "H^1(\\Omega)" }, { "math_id": 57, "text": "u_{x_i}" }, { "math_id": 58, "text": "i=1,\\dots,n" }, { "math_id": 59, "text": "H^1" }, { "math_id": 60, "text": "C^k" }, { "math_id": 61, "text": "k=0,1,\\dots" }, { "math_id": 62, "text": "\\Delta u = f" }, { "math_id": 63, "text": "\\varphi" }, { "math_id": 64, "text": "-\\int_\\Omega \\nabla u \\cdot \\nabla \\varphi + \\int_{\\partial \\Omega} u_\\nu \\varphi = \\int_\\Omega f \\varphi" }, { "math_id": 65, "text": "u=0\\text{ on }\\partial \\Omega" }, { "math_id": 66, "text": "\\varphi=0\\text{ on }\\partial \\Omega" }, { "math_id": 67, "text": "\\int_{\\partial \\Omega}" }, { "math_id": 68, "text": 
"A(u,\\varphi) = F(\\varphi)" }, { "math_id": 69, "text": "A(u,\\varphi) = \\int_\\Omega \\nabla u \\cdot \\nabla \\varphi" }, { "math_id": 70, "text": "F(\\varphi) = -\\int_\\Omega f \\varphi" }, { "math_id": 71, "text": "A(u,\\varphi) = \\int_\\Omega \\nabla u ^T a \\nabla \\varphi - \\int_\\Omega b^T \\nabla u \\varphi - \\int_\\Omega c u \\varphi" }, { "math_id": 72, "text": "A(u,\\varphi)" }, { "math_id": 73, "text": "H^1_0\\subset H^1" }, { "math_id": 74, "text": "a,b,c" }, { "math_id": 75, "text": "a_{ij}(x)" }, { "math_id": 76, "text": "\\bar\\Omega" }, { "math_id": 77, "text": "i,j=1,\\dots,n," }, { "math_id": 78, "text": "b_i(x)" }, { "math_id": 79, "text": "i=1,\\dots,n," }, { "math_id": 80, "text": "c(x)" }, { "math_id": 81, "text": "F(\\varphi)" }, { "math_id": 82, "text": "A" }, { "math_id": 83, "text": "u,\\varphi \\in H_0^1(\\Omega)" }, { "math_id": 84, "text": "A(u,\\varphi) \\geq \\alpha \\int_\\Omega \\nabla u \\cdot \\nabla \\varphi." }, { "math_id": 85, "text": "\\alpha=1" }, { "math_id": 86, "text": "b = 0" }, { "math_id": 87, "text": "c \\leq 0" }, { "math_id": 88, "text": "u^T a u > \\alpha u^T u" }, { "math_id": 89, "text": "u\\in H_0^1(\\Omega)" }, { "math_id": 90, "text": "b=0" }, { "math_id": 91, "text": "H_0^1(\\Omega)" }, { "math_id": 92, "text": "Lu=f\\text{ in }\\Omega," }, { "math_id": 93, "text": "u=0\\text{ on }\\partial \\Omega," }, { "math_id": 94, "text": "u_{x_i x_j}" }, { "math_id": 95, "text": "Lu" }, { "math_id": 96, "text": "H^2(\\Omega)" }, { "math_id": 97, "text": "C^2" }, { "math_id": 98, "text": "H^2" }, { "math_id": 99, "text": "u \\in H^2(\\Omega)" }, { "math_id": 100, "text": "Lu=f" }, { "math_id": 101, "text": "\\Omega \\subset \\mathbb{R}^n" }, { "math_id": 102, "text": "f \\in H^{k-2}(\\Omega)" }, { "math_id": 103, "text": "k\\geq 2" }, { "math_id": 104, "text": "u\\in H^k(\\Omega)" }, { "math_id": 105, "text": "H^k(\\Omega)" }, { "math_id": 106, "text": "C^m(\\bar \\Omega)" }, { "math_id": 107, "text": "0 \\leq m < k-n/2" }, { "math_id": 108, "text": "H^1\\subset L^2" }, { "math_id": 109, "text": "S : f \\rightarrow u" }, { "math_id": 110, "text": "L^2(\\Omega)" }, { "math_id": 111, "text": "u_1, u_2, \\dots \\in H^1(\\Omega)" }, { "math_id": 112, "text": "\\lambda_1,\\lambda_2,\\dots \\in \\mathbb{R}" }, { "math_id": 113, "text": "Su_k = \\lambda_k u_k, k=1,2,\\dots," }, { "math_id": 114, "text": "\\lambda_k \\rightarrow 0" }, { "math_id": 115, "text": "k \\rightarrow \\infty" }, { "math_id": 116, "text": "\\lambda_k \\gneqq 0\\;\\;\\forall k" }, { "math_id": 117, "text": "\\int_\\Omega u_j u_k = 0" }, { "math_id": 118, "text": "j \\neq k" }, { "math_id": 119, "text": "\\int_\\Omega u_j u_j = 1" }, { "math_id": 120, "text": "j=1,2,\\dots\\,." }, { "math_id": 121, "text": "u=\\sum_{k=1}^\\infty \\hat u(k) u_k" }, { "math_id": 122, "text": "\\hat u(k) = \\lambda_k \\hat f(k) ,\\;\\;k=1,2,\\dots" }, { "math_id": 123, "text": "\\hat f(k) = \\int_{\\Omega} f(x) u_k(x) \\, dx." }, { "math_id": 124, "text": "L^2" }, { "math_id": 125, "text": "u-u_{xx}-u_{yy}=f(x,y)=xy" }, { "math_id": 126, "text": "(0,1)\\times(0,1)," }, { "math_id": 127, "text": "u(x,0)=u(x,1)=u(0,y)=u(1,y)=0 \\;\\;\\forall (x,y)\\in(0,1)\\times(0,1)" }, { "math_id": 128, "text": "u_{jk}(x,y)=\\sin(\\pi jx)\\sin(\\pi ky)" }, { "math_id": 129, "text": "j,k\\in \\mathbb{N}" }, { "math_id": 130, "text": "\\lambda_{jk}={ 1 \\over 1+\\pi^2 j^2+\\pi^2 k^2 }." 
}, { "math_id": 131, "text": "g(x)=x" }, { "math_id": 132, "text": "\\hat g(n) = { (-1)^{n+1} \\over \\pi n }" }, { "math_id": 133, "text": "\\hat f(j,k) = { (-1)^{j+k+1} \\over \\pi^2 jk }" }, { "math_id": 134, "text": "u(x,y) = \\sum_{j,k=1}^\\infty { (-1)^{j+k+1} \\over \\pi^2 jk (1+\\pi^2 j^2+\\pi^2 k^2) } \\sin(\\pi jx) \\sin (\\pi ky)." }, { "math_id": 135, "text": "u \\in C^2(\\Omega) \\cap C^1(\\bar \\Omega)" }, { "math_id": 136, "text": "c(x)=0\\;\\forall x\\in\\Omega" }, { "math_id": 137, "text": "Lu \\leq 0" }, { "math_id": 138, "text": "\\max_{x \\in \\bar \\Omega} u(x) = \\max_{x \\in \\partial \\Omega} u(x)" }, { "math_id": 139, "text": "u(x) \\lneqq \\max_{y \\in \\partial \\Omega} u(y)" }, { "math_id": 140, "text": "x \\in \\Omega" } ]
https://en.wikipedia.org/wiki?curid=10468961
10469923
Centrifugal fan
Mechanical fan that forces fluid to move radially outward A centrifugal fan is a mechanical device for moving air or other gases in a direction at an angle to the incoming fluid. Centrifugal fans often contain a ducted housing to direct outgoing air in a specific direction or across a heat sink; such a fan is also called a blower, blower fan, or squirrel-cage fan (because it looks like a hamster wheel). Tiny ones used in computers are sometimes called biscuit blowers. These fans move air from the rotating inlet of the fan to an outlet. They are typically used in ducted applications to either draw air through ductwork/heat exchanger, or push air through similar impellers. Compared to standard axial fans, they can provide similar air movement from a smaller fan package, and overcome higher resistance in air streams. Centrifugal fans use the kinetic energy of the impellers to move the air stream, which in turn moves against the resistance caused by ducts, dampers and other components. Centrifugal fans displace air radially, changing the direction (typically by 90°) of the airflow. They are sturdy, quiet, reliable, and capable of operating over a wide range of conditions. Centrifugal fans are, like axial fans, constant-volume devices, meaning that, at a constant fan speed, a centrifugal fan moves a relatively constant volume of air rather than a constant mass. This means that the air velocity in a system is fixed, but the actual mass of air flowing will vary based on the density of the air. Variations in density can be caused by changes in incoming air temperature and elevation above sea level, making these fans unsuitable for applications where a constant mass of air is required to be provided. Centrifugal fans are not positive-displacement devices and centrifugal fans have certain advantages and disadvantages when contrasted with positive-displacement blowers: centrifugal fans are more efficient, whereas positive-displacement blowers may have a lower capital cost, and are capable of achieving much higher compression ratios. Centrifugal fans are usually compared to axial fans for residential, industrial, and commercial applications. Axial fans typically operate at higher volumes, operate at lower static pressures, and have higher efficiency. Therefore axial fans are usually used for high volume air movement, such as warehouse exhaust or room circulation, while centrifugal fans are used to move air in ducted applications such as a house or typical office environment. The centrifugal fan has a drum shape composed of a number of fan blades mounted around a hub. As shown in the animated figure, the hub turns on a driveshaft mounted in bearings in the fan housing. The gas enters from the side of the fan wheel, turns 90 degrees and accelerates due to centrifugal force as it flows over the fan blades and exits the fan housing. History. The earliest mention of centrifugal fans was in 1556 by Georg Pawer (Latin: Georgius Agricola) in his book "De Re Metallica", where he shows how such fans were used to ventilate mines. Thereafter, centrifugal fans gradually fell into disuse. It wasn't until the early decades of the nineteenth century that interest in centrifugal fans revived. In 1815 the Marquis de Chabannes advocated the use of a centrifugal fan and took out a British patent in the same year. In 1827, Edwin A. Stevens of Bordentown, New Jersey, installed a fan for blowing air into the boilers of the steamship "North America". 
Similarly, in 1832, the Swedish-American engineer John Ericsson used a centrifugal fan as blower on the steamship "Corsair". A centrifugal fan was invented by Russian military engineer Alexander Sablukov in 1832, and was used both in the Russian light industry (such as sugar making) and abroad. One of the most important developments for the mining industry was the Guibal fan, which was patented in Belgium in 1862 by the French engineer . The Guibal fan had a spiral case surrounding the fan blades, as well as a flexible shutter to control the escape velocity, which made it far superior to previous open-fan designs and led to the possibility of mining at great depths. Such fans were used extensively for mine ventilation throughout Britain. Construction. The main parts of a centrifugal fan are: Other components used may include bearings, couplings, impeller locking device, fan discharge casing, shaft seal plates etc. Drive mechanisms. The fan drive determines the speed of the fan wheel (impeller) and the extent to which this speed can be varied. There are two basic types of fan drives. Direct. The fan wheel can be linked directly to the shaft of an electric motor. This means that the fan wheel speed is identical to the motor's rotational speed. Direct drive is the most efficient form of fan drive since there are no losses converting from the motors rotational speed to the fan's. Some electronics manufacturers have made centrifugal fans with external rotor motors (the stator is inside the rotor), and the rotor is directly mounted on the fan wheel (impeller). Belt. A set of sheaves is mounted on the motor shaft and the fan wheel shaft, and a belt transmits the mechanical energy from the motor to the fan. The fan wheel speed depends upon the ratio of the diameter of the motor sheave to the diameter of the fan wheel sheave. Fan wheel speeds in belt-driven fans are fixed unless the belt(s) slip. Belt slippage can reduce the fan wheel speed by several hundred revolutions per minute (RPM). Belts also introduce an additional maintenance item Bearings. Bearings are an important part of a fan. Sleeve-ring bearings are used for smaller fans such as computer fans, while larger residential and commercial applications use ball bearings. Industrial applications may used specialized bearings such as water-cooled sleeve bearings for exhausting hot gasses. Many turbo blowers use either an air bearing or a magnetic bearing. Magnetic bearing blowers provide low transmitted vibration, high-speed levitation, low power consumption, high reliability, oil-free operation and tolerance to particle contaminants in the air stream. Speed control. Fan speed for modern fans is done through Variable Frequency Drives that directly control the motors speed, ramping up and down the speed of the motor to different airflows. The amount of air moved is non-linear with the motor speed, and must be individually balanced for each fan installation. Typically this is done at time of install by testing and balancing contractors, although some modern systems directly monitor airflow with instruments near the outlet, and can use the feedback to vary the motor speed. Older fan installations would use inlet or outlet vanes - metal flaps that could be adjusted open and closed on the outlet of the fan. As the vanes closed they would raise the pressure and lower the airflow from the fan. This is less efficient than a VFD, as the VFD directly reduces electricity used by the fan motor, while vanes worked with a constant motor speed. Fan blades. 
The fan wheel consists of a hub with a number of fan blades attached. The fan blades on the hub can be arranged in three different ways: forward-curved, backward-curved or radial. Forward-curved. Forward-curved blades, as in Figure 3(a), curve in the direction of the fan wheel's rotation. These are especially sensitive to particulates and commonly are only specified for clean-air applications such as air conditioning. Forward-curved fans are typically used in applications where the static pressure is too high for a vane axial fan or the smaller size of a centrifugal fan is required, but the noise characteristics of a backwards curved fan are disruptive for the space. They are capable of providing lower air flow with a higher increase in static pressure compared to a vane axial fan. They are typically used in fan coil units. They are less efficient than backwards curved fans. Backward-curved. Backward-curved blades, as in Figure 3(b), curve against the direction of the fan wheel's rotation. Smaller blowers may have backward-inclined blades, which are straight, not curved. Larger backward-inclined/-curved blowers have blades whose backward curvatures mimic that of an airfoil cross section, but both designs provide good operating efficiency with relatively economical construction techniques. These types of blowers are designed to handle gas streams with low to moderate particulate loadings . They can be easily fitted with wear protection but certain blade curvatures can be prone to solids build-up.. Backward curved wheels are often heavier than corresponding forward-curved equivalents, as they run at higher speeds and require stronger construction. Backward curved fans can have a high range of specific speeds but are most often used for medium specific speed applications—high pressure, medium flow applications such as in air handling units. Backward-curved fans are more energy efficient than radial blade and forward curved fans and so, for high power applications may be a suitable alternative to the lower cost radial bladed fan. Straight radial. Radial blowers, as in Figure 3(c), have wheels whose blades extend straight out from the centre of the hub. Radial bladed wheels are often used on particulate-laden gas streams because they are the least sensitive to solid build-up on the blades, but they are often characterized by greater noise output. High speeds, low volumes, and high pressures are common with radial blowers, and are often used in vacuum cleaners, pneumatic material conveying systems, and similar processes. Principles of operation. The centrifugal fan uses the centrifugal power supplied from the rotation of impellers to increase the kinetic energy of air/gases. When the impellers rotate, the gas particles near the impellers are thrown off from the impellers, then move into the fan casing. As a result, the kinetic energy of gas is measured as pressure because of the system resistance offered by the casing and duct. The gas is then guided to the exit via outlet ducts. After the gas is thrown-off, the gas pressure in the middle region of the impellers decreases. The gas from the impeller eye rushes in to normalize this. This cycle repeats and therefore the gas can be continuously transferred. Velocity triangle. A diagram called a velocity triangle helps us in determining the flow geometry at the entry and exit of a blade. A minimum number of data are required to draw a velocity triangle at a point on blade. 
The components of velocity vary at different points on the blade due to changes in the direction of flow. Hence an infinite number of velocity triangles are possible for a given blade. To describe the flow using only two velocity triangles, we define mean values of velocity and their direction. The velocity triangle of any turbomachine has three components: the blade (peripheral) velocity "U", the relative velocity of the fluid with respect to the blade "V""r", and the absolute velocity of the fluid "V". These velocities are related by the triangle law of vector addition: formula_0 This relatively simple equation is used frequently while drawing the velocity diagram. The velocity diagrams for the forward and backward facing blades shown are drawn using this law. The angle α is the angle made by the absolute velocity with the axial direction and angle β is the angle made by the blade with respect to the axial direction. Difference between fans and blowers. The property that distinguishes a centrifugal fan from a blower is the pressure ratio it can achieve. In general, a blower can produce a higher pressure ratio. Per the American Society of Mechanical Engineers (ASME), the specific ratio – the ratio of the discharge pressure over the suction pressure – is used for defining fans, blowers and compressors. Fans have a specific ratio of up to 1.11, blowers from 1.11 to 1.20 and compressors have more than 1.20. Typically, due to the higher pressures involved, blowers and compressors have much sturdier builds than fans. Ratings. Ratings found in centrifugal fan performance tables and curves are based on standard air SCFM. Fan manufacturers define standard air as clean, dry air with a density of 0.075 pounds mass per cubic foot (1.2 kg/m3), with the barometric pressure at sea level of 29.92 inches of mercury (101.325 kPa) and a temperature of 70 °F (21 °C). Selecting a centrifugal fan to operate at conditions other than standard air requires adjustment to both static pressure and power. At higher-than-standard elevation (above sea level) and higher-than-standard temperature, air density is lower than standard density. Air density corrections must account for centrifugal fans that are specified for continuous operation at higher temperatures. The centrifugal fan displaces a constant volume of air in a given system regardless of air density. When a centrifugal fan is specified for a given CFM and static pressure at conditions other than standard, an air density correction factor must be applied to select the proper size fan to meet the new condition. If, for example, the air at the operating conditions weighs only 80% of standard air, the centrifugal fan creates less pressure and requires less power. To get the actual pressure required at these non-standard conditions, the designer must multiply the pressure at standard conditions by an air density correction factor of 1.25 (i.e., 1.0/0.8) to get the system to operate correctly. To get the actual power at these conditions, the designer must divide the power at standard conditions by the air density correction factor. Air Movement and Control Association (AMCA). The centrifugal fan performance tables provide the fan RPM and power requirements for the given CFM and static pressure at standard air density. When the centrifugal fan performance is not at standard conditions, the performance must be converted to standard conditions before entering the performance tables. Centrifugal fans rated by the Air Movement and Control Association (AMCA) are tested in laboratories with test setups that simulate installations that are typical for that type of fan. Usually they are tested and rated as one of four standard installation types as designated in AMCA Standard 210. 
AMCA Standard 210 defines uniform methods for conducting laboratory tests on housed fans to determine airflow rate, pressure, power and efficiency, at a given speed of rotation. The purpose of AMCA Standard 210 is to define exact procedures and conditions of fan testing so that ratings provided by various manufacturers are on the same basis and may be compared. For this reason, fans must be rated in standardized SCFM. Losses. Centrifugal fans suffer efficiency losses in both stationary and moving parts, increasing the energy input required for a given level of airflow performance. Impeller entry. Flow at the intake and its turning from axial to radial direction causes losses at the intake. Friction and flow separation cause impeller blade losses since there is change in incidence angle. These impeller blade losses are also included in the category. Leakage. Leakage of some air and disturbance in the main flow field is caused due to the clearance provided between the rotating periphery of the impeller and the casing at the entry. Diffuser and volute. Friction and flow separation also causes losses in the diffuser. Further losses due to incidence occur if the device is working beyond its design conditions. Flow from the impeller or diffuser expands in the volute, which has a larger cross section leading to the formation of eddy, which in turn reduces pressure head. Friction and flow separation losses also occur due to the volute passage. Disc friction. Viscous drag on the back surface of the impeller disc causes disc friction losses. In literature. In Walter Miller's science-fiction novel "A Canticle for Leibowitz" (1959), an order of monks in a post-apocalyptic 26th century safeguard an electrical blueprint for a "squirrel cage" as a holy relic, though puzzled over how to reveal the "squirrel". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
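The Ratings discussion above amounts to a simple pair of corrections. The following minimal Python sketch is only an illustration of that procedure (the function and variable names are hypothetical, and the pressure and power figures are made-up placeholders; only the 0.075 lb/ft3 standard density and the 80%/1.25 example factor come from the text):

def air_density_correction_factor(standard_density, actual_density):
    # Ratio of standard air density to the density at the operating conditions.
    return standard_density / actual_density

factor = air_density_correction_factor(0.075, 0.060)  # 0.060 lb/ft3 is 80% of standard air
required_pressure = 2.0    # illustrative static pressure at standard conditions
table_power = 1.5          # illustrative power read from the standard-air table
pressure_for_selection = required_pressure * factor   # multiply pressure by the factor
actual_power = table_power / factor                   # divide power by the factor
print(factor, pressure_for_selection, actual_power)   # about 1.25, 2.5 and 1.2

As in the text, the corrected pressure is used to enter the standard-air performance tables, while the actual power drawn in the lighter air is lower than the tabulated value.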
[ { "math_id": 0, "text": "V = U + V_r" } ]
https://en.wikipedia.org/wiki?curid=10469923
10469986
Optimal facility location
The study of facility location problems (FLP), also known as location analysis, is a branch of operations research and computational geometry concerned with the optimal placement of facilities to minimize transportation costs while considering factors like avoiding placing hazardous materials near housing, and competitors' facilities. The techniques also apply to cluster analysis. Minimum facility location. A simple facility location problem is the Weber problem, in which a single facility is to be placed, with the only optimization criterion being the minimization of the weighted sum of distances from a given set of point sites. More complex problems considered in this discipline include the placement of multiple facilities, constraints on the locations of facilities, and more complex optimization criteria. In a basic formulation, the facility location problem consists of a set of potential facility sites "L" where a facility can be opened, and a set of demand points "D" that must be serviced. The goal is to pick a subset "F" of facilities to open, to minimize the sum of distances from each demand point to its nearest facility, plus the sum of opening costs of the facilities. The facility location problem on general graphs is NP-hard to solve optimally, by reduction from (for example) the set cover problem. A number of approximation algorithms have been developed for the facility location problem and many of its variants. Without assumptions on the set of distances between clients and sites (in particular, without assuming that the distances satisfy the triangle inequality), the problem is known as non-metric facility location and can be approximated to within a factor O(log "n"). This factor is tight, via an approximation-preserving reduction from the set cover problem. If we assume distances between clients and sites are undirected and satisfy the triangle inequality, we are talking about a metric facility location (MFL) problem. The MFL is still NP-hard and hard to approximate within factor better than 1.463. The currently best known approximation algorithm achieves approximation ratio of 1.488. Minimax facility location. The minimax facility location problem seeks a location which minimizes the maximum distance to the sites, where the distance from one point to the sites is the distance from the point to its nearest site. A formal definition is as follows: Given a point set P ⊂ formula_0, find a point set S ⊂ formula_0 with |S| = "k", so that maxp ∈ P(minq ∈ S(d(p, q))) is minimized. In the case of the Euclidean metric for "k" = 1, it is known as the smallest enclosing sphere problem or 1-center problem. Its study dates back at least to 1860. NP hardness. It has been proven that the exact solution of the "k"-center problem is NP-hard. Approximating the problem was also found to be NP-hard when the error is small. The error level in the approximation algorithm is measured as an approximation factor, which is defined as the ratio between the approximation and the optimum. It has been proved that the "k"-center problem approximation is NP-hard when the approximation factor is less than 1.822 (dimension = 2) or 2 (dimension > 2). Algorithms. Exact solver. There exist algorithms to produce exact solutions to this problem. One exact solver runs in time formula_1. 1 + "ε" approximation. The 1 + "ε" approximation is to find a solution with approximation factor no greater than 1 + "ε". This approximation is NP-hard as "ε" is arbitrary. 
One approach based on the coreset concept is proposed with execution complexity of formula_2. As an alternative, another algorithm also based on core sets is available. It runs in formula_3. The author claims that the running time is much less than the worst case and thus it's possible to solve some problems when "k" is small (say "k" &lt; 5). Farthest-point clustering For the hardness of the problem, it's impractical to get an exact solution or precise approximation. Instead, an approximation with factor = 2 is widely used for large "k" cases. The approximation is referred to as the farthest-point clustering (FPC) algorithm, or farthest-first traversal. The algorithm is quite simple: pick any point from the set as one center; search for the farthest point from remaining set as another center; repeat the process until "k" centers are found. It is easy to see that this algorithm runs in linear time. As approximation with factor less than 2 is proved to be NP hard, FPC was regarded as the best approximation one can find. As per the performance of execution, the time complexity is later improved to O("n" log "k") with box decomposition technique. Maxmin facility location. The maxmin facility location or obnoxious facility location problem seeks a location which maximizes the minimum distance to the sites. In the case of the Euclidean metric, it is known as the largest empty sphere problem. The planar case (largest empty circle problem) may be solved in optimal time Θ("n" log n). Integer programming formulations. Facility location problems are often solved as integer programs. In this context, facility location problems are often posed as follows: suppose there are formula_4 facilities and formula_5 customers. We wish to choose (1) which of the formula_4 facilities to open, and (2) which (open) facilities to use to supply the formula_5 customers, in order to satisfy some fixed demand at minimum cost. We introduce the following notation: let formula_6 denote the (fixed) cost of opening facility formula_7, for formula_8. Let formula_9denote the cost to ship a product from facility formula_7 to customer formula_10 for formula_8 and formula_11. Let formula_12 denote the demand of customer formula_10 for formula_11. Further suppose that each facility has a maximum output. Let formula_13 denote the maximum amount of product that can be produced by facility formula_7, that is, let formula_13 denote the "capacity" of facility formula_7. The remainder of this section follows Capacitated facility location. In our initial formulation, introduce a binary variable formula_14 for formula_8, where formula_15 if facility formula_7 is open, and formula_16 otherwise. Further introduce the variable formula_17 for formula_8 and formula_11 which represents the fraction of the demand formula_12 filled by facility formula_7. The so-called capacitated facility location problem is then given byformula_18 Note that the second set of constraints ensures that if formula_16, that is, facility formula_7 isn't open, then formula_19 for all formula_10, that is, no demand for any customer can be filled from facility formula_7. Uncapacitated facility location. A common case of the capacitated facility location problem above is the case when formula_20 for all formula_8. In this case, it is always optimal to satisfy all of the demand from customer formula_10 from the nearest open facility. 
Because of this, we may replace the continuous variables formula_17 from above with the binary variables formula_21, where formula_22 if customer formula_10 is supplied by facility formula_7, and formula_23 otherwise. The uncapacitated facility location problem is then given byformula_24 where formula_25 is a constant chosen to be suitably large. The choice of formula_25 can affect computation results—the best choice in this instance is obvious: take formula_26. Then, if formula_15, any choice of the formula_21 will satisfy the second set of constraints. Another formulation possibility for the uncapacitated facility location problem is to "disaggregate" the capacity constraints (the big-formula_25 constraints). That is, replace the constraintsformula_27with the constraintsformula_28In practice, this new formulation performs significantly better, in the sense that it has a tighter Linear programming relaxation than the first formulation. Notice that summing the new constraints together yields the original big-formula_25 constraints. In the capacitated case, these formulations are not equivalent. More information about the uncapacitated facility location problem can be found in Chapter 3 of "Discrete location theory". Applications. Healthcare. In healthcare, incorrect facility location decisions have a serious impact on the community beyond simple cost and service metrics; for instance, hard-to-access healthcare facilities are likely to be associated with increased morbidity and mortality. From this perspective, facility location modeling for healthcare is more critical than similar modeling for other areas. Solid waste management. Municipal solid waste management still remains a challenge for developing countries because of increasing waste production and high costs associated with waste management. Through the formulation and exact resolution of a facility location problem it is possible to optimize the location of landfills for waste disposal. Clustering. A particular subset of cluster analysis problems can be viewed as facility location problems. In a centroid-based clustering problem, the objective is to partition formula_29 data points (elements of a common metric space) into equivalence classes—often called "colors"—such that points of the same color are close to one another (equivalently, such that points of different colors are far from one another). To see how one might view (read "transform" or "reduce") a centroid-based clustering problem as a (metric) facility location problem, view each data point in the former as a demand point in the latter. Suppose that the data to be clustered are elements of a metric space formula_30 (e.g. let formula_30 be formula_31-dimensional Euclidean space for some fixed formula_31). In the facility location problem that we are constructing, we permit facilities to be placed at any point within this metric space formula_30; this defines the set of allowed facility locations formula_32. We define the costs formula_33 to be the pairwise distances between location-demand point pairs (e.g., see metric k-center). In a centroid-based clustering problem, one partitions the data into formula_34 equivalence classes (i.e. colors) each of which has a centroid. Let us see how a solution to our constructed facility location problem also achieves such a partition. A feasible solution is a non-empty subset formula_35 of formula_34 locations. 
These locations in our facility location problem comprise a set of formula_34 centroids in our centroid-based clustering problem. Now, assign each demand point formula_36 to the location formula_37 that minimizes its servicing-cost; that is, assign the data point formula_36 to the centroid formula_38 (break ties arbitrarily). This achieves the partitioning provided that the facility location problem's costs formula_33 are defined such that they are the images of the centroid-based clustering problem's distance function. The popular algorithms textbook "Algorithm Design" provides a related problem-description and an approximation algorithm. The authors refer to the metric facility location problem (i.e. the centroid-based clustering problem or the metric formula_34-center problem) as the "center selection problem", thereby growing the list of synonyms. Furthermore, see that in our above definition of the facility location problem that the objective function formula_39 is general. Specific choices of formula_39 yield different variants of the facility location problem, and hence different variants of the centroid-based clustering problem. For example, one might choose to minimize the sum of distances from each location to each of its assigned demand points (à la the Weber problem), or one might elect to minimize the maximum of all such distances (à la the 1-center problem). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
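The farthest-point clustering procedure described in the Minimax facility location section above is simple enough to sketch directly. The following minimal Python sketch is only an illustration (the function name and the use of Euclidean distance via math.dist are assumptions, not taken from the cited literature): pick an arbitrary point as the first center, then repeatedly add the point farthest from the centers chosen so far, which gives the factor-2 approximation for the "k"-center problem.

from math import dist  # Euclidean distance between two points (Python 3.8+)

def farthest_point_clustering(points, k):
    # Greedy 2-approximation for the k-center (minimax facility location) problem.
    centers = [points[0]]  # any point can serve as the first center
    nearest = [dist(p, centers[0]) for p in points]  # distance to the closest chosen center
    while len(centers) < k:
        i = max(range(len(points)), key=lambda j: nearest[j])  # farthest remaining point
        centers.append(points[i])
        nearest = [min(nearest[j], dist(points[j], points[i])) for j in range(len(points))]
    return centers

print(farthest_point_clustering([(0, 0), (1, 0), (10, 0), (10, 1), (5, 5)], 2))

Each of the "k" rounds scans the point set once, which is the linear-time behaviour mentioned above; the O("n" log "k") refinement with box decomposition is not reproduced here.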
[ { "math_id": 0, "text": "\\mathbb{R}^d" }, { "math_id": 1, "text": "n^{O(\\sqrt{k})}" }, { "math_id": 2, "text": "O(2^{O(k \\log k/\\varepsilon^2)}dn)" }, { "math_id": 3, "text": "O(k^n)" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "m" }, { "math_id": 6, "text": "f_i" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "i=1,\\dots,n" }, { "math_id": 9, "text": "c_{ij}" }, { "math_id": 10, "text": "j" }, { "math_id": 11, "text": "j=1,\\dots,m" }, { "math_id": 12, "text": "d_j" }, { "math_id": 13, "text": "u_i" }, { "math_id": 14, "text": "x_i" }, { "math_id": 15, "text": "x_i=1" }, { "math_id": 16, "text": "x_i=0" }, { "math_id": 17, "text": "y_{ij}" }, { "math_id": 18, "text": "\\begin{array}{rl}\n\\min & \\displaystyle\\sum_{i=1}^n\\sum_{j=1}^mc_{ij} d_j y_{ij}+\\sum_{i=1}^nf_ix_i \\\\\n\\text{s.t.} & \\displaystyle\\sum_{i=1}^ny_{ij}=1 \\text{ for all }j=1,\\dots,m \\\\\n& \\displaystyle \\sum_{j=1}^md_jy_{ij}\\leqslant u_ix_i\\text{ for all }i=1\\dots,n \\\\\n&y_{ij}\\geqslant0\\text{ for all }i=1,\\dots,n \\text{ and }j=1,\\dots,m\\\\\n&x_i\\in\\{0,1\\}\\text{ for all } i=1,\\dots,n\n\\end{array}" }, { "math_id": 19, "text": "y_{ij}=0" }, { "math_id": 20, "text": "u_i=+\\infty" }, { "math_id": 21, "text": "z_{ij}" }, { "math_id": 22, "text": "z_{ij}=1" }, { "math_id": 23, "text": "z_{ij}=0" }, { "math_id": 24, "text": "\\begin{array}{rl}\n\\min & \\displaystyle\\sum_{i=1}^n\\sum_{j=1}^mc_{ij} d_j z_{ij}+\\sum_{i=1}^nf_ix_i \\\\\n\\text{s.t.} & \\displaystyle\\sum_{i=1}^nz_{ij}=1 \\text{ for all }j=1,\\dots,m \\\\\n& \\displaystyle \\sum_{j=1}^mz_{ij}\\leqslant Mx_i\\text{ for all }i=1\\dots,n \\\\\n&z_{ij}\\in\\{0,1\\}\\text{ for all }i=1,\\dots,n \\text{ and }j=1,\\dots,m\\\\\n&x_i\\in\\{0,1\\}\\text{ for all } i=1,\\dots,n\n\\end{array}" }, { "math_id": 25, "text": "M" }, { "math_id": 26, "text": "M=m" }, { "math_id": 27, "text": "\\sum_{j=1}^{m}z_{ij}\\leqslant Mx_i\\text{ for all }i=1,\\dots,n" }, { "math_id": 28, "text": "z_{ij}\\leqslant x_i\\text{ for all }i=1,\\dots,n \\text{ and }j=1,\\dots,m" }, { "math_id": 29, "text": " n " }, { "math_id": 30, "text": " M " }, { "math_id": 31, "text": " p " }, { "math_id": 32, "text": " L " }, { "math_id": 33, "text": " c_{\\ell, d} " }, { "math_id": 34, "text": " k " }, { "math_id": 35, "text": " L' \\subseteq L " }, { "math_id": 36, "text": " d " }, { "math_id": 37, "text": " \\ell^* " }, { "math_id": 38, "text": " \\ell^* := \n\\mathrm{arg\\,min}_{\\ell \\in L} \\{c_{\\ell, d}\\} " }, { "math_id": 39, "text": " f " } ]
https://en.wikipedia.org/wiki?curid=10469986
10469987
Schubert variety
In algebraic geometry, a Schubert variety is a certain subvariety of a Grassmannian, formula_0 of formula_1-dimensional subspaces of a vector space formula_2, usually with singular points. Like the Grassmannian, it is a kind of moduli space, whose elements satisfy conditions giving lower bounds to the dimensions of the intersections of its elements formula_3, with the elements of a specified complete flag. Here formula_2 may be a vector space over an arbitrary field, but most commonly this taken to be either the real or the complex numbers. A typical example is the set formula_4 of formula_5-dimensional subspaces formula_6 of a 4-dimensional space formula_2 that intersect a fixed (reference) 2-dimensional subspace formula_7 nontrivially. formula_8 Over the real number field, this can be pictured in usual "xyz"-space as follows. Replacing subspaces with their corresponding projective spaces, and intersecting with an affine coordinate patch of formula_9, we obtain an open subset "X"° ⊂ "X". This is isomorphic to the set of all lines "L" (not necessarily through the origin) which meet the "x"-axis. Each such line "L" corresponds to a point of "X"°, and continuously moving "L" in space (while keeping contact with the "x"-axis) corresponds to a curve in "X"°. Since there are three degrees of freedom in moving "L" (moving the point on the "x"-axis, rotating, and tilting), "X" is a three-dimensional real algebraic variety. However, when "L" is equal to the "x"-axis, it can be rotated or tilted around any point on the axis, and this excess of possible motions makes "L" a singular point of "X". More generally, a Schubert variety in formula_0 is defined by specifying the minimal dimension of intersection of a formula_1-dimensional subspace formula_3 with each of the spaces in a fixed reference complete flag formula_10, where formula_11. (In the example above, this would mean requiring certain intersections of the line "L" with the "x"-axis and the "xy"-plane.) In even greater generality, given a semisimple algebraic group formula_12 with a Borel subgroup formula_13 and a standard parabolic subgroup formula_14, it is known that the homogeneous space formula_15, which is an example of a flag variety, consists of finitely many formula_13-orbits, which may be parametrized by certain elements formula_16 of the Weyl group formula_17. The closure of the formula_13-orbit associated to an element formula_16 is denoted formula_18 and is called a Schubert variety in formula_15. The classical case corresponds to formula_19, with formula_20, the formula_1th maximal parabolic subgroup of formula_21, so that formula_22 is the Grassmannian of formula_1-planes in formula_23. Significance. Schubert varieties form one of the most important and best studied classes of singular algebraic varieties. A certain measure of singularity of Schubert varieties is provided by Kazhdan–Lusztig polynomials, which encode their local Goresky–MacPherson intersection cohomology. The algebras of regular functions on Schubert varieties have deep significance in algebraic combinatorics and are examples of algebras with a straightening law. (Co)homology of the Grassmannian, and more generally, of more general flag varieties, has a basis consisting of the (co)homology classes of Schubert varieties, or Schubert cycles. The study of the intersection theory on the Grassmannian was initiated by Hermann Schubert and continued by Zeuthen in the 19th century under the heading of enumerative geometry. 
This area was deemed by David Hilbert important enough to be included as the fifteenth of his celebrated 23 problems. The study continued in the 20th century as part of the general development of algebraic topology and representation theory, but accelerated in the 1990s beginning with the work of William Fulton on the degeneracy loci and Schubert polynomials, following up on earlier investigations of Bernstein–Gelfand–Gelfand and Demazure in representation theory in the 1970s, Lascoux and Schützenberger in combinatorics in the 1980s, and Fulton and MacPherson in intersection theory of singular algebraic varieties, also in the 1980s.
[ { "math_id": 0, "text": "\\mathbf{Gr}_k(V)" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "V" }, { "math_id": 3, "text": "w\\subset V" }, { "math_id": 4, "text": "X" }, { "math_id": 5, "text": "2" }, { "math_id": 6, "text": "w \\subset V" }, { "math_id": 7, "text": "V_2" }, { "math_id": 8, "text": "X \\ =\\ \\{w\\subset V \\mid \\dim(w)=2,\\, \\dim(w \\cap V_2)\\ge 1\\}." }, { "math_id": 9, "text": "\\mathbb{P}(V)" }, { "math_id": 10, "text": "V_1\\subset V_2\\subset \\cdots \\subset V_n=V" }, { "math_id": 11, "text": "\\dim V_j=j" }, { "math_id": 12, "text": "G" }, { "math_id": 13, "text": "B" }, { "math_id": 14, "text": "P" }, { "math_id": 15, "text": "G/P" }, { "math_id": 16, "text": "w\\in W" }, { "math_id": 17, "text": "W" }, { "math_id": 18, "text": "X_{w}" }, { "math_id": 19, "text": "G=SL_n" }, { "math_id": 20, "text": "P=P_k" }, { "math_id": 21, "text": " SL_n" }, { "math_id": 22, "text": "G/P = \\mathbf{Gr}_k(\\mathbf{C}^n)" }, { "math_id": 23, "text": " \\mathbf{C}^n" } ]
https://en.wikipedia.org/wiki?curid=10469987
10470079
Standard conjectures on algebraic cycles
In mathematics, the standard conjectures about algebraic cycles are several conjectures describing the relationship of algebraic cycles and Weil cohomology theories. One of the original applications of these conjectures, envisaged by Alexander Grothendieck, was to prove that his construction of pure motives gave an abelian category that is semisimple. Moreover, as he pointed out, the standard conjectures also imply the hardest part of the Weil conjectures, namely the "Riemann hypothesis" conjecture that remained open at the end of the 1960s and was proved later by Pierre Deligne; for details on the link between Weil and standard conjectures, see . The standard conjectures remain open problems, so that their application gives only conditional proofs of results. In quite a few cases, including that of the Weil conjectures, other methods have been found to prove such results unconditionally. The classical formulations of the standard conjectures involve a fixed Weil cohomology theory H. All of the conjectures deal with "algebraic" cohomology classes, which means a morphism on the cohomology of a smooth projective variety "H" ∗("X") → "H" ∗("X") induced by an algebraic cycle with rational coefficients on the product "X" × "X" via the "cycle class map," which is part of the structure of a Weil cohomology theory. Conjecture A is equivalent to Conjecture B (see , p. 196), and so is not listed. Lefschetz type Standard Conjecture (Conjecture B). One of the axioms of a Weil theory is the so-called hard Lefschetz theorem (or axiom): Begin with a fixed smooth hyperplane section "W" = "H" ∩ "X", where X is a given smooth projective variety in the ambient projective space P "N" and H is a hyperplane. Then for "i" ≤ "n" = dim("X"), the Lefschetz operator "L" : "H i"("X") → "H" "i"+2("X"), which is defined by intersecting cohomology classes with W, gives an isomorphism "L""n"−"i" : "H i"("X") → "H" 2"n"−"i"("X"). Now, for "i" ≤ "n" define: Λ = ("L""n"−"i"+2)−1 ∘ "L" ∘ ("L""n"−"i") : "H i"("X") → "H" "i"−2("X") Λ = ("L""n"−"i") ∘ "L" ∘ ("L""n"−"i"+2)−1 : "H" 2"n"−"i"+2("X") → "H" 2"n"−"i"("X") The conjecture states that the Lefschetz operator (Λ) is induced by an algebraic cycle. Künneth type Standard Conjecture (Conjecture C). It is conjectured that the projectors "H" ∗("X") ↠ "Hi"("X") ↣ "H" ∗("X") are algebraic, i.e. induced by a cycle "π i" ⊂ "X" × "X" with rational coefficients. This implies that the motive of any smooth projective variety (and more generally, every pure motive) decomposes as formula_0 The motives formula_1 and formula_2 can always be split off as direct summands. The conjecture therefore immediately holds for curves. It was proved for surfaces by . have used the Weil conjectures to show the conjecture for algebraic varieties defined over finite fields, in arbitrary dimension. proved the Künneth decomposition for abelian varieties "A". refined this result by exhibiting a functorial Künneth decomposition of the Chow motive of "A" such that the "n"-multiplication on the abelian variety acts as formula_3 on the "i"-th summand formula_4. proved the Künneth decomposition for the Hilbert scheme of points in a smooth surface. Conjecture D (numerical equivalence vs. homological equivalence). Conjecture D states that numerical and homological equivalence agree. (It implies in particular the latter does not depend on the choice of the Weil cohomology theory). This conjecture implies the Lefschetz conjecture. 
If the Hodge standard conjecture holds, then the Lefschetz conjecture and Conjecture D are equivalent. This conjecture was shown by Lieberman for varieties of dimension at most 4, and for abelian varieties. The Hodge Standard Conjecture. The Hodge standard conjecture is modelled on the Hodge index theorem. It states the definiteness (positive or negative, according to the dimension) of the cup product pairing on primitive algebraic cohomology classes. If it holds, then the Lefschetz conjecture implies Conjecture D. In characteristic zero the Hodge standard conjecture holds, being a consequence of Hodge theory. In positive characteristic the Hodge standard conjecture is known for surfaces () and for abelian varieties of dimension 4 (). The Hodge standard conjecture is not to be confused with the "Hodge conjecture" which states that for smooth projective varieties over C, every rational ("p", "p")-class is algebraic. The Hodge conjecture implies the Lefschetz and Künneth conjectures and conjecture D for varieties over fields of characteristic zero. The Tate conjecture implies Lefschetz, Künneth, and conjecture D for ℓ-adic cohomology over all fields. Permanence properties of the standard conjectures. For two algebraic varieties "X" and "Y", has introduced a condition that "Y" is "motivated" by "X". The precise condition is that the motive of "Y" is (in André's category of motives) expressible starting from the motive of "X" by means of sums, summands, and products. For example, "Y" is motivated if there is a surjective morphism formula_5. If "Y" is not found in the category, it is "unmotivated" in that context. For smooth projective complex algebraic varieties "X" and "Y", such that "Y" is motivated by "X", the standard conjectures D (homological equivalence equals numerical), B (Lefschetz), the Hodge conjecture and also the generalized Hodge conjecture hold for "Y" if they hold for all powers of "X". This fact can be applied to show, for example, the Lefschetz conjecture for the Hilbert scheme of points on an algebraic surface. Relation to other conjectures. has shown that the (conjectural) existence of the so-called motivic t-structure on the triangulated category of motives implies the Lefschetz and Künneth standard conjectures B and C. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "h(X) = \\bigoplus_{i=0}^{2 dim(X)} h^i(X)." }, { "math_id": 1, "text": "h^0(X)" }, { "math_id": 2, "text": "h^{2 dim(X)}" }, { "math_id": 3, "text": "n^i" }, { "math_id": 4, "text": "h^i(A)" }, { "math_id": 5, "text": "X^n \\to Y" } ]
https://en.wikipedia.org/wiki?curid=10470079
1047111
HSAB theory
Chemical theory about acids and bases HSAB is an acronym for "hard and soft (Lewis) acids and bases". HSAB is widely used in chemistry for explaining the stability of compounds, reaction mechanisms and pathways. It assigns the terms 'hard' or 'soft', and 'acid' or 'base' to chemical species. 'Hard' applies to species which are small, have high charge states (the charge criterion applies mainly to acids, to a lesser extent to bases), and are weakly polarizable. 'Soft' applies to species which are big, have low charge states and are strongly polarizable. The theory is used in contexts where a qualitative, rather than quantitative, description would help in understanding the predominant factors which drive chemical properties and reactions. This is especially so in transition metal chemistry, where numerous experiments have been done to determine the relative ordering of ligands and transition metal ions in terms of their hardness and softness. HSAB theory is also useful in predicting the products of metathesis reactions. In 2005 it was shown that even the sensitivity and performance of explosive materials can be explained on the basis of HSAB theory. Ralph Pearson introduced the HSAB principle in the early 1960s as an attempt to unify inorganic and organic reaction chemistry. Theory. Essentially, the theory states that "soft" acids prefer to form bonds with "soft" bases, whereas "hard" acids prefer to form bonds with "hard" bases, all other factors being equal. It can also be said that hard acids bind strongly to hard bases and soft acids bind strongly to soft bases. The HSAB classification in the original work was largely based on equilibrium constants of Lewis acid/base reactions with a reference base for comparison. Borderline cases are also identified: borderline acids are trimethylborane, sulfur dioxide and ferrous Fe2+, cobalt Co2+, caesium Cs+ and lead Pb2+ cations. Borderline bases are: aniline, pyridine, nitrogen N2 and the azide, chloride, bromide, nitrate and sulfate anions. Generally speaking, acids and bases interact and the most stable interactions are hard–hard (ionogenic character) and soft–soft (covalent character). An attempt to quantify the 'softness' of a base consists in determining the equilibrium constant for the following equilibrium: BH + CH3Hg+ ⇌ H+ + CH3HgB where CH3Hg+ (methylmercury ion) is a very soft acid and H+ (proton) is a hard acid, which compete for B (the base to be classified). Some examples illustrating the effectiveness of the theory: Chemical hardness. In 1983 Pearson together with Robert Parr extended the qualitative HSAB theory with a quantitative definition of the chemical hardness (η) as being proportional to the second derivative of the total energy of a chemical system with respect to changes in the number of electrons at a fixed nuclear environment: formula_0. The factor of one-half is arbitrary and often dropped as Pearson has noted. An operational definition for the chemical hardness is obtained by applying a three-point finite difference approximation to the second derivative: formula_1 where "I" is the ionization potential and "A" the electron affinity. This expression implies that the chemical hardness is proportional to the band gap of a chemical system, when a gap exists. 
The first derivative of the energy with respect to the number of electrons is equal to the chemical potential, "μ", of the system, formula_2, from which an operational definition for the chemical potential is obtained from a finite difference approximation to the first order derivative as formula_3 which is equal to the negative of the electronegativity ("χ") definition on the Mulliken scale: "μ" = −"χ". The hardness and Mulliken electronegativity are related as formula_4, and in this sense hardness is a measure for resistance to deformation or change. Likewise a value of zero denotes maximum softness, where softness is defined as the reciprocal of hardness. In a compilation of hardness values only that of the hydride anion deviates. Another discrepancy noted in the original 1983 article are the apparent higher hardness of Tl3+ compared to Tl+. Modifications. If the interaction between acid and base in solution results in an equilibrium mixture the strength of the interaction can be quantified in terms of an equilibrium constant. An alternative quantitative measure is the heat (enthalpy) of formation of the Lewis acid-base adduct in a non-coordinating solvent. The ECW model is quantitative model that describes and predicts the strength of Lewis acid base interactions, -ΔH . The model assigned E and C parameters to many Lewis acids and bases. Each acid is characterized by an EA and a CA. Each base is likewise characterized by its own EB and CB. The E and C parameters refer, respectively, to the electrostatic and covalent contributions to the strength of the bonds that the acid and base will form. The equation is -ΔH = EAEB + CACB + W The W term represents a constant energy contribution for acid–base reaction such as the cleavage of a dimeric acid or base. The equation predicts reversal of acids and base strengths. The graphical presentations of the equation show that there is no single order of Lewis base strengths or Lewis acid strengths. The ECW model accommodates the failure of single parameter descriptions of acid-base interactions. A related method adopting the E and C formalism of Drago and co-workers quantitatively predicts the formation constants for complexes of many metal ions plus the proton with a wide range of unidentate Lewis acids in aqueous solution, and also offered insights into factors governing HSAB behavior in solution. Another quantitative system has been proposed, in which Lewis acid strength toward Lewis base fluoride is based on gas-phase affinity for fluoride. Additional one-parameter base strength scales have been presented. However, it has been shown that to define the order of Lewis base strength (or Lewis acid strength) at least two properties must be considered. For Pearson's qualitative HSAB theory the two properties are hardness and strength while for Drago's quantitative ECW model the two properties are electrostatic and covalent . Kornblum's rule. An application of HSAB theory is the so-called Kornblum's rule (after Nathan Kornblum) which states that in reactions with ambident nucleophiles (nucleophiles that can attack from two or more places), the more electronegative atom reacts when the reaction mechanism is SN1 and the less electronegative one in a SN2 reaction. This rule (established in 1954) predates HSAB theory but in HSAB terms its explanation is that in a SN1 reaction the carbocation (a hard acid) reacts with a hard base (high electronegativity) and that in a SN2 reaction tetravalent carbon (a soft acid) reacts with soft bases. 
According to findings, electrophilic alkylations at free CN− occur preferentially at carbon, regardless of whether the SN1 or SN2 mechanism is involved and whether hard or soft electrophiles are employed. Preferred N attack, as postulated for hard electrophiles by the HSAB principle, could not be observed with any alkylating agent. Isocyano compounds are only formed with highly reactive electrophiles that react without an activation barrier because the diffusion limit is approached. It is claimed that the knowledge of absolute rate constants and not of the hardness of the reaction partners is needed to predict the outcome of alkylations of the cyanide ion. Criticism. Reanalysis of a large number of the most typical ambident organic systems reveals that thermodynamic/kinetic control describes the reactivity of organic compounds perfectly, whereas the HSAB principle fails and should be abandoned in the rationalization of ambident reactivity of organic compounds. References. <templatestyles src="Reflist/styles.css" />
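The operational definitions above lend themselves to a short worked example. The following Python sketch simply evaluates η = (I − A)/2 and the Mulliken electronegativity χ = (I + A)/2 (the function names are made up here, and the hydrogen-atom values are approximate, quoted only for illustration):

def chemical_hardness(ionization_potential, electron_affinity):
    # Operational definition from the Chemical hardness section: eta = (I - A) / 2
    return 0.5 * (ionization_potential - electron_affinity)

def mulliken_electronegativity(ionization_potential, electron_affinity):
    # chi = -mu = (I + A) / 2 on the Mulliken scale
    return 0.5 * (ionization_potential + electron_affinity)

# Approximate values for the hydrogen atom, in electronvolts (illustrative only).
I, A = 13.6, 0.75
print(chemical_hardness(I, A))           # about 6.4 eV
print(mulliken_electronegativity(I, A))  # about 7.2 eV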
[ { "math_id": 0, "text": "\\eta = \\frac{1}{2}\\left(\\frac{\\partial^2 E}{\\partial N^2}\\right)_Z" }, { "math_id": 1, "text": "\n\\begin{align}\n\\eta &\\approx \\frac{E(N+1)-2E(N)+E(N-1)}{2}\\\\\n &=\\frac{(E(N-1)-E(N)) - (E(N)-E(N+1))}{2}\\\\\n &=\\frac{1}{2}(I-A)\n\\end{align}\n" }, { "math_id": 2, "text": "\\mu= \\left(\\frac{\\partial E}{\\partial N}\\right)_Z" }, { "math_id": 3, "text": "\n\\begin{align}\n\\mu &\\approx \\frac{E(N+1)-E(N-1)}{2}\\\\\n &=\\frac{-(E(N-1)-E(N))-(E(N)-E(N+1))}{2}\\\\\n &=-\\frac{1}{2}(I+A)\n\\end{align}\n" }, { "math_id": 4, "text": "2\\eta = \\left(\\frac{\\partial \\mu}{\\partial N}\\right)_Z \\approx -\\left(\\frac{\\partial \\chi}{\\partial N}\\right)_Z" } ]
https://en.wikipedia.org/wiki?curid=1047111
1047173
Integrator
Component that outputs the integral of its input over time An integrator in measurement and control applications is an element whose output signal is the time integral of its input signal. It accumulates the input quantity over a defined time to produce a representative output. Integration is an important part of many engineering and scientific applications. Mechanical integrators are the oldest type and are still used for metering water flow or electrical power. Electronic analogue integrators are the basis of analog computers and charge amplifiers. Integration can also be performed by algorithms in digital computers. In signal processing circuits. An electronic integrator is a form of first-order low-pass filter, which can be performed in the continuous-time (analog) domain or approximated (simulated) in the discrete-time (digital) domain. An integrator has a low-pass filtering effect, but when given an offset it accumulates a value, building it up until it reaches a limit of the system or overflows. A "current integrator" is an electronic device performing a time integration of an electric current, thus measuring a total electric charge. A capacitor's current–voltage relation makes it a very simple current integrator: formula_0 More sophisticated current integrator circuits build on this relation, such as the charge amplifier. A current integrator is also used to measure the electric charge on a Faraday cup in a residual gas analyzer to measure partial pressures of gases in a vacuum. Another application of current integration is in ion beam deposition, where the measured charge directly corresponds to the number of ions deposited on a substrate, assuming the charge state of the ions is known. The two current-carrying electrical leads must be connected to the ion source and the substrate, closing the electric circuit which in part is given by the ion beam. A "voltage integrator" is an electronic device performing a time integration of an electric voltage, thus measuring the total volt-second product. A simple resistor–capacitor circuit acts as an integrator at high frequencies above its cutoff frequency. "See also Integrator at op amp applications and op amp integrator" Op amp integrator. An ideal op amp integrator (e.g. Figure 1) is a voltage integrator that works over all frequencies (limited by the op amp's gain–bandwidth product) and provides gain. Drawbacks of ideal op amp integrator. In a real circuit, the op amp's input offset voltage and input bias current are also integrated over time, so the output of an ideal integrator slowly drifts until it saturates. Thus, an ideal integrator needs to be modified with additional components to reduce the effect of an error voltage in practice. This modified integrator is referred to as a practical integrator. Practical op amp integrator. The gain of an integrator at low frequency can be limited to avoid the saturation problem, by shunting the feedback capacitor with a feedback resistor. This practical integrator acts as a low-pass filter with constant gain in its low frequency pass band. It only performs integration at high frequencies, not at low frequencies, so the bandwidth for integration is limited. Mechanical integrators. Mechanical integrators were key elements in the mechanical differential analyser, used to solve practical physical problems. Mechanical integration mechanisms were also used in control systems such as regulating flows or temperature in industrial processes. Mechanisms such as the ball-and-disk integrator were used both for computation in differential analysers and as components of instruments such as naval gun directors, flow totalizers and others. 
A planimeter is a mechanical device used for calculating the definite integral of a curve given in graphical form, or more generally finding the area of a closed curve. An integraph is used to plot the indefinite integral of a function given in graphical form. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
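The capacitor relation given in the signal-processing section above also shows how an integrator is approximated in the discrete-time (digital) domain: the running integral is simply accumulated sample by sample. The following minimal Python sketch is only an illustration of that idea (the function name and the rectangular-rule accumulation are assumptions, not a description of any particular device):

def integrate_current(current_samples, dt, capacitance, v0=0.0):
    # Discrete approximation of V(t) = V(t0) + (1/C) * integral of I dt,
    # accumulating the current samples with a simple rectangular rule.
    v = v0
    voltages = []
    for i_sample in current_samples:
        v += i_sample * dt / capacitance
        voltages.append(v)
    return voltages

# 1 mA flowing into a 1 uF capacitor for 1 ms raises its voltage by about 1 V.
print(integrate_current([1e-3] * 10, dt=1e-4, capacitance=1e-6)[-1])  # ~1.0

Like the analog circuit, this accumulator has a low-pass character, and any constant offset in the input keeps building up in the output, which is the drift problem described for the ideal op amp integrator.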
[ { "math_id": 0, "text": "V(t) = V(t_0) + \\frac{1}{C}\\int_{t_0}^t I(\\tau) \\, \\mathrm{d}\\tau" } ]
https://en.wikipedia.org/wiki?curid=1047173
10474
Eight queens puzzle
Mathematical problem set on a chessboard The eight queens puzzle is the problem of placing eight chess queens on an 8×8 chessboard so that no two queens threaten each other; thus, a solution requires that no two queens share the same row, column, or diagonal. There are 92 solutions. The problem was first posed in the mid-19th century. In the modern era, it is often used as an example problem for various computer programming techniques. The eight queens puzzle is a special case of the more general "n" queens problem of placing "n" non-attacking queens on an "n"×"n" chessboard. Solutions exist for all natural numbers "n" with the exception of "n" = 2 and "n" = 3. Although the exact number of solutions is only known for "n" ≤ 27, the asymptotic growth rate of the number of solutions is approximately (0.143 "n")"n". History. Chess composer Max Bezzel published the eight queens puzzle in 1848. Franz Nauck published the first solutions in 1850. Nauck also extended the puzzle to the "n" queens problem, with "n" queens on a chessboard of "n"×"n" squares. Since then, many mathematicians, including Carl Friedrich Gauss, have worked on both the eight queens puzzle and its generalized "n"-queens version. In 1874, S. Günther proposed a method using determinants to find solutions. J.W.L. Glaisher refined Günther's approach. In 1972, Edsger Dijkstra used this problem to illustrate the power of what he called structured programming. He published a highly detailed description of a depth-first backtracking algorithm. Constructing and counting solutions when "n" = 8. The problem of finding all solutions to the 8-queens problem can be quite computationally expensive, as there are 4,426,165,368 possible arrangements of eight queens on an 8×8 board, but only 92 solutions. It is possible to use shortcuts that reduce computational requirements or rules of thumb that avoid brute-force computational techniques. For example, by applying a simple rule that chooses one queen from each column, it is possible to reduce the number of possibilities to 16,777,216 (that is, 8^8) possible combinations. Generating permutations further reduces the possibilities to just 40,320 (that is, 8!), which can then be checked for diagonal attacks. The eight queens puzzle has 92 distinct solutions. If solutions that differ only by the symmetry operations of rotation and reflection of the board are counted as one, the puzzle has 12 solutions. These are called "fundamental" solutions; representatives of each are shown below. A fundamental solution usually has eight variants (including its original form) obtained by rotating 90, 180, or 270° and then reflecting each of the four rotational variants in a mirror in a fixed position. However, one of the 12 fundamental solutions (solution 12 below) is identical to its own 180° rotation, so has only four variants (itself and its reflection, its 90° rotation and the reflection of that). Thus, the total number of distinct solutions is 11×8 + 1×4 = 92. All fundamental solutions are presented below: Solution 10 has the additional property that no three queens are in a straight line. Existence of solutions. Brute-force algorithms to count the number of solutions are computationally manageable for "n" = 8, but would be intractable for problems of "n" ≥ 20, as 20! = 2.433 × 10^18. 
If the goal is to find a single solution, one can show solutions exist for all "n" ≥ 4 with no search whatsoever. These solutions exhibit stair-stepped patterns, as in the following examples for "n" = 8, 9 and 10: The examples above can be obtained with the following formulas. Let ("i", "j") be the square in column "i" and row "j" on the "n" × "n" chessboard, "k" an integer. One approach places the queens in such a staircase pattern; for "n" = 8 this results in fundamental solution 1 above. Counting solutions for other sizes "n". Exact enumeration. There is no known formula for the exact number of solutions for placing "n" queens on an "n" × "n" board i.e. the number of independent sets of size "n" in an "n" × "n" queen's graph. The 27×27 board is the highest-order board that has been completely enumerated. The following tables give the number of solutions to the "n" queens problem, both fundamental (sequence in the OEIS) and all (sequence in the OEIS), for all known cases. The number of placements in which furthermore no three queens lie on any straight line is known for formula_0 (sequence in the OEIS). Asymptotic enumeration. In 2021, Michael Simkin proved that for large numbers "n", the number of solutions of the "n" queens problem is approximately formula_1. More precisely, the number formula_2 of solutions has asymptotic growth formula_3 where formula_4 is a constant that lies between 1.939 and 1.945. (Here "o"(1) represents little o notation.) If one instead considers a toroidal chessboard (where diagonals "wrap around" from the top edge to the bottom and from the left edge to the right), it is only possible to place "n" queens on an formula_5 board if formula_6 In this case, the asymptotic number of solutions is formula_7 A related problem is to find the number of non-attacking queens that can be placed in a "d"-dimensional chess space of size "n". More than "n" queens can be placed in some higher dimensions (the smallest example is four non-attacking queens in a 3×3×3 chess space), and it is in fact known that for any "k", there are higher dimensions where "n""k" queens do not suffice to attack all spaces. On an 8×8 board one can place 32 knights, or 14 bishops, 16 kings or eight rooks, so that no two pieces attack each other. In the case of knights, an easy solution is to place one on each square of a given color, since they move only to the opposite color. The solution is also easy for rooks and kings. Sixteen kings can be placed on the board by dividing it into 2-by-2 squares and placing the kings at equivalent points on each square. Placements of "n" rooks on an "n"×"n" board are in direct correspondence with order-"n" permutation matrices. Related problems can be asked for chess variations such as shogi. For instance, the "n"+"k" dragon kings problem asks to place "k" shogi pawns and "n"+"k" mutually nonattacking dragon kings on an "n"×"n" shogi board. Pólya studied the "n" queens problem on a toroidal ("donut-shaped") board and showed that there is a solution on an "n"×"n" board if and only if "n" is not divisible by 2 or 3. Given an "n"×"n" board, the domination number is the minimum number of queens (or other pieces) needed to attack or occupy every square. For "n" = 8 the queen's domination number is 5. Variants include mixing queens with other pieces; for example, placing "m" queens and "m" knights on an "n"×"n" board so that no piece attacks another or placing queens and pawns so that no two queens attack each other. 
In 1992, Demirörs, Rafraf, and Tanik published a method for converting some magic squares into "n"-queens solutions, and vice versa. In an "n"×"n" matrix, place each digit 1 through "n" in "n" locations in the matrix so that no two instances of the same digit are in the same row or column. Consider a matrix with one primary column for each of the "n" ranks of the board, one primary column for each of the "n" files, and one secondary column for each of the 4"n" − 6 nontrivial diagonals of the board. The matrix has "n"^2 rows: one for each possible queen placement, and each row has a 1 in the columns corresponding to that square's rank, file, and diagonals and a 0 in all the other columns. Then the "n" queens problem is equivalent to choosing a subset of the rows of this matrix such that every primary column has a 1 in precisely one of the chosen rows and every secondary column has a 1 in at most one of the chosen rows; this is an example of a generalized exact cover problem, of which sudoku is another example. The completion problem asks whether, given an "n"×"n" chessboard on which some queens are already placed, it is possible to place a queen in every remaining row so that no two queens attack each other. This and related questions are NP-complete and #P-complete. Any placement of at most "n"/60 queens can be completed, while there are partial configurations of roughly "n"/4 queens that cannot be completed. Exercise in algorithm design. Finding all solutions to the eight queens puzzle is a good example of a simple but nontrivial problem. For this reason, it is often used as an example problem for various programming techniques, including nontraditional approaches such as constraint programming, logic programming or genetic algorithms. Most often, it is used as an example of a problem that can be solved with a recursive algorithm, by phrasing the "n" queens problem inductively in terms of adding a single queen to any solution to the problem of placing "n"−1 queens on an "n"×"n" chessboard. The induction bottoms out with the solution to the 'problem' of placing 0 queens on the chessboard, which is the empty chessboard. This technique can be used in a way that is much more efficient than the naïve brute-force search algorithm, which considers all 64^8 = 2^48 = 281,474,976,710,656 possible blind placements of eight queens, and then filters these to remove all placements that place two queens either on the same square (leaving only 64!/56! = 178,462,987,637,760 possible placements) or in mutually attacking positions. This very poor algorithm will, among other things, produce the same results over and over again in all the different permutations of the assignments of the eight queens, as well as repeating the same computations over and over again for the different sub-sets of each solution. A better brute-force algorithm places a single queen on each row, leading to only 8^8 = 2^24 = 16,777,216 blind placements. It is possible to do much better than this. One algorithm solves the eight rooks puzzle by generating the permutations of the numbers 1 through 8 (of which there are 8! = 40,320), and uses the elements of each permutation as indices to place a queen on each row. Then it rejects those boards with diagonal attacking positions. The backtracking depth-first search program, a slight improvement on the permutation method, constructs the search tree by considering one row of the board at a time, eliminating most nonsolution board positions at a very early stage in their construction. 
Because it rejects rook and diagonal attacks even on incomplete boards, it examines only 15,720 possible queen placements. A further improvement, which examines only 5,508 possible queen placements, is to combine the permutation based method with the early pruning method: the permutations are generated depth-first, and the search space is pruned if the partial permutation produces a diagonal attack. Constraint programming can also be very effective on this problem. An alternative to exhaustive search is an 'iterative repair' algorithm, which typically starts with all queens on the board, for example with one queen per column. It then counts the number of conflicts (attacks), and uses a heuristic to determine how to improve the placement of the queens. The 'minimum-conflicts' heuristic – moving the piece with the largest number of conflicts to the square in the same column where the number of conflicts is smallest – is particularly effective: it easily finds a solution to even the 1,000,000 queens problem. Unlike the backtracking search outlined above, iterative repair does not guarantee a solution: like all greedy procedures, it may get stuck on a local optimum. (In such a case, the algorithm may be restarted with a different initial configuration.) On the other hand, it can solve problem sizes that are several orders of magnitude beyond the scope of a depth-first search. As an alternative to backtracking, solutions can be counted by recursively enumerating valid partial solutions, one row at a time. Rather than constructing entire board positions, blocked diagonals and columns are tracked with bitwise operations. This does not allow the recovery of individual solutions. Sample program. The following program is a translation of Niklaus Wirth's solution into the Python programming language, but does without the index arithmetic found in the original and instead uses lists to keep the program code as simple as possible. By using a coroutine in the form of a generator function, both versions of the original can be unified to compute either one or all of the solutions. Only 15,720 possible queen placements are examined.

def queens(n: int, i: int, a: list, b: list, c: list):
    # a holds the columns already used; b and c hold the occupied diagonals,
    # identified by i + j and i - j respectively.
    if i < n:
        for j in range(n):
            if j not in a and i + j not in b and i - j not in c:
                yield from queens(n, i + 1, a + [j], b + [i + j], c + [i - j])
    else:
        yield a

for solution in queens(8, 0, [], [], []):
    print(solution)

Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
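The permutation-based method described earlier in this section can also be written in a few lines. The following sketch is an illustration added here, not part of Wirth's program above: it generates the 8! rook placements with itertools and keeps only those without a diagonal attack, printing the expected total of 92 solutions.

from itertools import permutations

def count_queens_solutions(n: int) -> int:
    # perm[i] is the column of the queen in row i; a permutation already rules out
    # shared rows and columns, so only diagonal attacks need to be checked.
    return sum(
        1
        for perm in permutations(range(n))
        if all(abs(perm[i] - perm[j]) != j - i for j in range(n) for i in range(j))
    )

print(count_queens_solutions(8))  # 92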
[ { "math_id": 0, "text": "n \\leq 25" }, { "math_id": 1, "text": "(0.143n)^n" }, { "math_id": 2, "text": "\\mathcal{Q}(n)" }, { "math_id": 3, "text": "\n\\mathcal{Q}(n) = ((1 \\pm o(1))ne^{-\\alpha})^n\n" }, { "math_id": 4, "text": "\\alpha" }, { "math_id": 5, "text": "n \\times n" }, { "math_id": 6, "text": "n \\equiv 1,5 \\mod 6." }, { "math_id": 7, "text": "T(n) = ((1+o(1))ne^{-3})^n." } ]
https://en.wikipedia.org/wiki?curid=10474
10475041
Tschirnhausen cubic
Plane curve of the form r = a*sec(θ/3) In algebraic geometry, the Tschirnhausen cubic, or Tschirnhaus' cubic is a plane curve defined, in its left-opening form, by the polar equation formula_0 where sec is the secant function. History. The curve was studied by von Tschirnhaus, de L'Hôpital, and Catalan. It was given the name Tschirnhausen cubic in a 1900 paper by Raymond Clare Archibald, though it is sometimes known as de L'Hôpital's cubic or the trisectrix of Catalan. Other equations. Put formula_1. Then applying triple-angle formulas gives formula_2 formula_3 formula_4 formula_5 giving a parametric form for the curve. The parameter "t" can be eliminated easily giving the Cartesian equation formula_6. If the curve is translated horizontally by 8"a" and the signs of the variables are changed, the equations of the resulting right-opening curve are formula_7 formula_8 and in Cartesian coordinates formula_9. This gives the alternative polar form formula_10. Generalization. The Tschirnhausen cubic is a Sinusoidal spiral with "n" = −1/3.
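The algebra above can be checked symbolically: the parametric form x = a(1 - 3t^2), y = at(3 - t^2) satisfies the Cartesian equation 27a*y^2 = (a - x)(8a + x)^2. The following short sketch is purely illustrative and assumes the SymPy library is available.

import sympy as sp

a, t = sp.symbols('a t', real=True)
x = a * (1 - 3 * t**2)      # parametric x(t)
y = a * t * (3 - t**2)      # parametric y(t)

# The difference expands to zero identically in a and t.
print(sp.expand(27 * a * y**2 - (a - x) * (8 * a + x)**2))  # 0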
[ { "math_id": 0, "text": "r = a\\sec^3 \\left(\\frac{\\theta}{3}\\right)" }, { "math_id": 1, "text": "t=\\tan(\\theta/3)" }, { "math_id": 2, "text": "x=a\\cos \\theta \\sec^3 \\frac{\\theta}{3} = a \\left(\\cos^3 \\frac{\\theta}{3} - 3 \\cos \\frac{\\theta}{3} \\sin^2 \\frac{\\theta}{3} \\right) \\sec^3 \\frac{\\theta}{3}= a\\left(1 - 3 \\tan^2 \\frac{\\theta}{3}\\right)" }, { "math_id": 3, "text": "= a(1 - 3t^2) " }, { "math_id": 4, "text": "y=a\\sin \\theta \\sec^3 \\frac{\\theta}{3} = a \\left(3 \\cos^2 \\frac{\\theta}{3}\\sin \\frac{\\theta}{3} - \\sin^3 \\frac{\\theta}{3} \\right) \\sec^3 \\frac{\\theta}{3}= a \\left(3 \\tan \\frac{\\theta}{3} - \\tan^3 \\frac{\\theta}{3} \\right) " }, { "math_id": 5, "text": "= at(3-t^2)" }, { "math_id": 6, "text": "27ay^2 = (a-x)(8a+x)^2" }, { "math_id": 7, "text": "x = 3a(3-t^2)" }, { "math_id": 8, "text": "y = at(3-t^2)" }, { "math_id": 9, "text": "x^3=9a \\left(x^2-3y^2 \\right)" }, { "math_id": 10, "text": "r=9a \\left(\\sec \\theta - 3\\sec \\theta \\tan^2 \\theta \\right)" } ]
https://en.wikipedia.org/wiki?curid=10475041
1047584
Man bites dog
Aphorism in journalism The phrase man bites dog is a shortened version of an aphorism in journalism that describes how an unusual, infrequent event (such as a man biting a dog) is more likely to be reported as news than an ordinary, everyday occurrence with similar consequences, such as a dog biting a man. The phenomenon is also described in the journalistic saying, "You never read about a plane that did not crash." It can be expressed mathematically; a basic principle of information theory is that reports of unusual events provide more information than those for more routine outcomes. Origins. The phrase was coined by Alfred Harmsworth, 1st Viscount Northcliffe (1865–1922), a British newspaper magnate, but is also attributed to "New York Sun" editor John B. Bogart (1848–1921): "When a dog bites a man, that is not news, because it happens so often. But if a man bites a dog, that is news." The quote is also attributed to Charles Anderson Dana (1819–1897). The result is that rarer events more often appear as news stories, while more common events appear less often, thus distorting the perceptions of news consumers of what constitutes normal rates of occurrence. Effect. To some extent, a focus on unusual occurrences is unavoidable in journalism, as events that proceed as expected are simply not "newsworthy". The reasoning errors caused by this phenomenon are also associated with the availability heuristic, which is the mental shortcut that relies on the immediate examples that come to mind when evaluating a specific topic. For example, because airplane crashes are frequently reported, they are easy to call to mind. This leads to people having inaccurate perceptions of how dangerous air travel is. Some consider "man bites dog" stories about unusual events a sign of yellow journalism, and in the internet era, headlines about them may be phrased as click bait. Mathematical analysis. A basic principle of the information theory, which studies the mathematical theory of communication, is that reports of unusual events provide more information than those for more routine outcomes. The amount of information conveyed by a message about an event can be expressed in terms of its "surprisal", with surprisal formula_0 defined as formula_1 for an event of probability formula_2. Measured this way, an event that is nearly certain to happen (formula_2 very close to one) carries almost no information, while an extremely rare event (formula_2 very close to zero) provides a very large amount of information. Examples of literal use in journalism. In 2000, the "Santa Cruz Sentinel" ran a story titled "Man bites dog" about a San Francisco man who bit his own dog. Reuters ran a story, "It's News! Man Bites Dog", about a man biting a dog in December 2007. A 2008 story of a boy biting a dog in Brazil had news outlets quoting the phrase. In 2010, NBC Connecticut ran a story about a man who bit a police dog, prefacing it with, "It's often said, if a dog bites a man it's not news, but if a man bites a dog, you've got a story. Well, here is that story." On May 14, 2012, the "Medway Messenger", a British local newspaper, ran a front page story headlined "MAN BITES DOG" about a man who survived an attack from a Staffordshire bull terrier by biting the dog back. On September 27, 2012, the "Toronto Star", a Canadian newspaper, ran the story headlined "Nearly Naked Man Bites Dog", about a man that is alleged to have bitten a dog in Pembroke, Ontario. 
On December 2, 2012, "Sydney Morning Herald" reported about a man that bit a dog, headlining it 'Man bites Dog, goes to hospital'. On May 5, 2013, "Nine News", an Australian news outlet, ran a story headlined "Man bites dog to save wife" about a man who bit a Labrador on the nose, after it attacked his wife and bit off her nose. On March 12, 2014, Rosbalt, a Russian news agency, reported that a man in Lipetsk had burnt a bed in his apartment, run around the city in his underwear, and, finally, "bit a fighting breed dog" following an hours-long online debate about the situation in Ukraine. In April 2014, CNN reported a mother bit a pit bull attacking her daughter. On June 14, 2014, the "South Wales Argus" ran a front page teaser headlined "Man Bites Dog" about a man who has been accused of assaulting his partner and her pet dog. The Online version of this story was later amended to "Man bites dog and escapes jail". On September 1, 2014, the "Coventry Telegraph" and the "Daily Mirror" ran an article about a man who had bitten a dog after it attacked his pet. On December 17, 2014, the "Cambridge News" ran an article with a headline starting: "Man bites dog then dies". On November 4, 2015, the "Washington Post" ran an article with the title "Man bites dog. No, really." On January 25, 2018, "The Hindu" reported that a man bit a police dog in Houston, Texas, while trying to evade arrest. On April 10, 2018, the "Daily Telegraph" ran such an article about a man biting a dog to defend his own dog. On May 4, 2018, the "Salt Lake Tribune" ran an article about a man biting a police dog while being taken into custody. On July 8, 2019, the "Daily Camera" ran an article about a man biting a dog in a supermarket. On April 22, 2022, the Associated Press ran an article about a man who bit a police dog while officers tried to take him into custody. Dog shoots man. There have also been a number of "dog shoots man" news stories. As an example of a related phrase, a story titled "Deer Shoots Hunter" appeared in a 1947 issue of the Pittsburgh Press, mentioning a hunter that was shot by his own gun due to a reflex kick by the deer he had killed. And in 2005, in Michigan, there was a case of "cat shoots man". Man bites snake. On April 12, 2009, Kenyan farm worker Ben Nyaumbe was attacked by a large python. During his struggle to escape from the snake's coils, he bit its tail. He was rescued after it eventually relaxed its grasp enough for him to get to his mobile phone. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
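Returning to the surprisal formula given in the Mathematical analysis section above, a small numerical illustration follows; the probabilities are invented for the example and the helper name is the editor's own.

import math

def surprisal_bits(p: float) -> float:
    # Information content, in bits, of an event that occurs with probability p.
    return -math.log2(p)

print(surprisal_bits(0.5))    # 1.0 bit: a routine "dog bites man" style event
print(surprisal_bits(1e-6))   # about 19.9 bits: a rare "man bites dog" style event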
[ { "math_id": 0, "text": "s" }, { "math_id": 1, "text": "s = -log(p)" }, { "math_id": 2, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=1047584
1047605
Recursive definition
Defining elements of a set in terms of other elements in the set In mathematics and computer science, a recursive definition, or inductive definition, is used to define the elements in a set in terms of other elements in the set (Aczel 1977:740ff). Some examples of recursively-definable objects include factorials, natural numbers, Fibonacci numbers, and the Cantor ternary set. A recursive definition of a function defines values of the function for some inputs in terms of the values of the same function for other (usually smaller) inputs. For example, the factorial function "n"! is defined by the rules formula_0 This definition is valid for each natural number n, because the recursion eventually reaches the base case of 0. The definition may also be thought of as giving a procedure for computing the value of the function "n"!, starting from "n" = 0 and proceeding onwards with "n" = 1, 2, 3 etc. The recursion theorem states that such a definition indeed defines a function that is unique. The proof uses mathematical induction. An inductive definition of a set describes the elements in a set in terms of other elements in the set. For example, one definition of the set &amp;NoBreak;&amp;NoBreak; of natural numbers is: There are many sets that satisfy (1) and (2) – for example, the set {1, 1.649, 2, 2.649, 3, 3.649, …} satisfies the definition. However, condition (3) specifies the set of natural numbers by removing the sets with extraneous members. Properties of recursively defined functions and sets can often be proved by an induction principle that follows the recursive definition. For example, the definition of the natural numbers presented here directly implies the principle of mathematical induction for natural numbers: if a property holds of the natural number 0 (or 1), and the property holds of "n" + 1 whenever it holds of n, then the property holds of all natural numbers (Aczel 1977:742). Form of recursive definitions. Most recursive definitions have two foundations: a base case (basis) and an inductive clause. The difference between a circular definition and a recursive definition is that a recursive definition must always have "base cases", cases that satisfy the definition "without" being defined in terms of the definition itself, and that all other instances in the inductive clauses must be "smaller" in some sense (i.e., "closer" to those base cases that terminate the recursion) — a rule also known as "recur only with a simpler case". In contrast, a circular definition may have no base case, and even may define the value of a function in terms of that value itself — rather than on other values of the function. Such a situation would lead to an infinite regress. That recursive definitions are valid – meaning that a recursive definition identifies a unique function – is a theorem of set theory known as the recursion theorem, the proof of which is non-trivial. Where the domain of the function is the natural numbers, sufficient conditions for the definition to be valid are that the value of "f"(0) (i.e., base case) is given, and that for "n" &gt; 0, an algorithm is given for determining "f"("n") in terms of n, formula_1 (i.e., inductive clause). More generally, recursive definitions of functions can be made whenever the domain is a well-ordered set, using the principle of transfinite recursion. The formal criteria for what constitutes a valid recursive definition are more complex for the general case. 
An outline of the general proof and the criteria can be found in James Munkres' "Topology". However, a specific case (domain is restricted to the positive integers instead of any well-ordered set) of the general recursive definition will be given below. Principle of recursive definition. Let A be a set and let "a"0 be an element of A. If ρ is a function which assigns to each function f mapping a nonempty section of the positive integers into A, an element of A, then there exists a unique function formula_2 such that formula_3 Examples of recursive definitions. Elementary functions. Addition is defined recursively based on counting as formula_4 Multiplication is defined recursively as formula_5 Exponentiation is defined recursively as formula_6 Binomial coefficients can be defined recursively as formula_7 Prime numbers. The set of prime numbers can be defined as the unique set of positive integers satisfying The primality of the integer 2 is the base case; checking the primality of any larger integer X by this definition requires knowing the primality of every integer between 2 and X, which is well defined by this definition. That last point can be proved by induction on X, for which it is essential that the second clause says "if and only if"; if it had just said "if", the primality of, for instance, the number 4 would not be clear, and the further application of the second clause would be impossible. Non-negative even numbers. The even numbers can be defined as consisting of Well formed formula. The notion of a well-formed formula (wff) in propositional logic is defined recursively as the smallest set satisfying the three rules: The definition can be used to determine whether any particular string of symbols is a wff: Recursive definitions as logic programs. Logic programs can be understood as sets of recursive definitions. For example, the recursive definition of even number can be written as the logic program: even(0). even(s(s(X))) :- even(X). Here :- represents "if", and s(X) represents the successor of X, namely X+1, as in Peano arithmetic. The logic programming language Prolog uses backward reasoning to solve goals and answer queries. For example, given the query ?- even(s(s(0))) it produces the answer true. Given the query ?- even(s(0)) it produces the answer false. The program can be used not only to check whether a query is true, but also to generate answers that are true. For example: ?- even(X). X = 0 X = s(s(0)) X = s(s(s(s(0)))) X = s(s(s(s(s(s(0)))))) Logic programs significantly extend recursive definitions by including the use of negative conditions, implemented by negation as failure, as in the definition: even(0). even(s(X)) :- not(even(X)). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
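The recursive definitions of the factorial and of addition given above transcribe almost literally into a programming language. The following Python sketch is illustrative only.

def factorial(n: int) -> int:
    # 0! = 1; (n+1)! = (n+1) * n!
    if n == 0:
        return 1
    return n * factorial(n - 1)

def add(n: int, a: int) -> int:
    # 0 + a = a; (1+n) + a = 1 + (n + a)
    if n == 0:
        return a
    return 1 + add(n - 1, a)

print(factorial(5))  # 120
print(add(3, 4))     # 7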
[ { "math_id": 0, "text": "\\begin{align}\n& 0! = 1. \\\\\n& (n+1)! = (n+1) \\cdot n!.\n\\end{align}" }, { "math_id": 1, "text": "f(0), f(1), \\dots, f(n-1)" }, { "math_id": 2, "text": "h : \\Z_+ \\to A" }, { "math_id": 3, "text": "\\begin{align}\nh(1) &= a_0 \\\\\nh(i) &= \\rho \\left(h|_{\\{1,2,\\ldots,i-1\\}}\\right) \\text{ for } i>1.\n\\end{align}" }, { "math_id": 4, "text": "\\begin{align}\n& 0 + a = a, \\\\\n& (1+n) + a = 1 + (n+a).\n\\end{align}" }, { "math_id": 5, "text": "\\begin{align}\n& 0 \\cdot a = 0, \\\\\n& (1+n) \\cdot a = a + n \\cdot a.\n\\end{align}" }, { "math_id": 6, "text": "\\begin{align}\n& a^0 = 1, \\\\\n& a^{1+n} = a \\cdot a^n.\n\\end{align}" }, { "math_id": 7, "text": "\\begin{align}\n& \\binom{a}{0} = 1, \\\\\n& \\binom{1+a}{1+n} = \\frac{(1+a)\\binom{a}{n}}{1+n}.\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=1047605
1047627
Complete variety
Type of algebraic variety In mathematics, in particular in algebraic geometry, a complete algebraic variety is an algebraic variety X, such that for any variety Y the projection morphism formula_0 is a closed map (i.e. maps closed sets onto closed sets). This can be seen as an analogue of compactness in algebraic geometry: a topological space X is compact if and only if the above projection map is closed with respect to topological products. The image of a complete variety is closed and is a complete variety. A closed subvariety of a complete variety is complete. A complex variety is complete if and only if it is compact as a complex-analytic variety. The most common example of a complete variety is a projective variety, but there do exist complete non-projective varieties in dimensions 2 and higher. While any complete nonsingular surface is projective, there exist nonsingular complete varieties in dimension 3 and higher which are not projective. The first examples of non-projective complete varieties were given by Masayoshi Nagata and Heisuke Hironaka. An affine space of positive dimension is not complete. The morphism taking a complete variety to a point is a proper morphism, in the sense of scheme theory. An intuitive justification of "complete", in the sense of "no missing points", can be given on the basis of the valuative criterion of properness, which goes back to Claude Chevalley. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "X \\times Y \\to Y" } ]
https://en.wikipedia.org/wiki?curid=1047627
10477190
T-J model
In solid-state physics, the "t"-"J" model is a model first derived by Józef Spałek to explain antiferromagnetic properties of Mott insulators, taking into account experimental results about the strength of electron-electron repulsion in these materials. The material is modelled as a lattice with atoms in the sites with conduction electrons (or holes) moving between them, like in the Hubbard model. Unlike the Hubbard model, the electrons are strongly-correlated, meaning the electrons are sensitive to reciprocal coulombic repulsion, and so are less likely to occupy lattice sites already occupied by another electron. In the basic Hubbard model, the repulsion, indicated by "U", can be small or even zero, and electrons are more free to jump ("hopping", parametrized by "t" as "transfer" or "tunnel") from one site to another. In the "t"-"J" model, instead of "U", there is the parameter "J", function of the ratio "t"/"U". Like the Hubbard model, it is a prospective microscopic theory of high temperature superconductivity in cuprate superconductors which arise from doped antiferromagnets, particularly in the case where the lattice considered is the two-dimensional lattice. Cuprate superconductors are currently (as of 2024) the superconductors with the highest known superconducting transition temperature at ambient pressure, but there is no consensus on the microscopic theory responsible for their superconducting transition. The Hamiltonian. In quantum physics, system's models are usually based on the Hamiltonian operator formula_0, corresponding to the total energy of that system, including both kinetic energy and potential energy. The "t"-"J" Hamiltonian can be derived from the formula_0 of the Hubbard model using the Schrieffer–Wolff transformation, with the transformation generator depending on "t"/"U" and excluding the possibility for electrons to doubly occupy a lattice's site, which results in: formula_1 where the term in "t" corresponds to the kinetic energy and is equal to the one in the Hubbard model. The second one is the potential energy approximated at the second order, because this is an approximation of the Hubbard model in the limit "U" » "t" developed in power of "t". Terms at higher order can be added. The parameters are: If "ni" = 1, that is when in the ground state, there is just one electron per lattice's site (half-filling), the model reduces to the Heisenberg model and the ground state reproduce a dielectric antiferromagnets (Mott insulator). The model can be further extended considering also the next-nearest-neighbor sites and the chemical potential to set the ground state in function of the total number of particles: formula_2 where ⟨...⟩ and ⟨⟨...⟩⟩ denote the nearest and next-nearest neighbors, respectively, with two different values for the hopping integral ("t"1 and "t"2) and "μ" is the chemical potential. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\hat H" }, { "math_id": 1, "text": "\\hat H = -t\\sum_{\\langle ij\\rangle,\\sigma} \\left( c_{i\\sigma}^{\\dagger} c_{j\\sigma} + \\mathrm{h.c.} \\right) \n+ J\\sum_{\\langle ij\\rangle}\\left(\\mathbf{S}_{i}\\cdot \\mathbf{S}_{j}-\\frac{n_in_j}{4}\\right) + O(t^3/U^2)" }, { "math_id": 2, "text": "\n\\mathcal{\\hat H} = t_1 \\sum\\limits_{\\langle i,j \\rangle} \\left( c_{i\\sigma}^{\\dagger} c_{j\\sigma} + \\mathrm{h.c.} \\right) \\ + \\ t_2 \\sum\\limits_{\\langle\\langle i,j \\rangle\\rangle} \\left( c_{i\\sigma}^{\\dagger} c_{j\\sigma} + \\mathrm{h.c.} \\right) \\ + \\ J \\sum\\limits_{\\langle i,j \\rangle} \\left( \\mathbf{S}_{i} \\cdot \\mathbf{S}_{j} - \\frac{ n_{i} n_{j} }{4}\\right) - \\ \\mu\\sum\\limits_{i} n_{i} , " } ]
https://en.wikipedia.org/wiki?curid=10477190
10477221
Erdős–Rényi model
Two closely related models for generating random graphs In the mathematical field of graph theory, the Erdős–Rényi model refers to one of two closely related models for generating random graphs or the evolution of a random network. These models are named after Hungarian mathematicians Paul Erdős and Alfréd Rényi, who introduced one of the models in 1959. Edgar Gilbert introduced the other model contemporaneously with and independently of Erdős and Rényi. In the model of Erdős and Rényi, all graphs on a fixed vertex set with a fixed number of edges are equally likely. In the model introduced by Gilbert, also called the Erdős–Rényi–Gilbert model, each edge has a fixed probability of being present or absent, independently of the other edges. These models can be used in the probabilistic method to prove the existence of graphs satisfying various properties, or to provide a rigorous definition of what it means for a property to hold for almost all graphs. Definition. There are two closely related variants of the Erdős–Rényi random graph model. The behavior of random graphs are often studied in the case where formula_1, the number of vertices, tends to infinity. Although formula_6 and formula_2 can be fixed in this case, they can also be functions depending on formula_1. For example, the statement that almost every graph in formula_12 is connected means that, as formula_1 tends to infinity, the probability that a graph on formula_1 vertices with edge probability formula_13 is connected tends to formula_9. Comparison between the two models. The expected number of edges in "G"("n", "p") is formula_14, and by the law of large numbers any graph in "G"("n", "p") will almost surely have approximately this many edges (provided the expected number of edges tends to infinity). Therefore, a rough heuristic is that if "pn"2 → ∞, then "G"("n","p") should behave similarly to "G"("n", "M") with formula_15 as "n" increases. For many graph properties, this is the case. If "P" is any graph property which is monotone with respect to the subgraph ordering (meaning that if "A" is a subgraph of "B" and "B" satisfies "P", then "A" will satisfy "P" as well), then the statements ""P" holds for almost all graphs in "G"("n", "p")" and ""P" holds for almost all graphs in formula_16" are equivalent (provided "pn"2 → ∞). For example, this holds if "P" is the property of being connected, or if "P" is the property of containing a Hamiltonian cycle. However, this will not necessarily hold for non-monotone properties (e.g. the property of having an even number of edges). In practice, the "G"("n", "p") model is the one more commonly used today, in part due to the ease of analysis allowed by the independence of the edges. Properties of "G"("n", "p"). With the notation above, a graph in "G"("n", "p") has on average formula_17 edges. The distribution of the degree of any particular vertex is binomial: formula_18 where "n" is the total number of vertices in the graph. Since formula_19 this distribution is Poisson for large "n" and "np" = const. In a 1960 paper, Erdős and Rényi described the behavior of "G"("n", "p") very precisely for various values of "p". Their results included that: Thus formula_22 is a sharp threshold for the connectedness of "G"("n", "p"). Further properties of the graph can be described almost precisely as "n" tends to infinity. For example, there is a "k"("n") (approximately equal to 2log2("n")) such that the largest clique in "G"("n", 0.5) has almost surely either size "k"("n") or "k"("n") + 1. 
Thus, even though finding the size of the largest clique in a graph is NP-complete, the size of the largest clique in a "typical" graph (according to this model) is very well understood. Edge-dual graphs of Erdos-Renyi graphs are graphs with nearly the same degree distribution, but with degree correlations and a significantly higher clustering coefficient. Relation to percolation. In percolation theory one examines a finite or infinite graph and removes edges (or links) randomly. Thus the Erdős–Rényi process is in fact unweighted link percolation on the complete graph. (One refers to percolation in which nodes and/or links are removed with heterogeneous weights as weighted percolation). As percolation theory has much of its roots in physics, much of the research done was on the lattices in Euclidean spaces. The transition at "np" = 1 from giant component to small component has analogs for these graphs, but for lattices the transition point is difficult to determine. Physicists often refer to study of the complete graph as a mean field theory. Thus the Erdős–Rényi process is the mean-field case of percolation. Some significant work was also done on percolation on random graphs. From a physicist's point of view this would still be a mean-field model, so the justification of the research is often formulated in terms of the robustness of the graph, viewed as a communication network. Given a random graph of "n" ≫ 1 nodes with an average degree formula_23. Remove randomly a fraction formula_24 of nodes and leave only a fraction formula_25 from the network. There exists a critical percolation threshold formula_26 below which the network becomes fragmented while above formula_27 a giant connected component of order "n" exists. The relative size of the giant component, "P"∞, is given by formula_28 Caveats. Both of the two major assumptions of the "G"("n", "p") model (that edges are independent and that each edge is equally likely) may be inappropriate for modeling certain real-life phenomena. Erdős–Rényi graphs have low clustering, unlike many social networks. Some modeling alternatives include Barabási–Albert model and Watts and Strogatz model. These alternative models are not percolation processes, but instead represent a growth and rewiring model, respectively. Another alternative family of random graph models, capable of reproducing many real-life phenomena, are exponential random graph models. History. The "G"("n", "p") model was first introduced by Edgar Gilbert in a 1959 paper studying the connectivity threshold mentioned above. The "G"("n", "M") model was introduced by Erdős and Rényi in their 1959 paper. As with Gilbert, their first investigations were as to the connectivity of "G"("n", "M"), with the more detailed analysis following in 1960. Continuum limit representation of critical "G"("n", "p"). A continuum limit of the graph was obtained when formula_6 is of order formula_29. Specifically, consider the sequence of graphs formula_30 for formula_31. The limit object can be constructed as follows: Applying this procedure, one obtains a sequence of random infinite graphs of decreasing sizes: formula_50. The theorem states that this graph corresponds in a certain sense to the limit object of formula_51 as formula_52. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
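A minimal simulation can illustrate the emergence of the giant component at average degree np = 1 discussed above. The sketch below uses only the Python standard library; the graph sizes and average degrees are chosen purely for illustration.

import random
from collections import deque

def gnp(n, p):
    # Sample G(n, p): include each of the n*(n-1)/2 possible edges independently.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def largest_component(adj):
    # Breadth-first search from every unvisited vertex; return the largest component size.
    seen = [False] * len(adj)
    best = 0
    for s in range(len(adj)):
        if seen[s]:
            continue
        seen[s] = True
        size, queue = 1, deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if not seen[w]:
                    seen[w] = True
                    size += 1
                    queue.append(w)
        best = max(best, size)
    return best

n = 1000
for c in (0.5, 1.0, 2.0):   # average degree np
    print(c, largest_component(gnp(n, c / n)))
# Below np = 1 the largest component stays small; above np = 1 a giant component appears.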
[ { "math_id": 0, "text": "G(n,M)" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "G(3,2)" }, { "math_id": 4, "text": "\\tfrac{1}{3}" }, { "math_id": 5, "text": "G(n,p)" }, { "math_id": 6, "text": "p" }, { "math_id": 7, "text": "p^M (1-p)^{ {n \\choose 2}-M}." }, { "math_id": 8, "text": " 0 " }, { "math_id": 9, "text": "1" }, { "math_id": 10, "text": "p=\\tfrac{1}{2}" }, { "math_id": 11, "text": "2^\\binom{n}{2}" }, { "math_id": 12, "text": "G(n,2\\ln(n)/n)" }, { "math_id": 13, "text": "2\\ln(n)/n" }, { "math_id": 14, "text": "\\tbinom{n}{2}p" }, { "math_id": 15, "text": "M=\\tbinom{n}{2} p" }, { "math_id": 16, "text": " G(n, \\tbinom{n}{2} p) " }, { "math_id": 17, "text": "\\tbinom{n}{2} p" }, { "math_id": 18, "text": "P(\\deg(v) = k) = {n-1\\choose k}p^k(1-p)^{n-1-k}," }, { "math_id": 19, "text": "P(\\deg(v) = k) \\to \\frac{(np)^k \\mathrm{e}^{-np}}{k!} \\quad \\text{ as } n \\to \\infty \\text{ and } np = \\text{constant}," }, { "math_id": 20, "text": "p<\\tfrac{(1-\\varepsilon)\\ln n} n" }, { "math_id": 21, "text": "p>\\tfrac{(1+\\varepsilon) \\ln n} n" }, { "math_id": 22, "text": " \\tfrac{\\ln n} n" }, { "math_id": 23, "text": "\\langle k \\rangle" }, { "math_id": 24, "text": "1 - p'" }, { "math_id": 25, "text": "p'" }, { "math_id": 26, "text": "p'_c=\\tfrac{1}{\\langle k\\rangle}" }, { "math_id": 27, "text": "p'_c" }, { "math_id": 28, "text": " P_\\infty= p'[1-\\exp(-\\langle k \\rangle P_\\infty)]. \\, " }, { "math_id": 29, "text": "1/n" }, { "math_id": 30, "text": "G_n:=G(n,1/n+\\lambda n^{-\\frac{4}{3}})" }, { "math_id": 31, "text": "\\lambda\\in \\R" }, { "math_id": 32, "text": "W^\\lambda(t):=W(t)+\\lambda t - \\frac{t^2}{2}" }, { "math_id": 33, "text": "W" }, { "math_id": 34, "text": "R^\\lambda(t):= W^\\lambda(t) - \\inf\\limits_{s\\in[0,t]}W^\\lambda(s)" }, { "math_id": 35, "text": "W^\\lambda" }, { "math_id": 36, "text": "-\\frac{t^2}{2}" }, { "math_id": 37, "text": "t\\to +\\infty" }, { "math_id": 38, "text": "\\R" }, { "math_id": 39, "text": "(C_i)_{i\\in \\N}" }, { "math_id": 40, "text": "R^\\lambda" }, { "math_id": 41, "text": "C_i" }, { "math_id": 42, "text": "i\\in \\N" }, { "math_id": 43, "text": "(e(s))_{s\\in [0,1]}" }, { "math_id": 44, "text": "\\Xi" }, { "math_id": 45, "text": "T_e" }, { "math_id": 46, "text": "[0,1]\\times \\R_+" }, { "math_id": 47, "text": "(x,s)\\in \\Xi" }, { "math_id": 48, "text": "x\\leq e(s)" }, { "math_id": 49, "text": "\\Gamma_e" }, { "math_id": 50, "text": "(\\Gamma_i)_{i\\in \\N}" }, { "math_id": 51, "text": "G_n" }, { "math_id": 52, "text": "n\\to+\\infty" } ]
https://en.wikipedia.org/wiki?curid=10477221
10477988
Skoda–El Mir theorem
The Skoda–El Mir theorem is a theorem of complex geometry, stated as follows: Theorem (Skoda, El Mir, Sibony). Let "X" be a complex manifold, and "E" a closed complete pluripolar set in "X". Consider a closed positive current formula_0 on formula_1 which is locally integrable around "E". Then the trivial extension of formula_0 to "X" is closed on "X".
[ { "math_id": 0, "text": "\\Theta" }, { "math_id": 1, "text": " X \\backslash E" } ]
https://en.wikipedia.org/wiki?curid=10477988
10478956
Riemann–Hilbert correspondence
In mathematics, the term Riemann–Hilbert correspondence refers to the correspondence between regular singular flat connections on algebraic vector bundles and representations of the fundamental group, and more generally to one of several generalizations of this. The original setting appearing in Hilbert's twenty-first problem was for the Riemann sphere, where it was about the existence of systems of linear regular differential equations with prescribed monodromy representations. First the Riemann sphere may be replaced by an arbitrary Riemann surface and then, in higher dimensions, Riemann surfaces are replaced by complex manifolds of dimension &gt; 1. There is a correspondence between certain systems of partial differential equations (linear and having very special properties for their solutions) and possible monodromies of their solutions. Such a result was proved for algebraic connections with regular singularities by Pierre Deligne (1970, generalizing existing work in the case of Riemann surfaces) and more generally for regular holonomic D-modules by Masaki Kashiwara (1980, 1984) and Zoghman Mebkhout (1980, 1984) independently. In the setting of nonabelian Hodge theory, the Riemann-Hilbert correspondence provides a complex analytic isomorphism between two of the three natural algebraic structures on the moduli spaces, and so is naturally viewed as a nonabelian analogue of the comparison isomorphism between De Rham cohomology and singular/Betti cohomology. Statement. Suppose that "X" is a smooth complex algebraic variety. Riemann–Hilbert correspondence (for regular singular connections): there is a functor "Sol" called the local solutions functor, that is an equivalence from the category of flat connections on algebraic vector bundles on "X" with regular singularities to the category of local systems of finite-dimensional complex vector spaces on "X". For "X" connected, the category of local systems is also equivalent to the category of complex representations of the fundamental group of "X". Thus such connections give a purely algebraic way to access the finite dimensional representations of the topological fundamental group. The condition of regular singularities means that locally constant sections of the bundle (with respect to the flat connection) have moderate growth at points of "Y − X", where "Y" is an algebraic compactification of "X". In particular, when "X" is compact, the condition of regular singularities is vacuous. More generally there is the Riemann–Hilbert correspondence (for regular holonomic D-modules): there is a functor "DR" called the de Rham functor, that is an equivalence from the category of holonomic D-modules on "X" with regular singularities to the category of perverse sheaves on "X". By considering the irreducible elements of each category, this gives a 1:1 correspondence between isomorphism classes of and A D-module is something like a system of differential equations on "X", and a local system on a subvariety is something like a description of possible monodromies, so this correspondence can be thought of as describing certain systems of differential equations in terms of the monodromies of their solutions. In the case "X" has dimension one (a complex algebraic curve) then there is a more general Riemann–Hilbert correspondence for algebraic connections with no regularity assumption (or for holonomic D-modules with no regularity assumption) described in Malgrange (1991), the Riemann–Hilbert–Birkhoff correspondence. Examples. 
An example where the theorem applies is the differential equation formula_0 on the punctured affine line "A"1 − {0} (that is, on the nonzero complex numbers C − {0}). Here "a" is a fixed complex number. This equation has regular singularities at 0 and ∞ in the projective line P1. The local solutions of the equation are of the form "cza" for constants "c". If "a" is not an integer, then the function "za" cannot be made well-defined on all of C − {0}. That means that the equation has nontrivial monodromy. Explicitly, the monodromy of this equation is the 1-dimensional representation of the fundamental group π1("A"1 − {0}) = Z in which the generator (a loop around the origin) acts by multiplication by "e2πia". To see the need for the hypothesis of regular singularities, consider the differential equation formula_1 on the affine line "A"1 (that is, on the complex numbers C). This equation corresponds to a flat connection on the trivial algebraic line bundle over "A"1. The solutions of the equation are of the form "cez" for constants "c". Since these solutions do not have polynomial growth on some sectors around the point ∞ in the projective line P1, the equation does not have regular singularities at ∞. (This can also be seen by rewriting the equation in terms of the variable "w" := 1/"z", where it becomes formula_2 The pole of order 2 in the coefficients means that the equation does not have regular singularities at "w" = 0, according to Fuchs's theorem.) Since the functions "cez" are defined on the whole affine line "A"1, the monodromy of this flat connection is trivial. But this flat connection is not isomorphic to the obvious flat connection on the trivial line bundle over "A"1 (as an algebraic vector bundle with flat connection), because its solutions do not have moderate growth at ∞. This shows the need to restrict to flat connections with regular singularities in the Riemann–Hilbert correspondence. On the other hand, if we work with holomorphic (rather than algebraic) vector bundles with flat connection on a noncompact complex manifold such as "A"1 = C, then the notion of regular singularities is not defined. A much more elementary theorem than the Riemann–Hilbert correspondence states that flat connections on holomorphic vector bundles are determined up to isomorphism by their monodromy. In characteristic "p". For schemes in characteristic "p"&gt;0, (later developed further under less restrictive assumptions in ) establish a Riemann-Hilbert correspondence that asserts in particular that étale cohomology of étale sheaves with Z/"p"-coefficients can be computed in terms of the action of the Frobenius endomorphism on coherent cohomology. More generally, there are equivalences of categories between constructible (resp. perverse) étale Z/"p"-sheaves and left (resp. right) modules with a Frobenius (resp. Cartier) action. This can be regarded as the positive characteristic analogue of the classical theory, where one can find a similar interplay of constructive vs. perverse t-structures.
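The monodromy in the first example can be illustrated numerically: integrating f' = (a/z)f once around the origin multiplies the solution by exp(2*pi*i*a). The following sketch uses a plain Euler integration and is purely illustrative; the value of a is chosen arbitrarily.

import cmath
import math

a = 0.3                     # non-integer exponent, so the monodromy is nontrivial
steps = 200_000
f = 1.0 + 0.0j              # value of the local solution z**a at z = 1

# Along the loop z(t) = exp(i*t), the equation f' = (a/z) f becomes df/dt = i*a*f.
dt = 2 * math.pi / steps
for _ in range(steps):
    f += 1j * a * f * dt

print(f)                               # approximately the predicted factor
print(cmath.exp(2j * math.pi * a))     # exp(2*pi*i*a), the monodromy of the loop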
[ { "math_id": 0, "text": "\\frac{df}{dz} = \\frac{a}{z}f" }, { "math_id": 1, "text": "\\frac{df}{dz} = f" }, { "math_id": 2, "text": "\\frac{df}{dw} = -\\frac{1}{w^2}f." } ]
https://en.wikipedia.org/wiki?curid=10478956
104790
Integer partition
Decomposition of an integer as a sum of positive integers In number theory and combinatorics, a partition of a non-negative integer n, also called an integer partition, is a way of writing n as a sum of positive integers. Two sums that differ only in the order of their summands are considered the same partition. (If order matters, the sum becomes a composition.) For example, 4 can be partitioned in five distinct ways: 4 3 + 1 2 + 2 2 + 1 + 1 1 + 1 + 1 + 1 The only partition of zero is the empty sum, having no parts. The order-dependent composition 1 + 3 is the same partition as 3 + 1, and the two distinct compositions 1 + 2 + 1 and 1 + 1 + 2 represent the same partition as 2 + 1 + 1. An individual summand in a partition is called a part. The number of partitions of n is given by the partition function "p"("n"). So "p"(4) = 5. The notation "λ" ⊢ "n" means that λ is a partition of n. Partitions can be graphically visualized with Young diagrams or Ferrers diagrams. They occur in a number of branches of mathematics and physics, including the study of symmetric polynomials and of the symmetric group and in group representation theory in general. Examples. The seven partitions of 5 are Some authors treat a partition as a decreasing sequence of summands, rather than an expression with plus signs. For example, the partition 2 + 2 + 1 might instead be written as the tuple (2, 2, 1) or in the even more compact form (22, 1) where the superscript indicates the number of repetitions of a part. This multiplicity notation for a partition can be written alternatively as formula_0, where "m"1 is the number of 1's, "m"2 is the number of 2's, etc. (Components with "m""i" 0 may be omitted.) For example, in this notation, the partitions of 5 are written formula_1, and formula_2. Diagrammatic representations of partitions. There are two common diagrammatic methods to represent partitions: as Ferrers diagrams, named after Norman Macleod Ferrers, and as Young diagrams, named after Alfred Young. Both have several possible conventions; here, we use "English notation", with diagrams aligned in the upper-left corner. Ferrers diagram. The partition 6 + 4 + 3 + 1 of the number 14 can be represented by the following diagram: The 14 circles are lined up in 4 rows, each having the size of a part of the partition. The diagrams for the 5 partitions of the number 4 are shown below: Young diagram. An alternative visual representation of an integer partition is its "Young diagram" (often also called a Ferrers diagram). Rather than representing a partition with dots, as in the Ferrers diagram, the Young diagram uses boxes or squares. Thus, the Young diagram for the partition 5 + 4 + 1 is while the Ferrers diagram for the same partition is While this seemingly trivial variation does not appear worthy of separate mention, Young diagrams turn out to be extremely useful in the study of symmetric functions and group representation theory: filling the boxes of Young diagrams with numbers (or sometimes more complicated objects) obeying various rules leads to a family of objects called Young tableaux, and these tableaux have combinatorial and representation-theoretic significance. As a type of shape made by adjacent squares joined together, Young diagrams are a special kind of polyomino. Partition function. The partition function formula_3 counts the partitions of a non-negative integer formula_4. For instance, formula_5 because the integer formula_6 has the five partitions formula_7, formula_8, formula_9, formula_10, and formula_6. 
The values of this function for formula_11 are: 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176, 231, 297, 385, 490, 627, 792, 1002, 1255, 1575, 1958, 2436, 3010, 3718, 4565, 5604, ... (sequence in the OEIS). The generating function of formula_12 is formula_13 No closed-form expression for the partition function is known, but it has both asymptotic expansions that accurately approximate it and recurrence relations by which it can be calculated exactly. It grows as an exponential function of the square root of its argument., as follows: formula_14 as formula_15 In 1937, Hans Rademacher found a way to represent the partition function formula_3 by the convergent series formula_16 where formula_17 and formula_18 is the Dedekind sum. The multiplicative inverse of its generating function is the Euler function; by Euler's pentagonal number theorem this function is an alternating sum of pentagonal number powers of its argument. formula_19 Srinivasa Ramanujan discovered that the partition function has nontrivial patterns in modular arithmetic, now known as Ramanujan's congruences. For instance, whenever the decimal representation of formula_4 ends in the digit 4 or 9, the number of partitions of formula_4 will be divisible by 5. Restricted partitions. In both combinatorics and number theory, families of partitions subject to various restrictions are often studied. This section surveys a few such restrictions. Conjugate and self-conjugate partitions. If we flip the diagram of the partition 6 + 4 + 3 + 1 along its main diagonal, we obtain another partition of 14: By turning the rows into columns, we obtain the partition 4 + 3 + 3 + 2 + 1 + 1 of the number 14. Such partitions are said to be "conjugate" of one another. In the case of the number 4, partitions 4 and 1 + 1 + 1 + 1 are conjugate pairs, and partitions 3 + 1 and 2 + 1 + 1 are conjugate of each other. Of particular interest are partitions, such as 2 + 2, which have themselves as conjugate. Such partitions are said to be "self-conjugate". Claim: The number of self-conjugate partitions is the same as the number of partitions with distinct odd parts. Proof (outline): The crucial observation is that every odd part can be "folded" in the middle to form a self-conjugate diagram: One can then obtain a bijection between the set of partitions with distinct odd parts and the set of self-conjugate partitions, as illustrated by the following example: Odd parts and distinct parts. Among the 22 partitions of the number 8, there are 6 that contain only "odd parts": Alternatively, we could count partitions in which no number occurs more than once. Such a partition is called a "partition with distinct parts". If we count the partitions of 8 with distinct parts, we also obtain 6: This is a general property. For each positive number, the number of partitions with odd parts equals the number of partitions with distinct parts, denoted by "q"("n"). This result was proved by Leonhard Euler in 1748 and later was generalized as Glaisher's theorem. For every type of restricted partition there is a corresponding function for the number of partitions satisfying the given restriction. An important example is "q"("n") (partitions into distinct parts). The first few values of "q"("n") are (starting with "q"(0)=1): 1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10, ... (sequence in the OEIS). 
The generating function for "q"("n") is given by formula_20 The pentagonal number theorem gives a recurrence for "q": "q"("k") = "a""k" + "q"("k" − 1) + "q"("k" − 2) − "q"("k" − 5) − "q"("k" − 7) + "q"("k" − 12) + "q"("k" − 15) − "q"("k" − 22) − ... where "a""k" is (−1)"m" if "k" = 3"m"2 − "m" for some integer "m" and is 0 otherwise. Restricted part size or number of parts. By taking conjugates, the number "p""k"("n") of partitions of "n" into exactly "k" parts is equal to the number of partitions of "n" in which the largest part has size "k". The function "p""k"("n") satisfies the recurrence "p""k"("n") = "p""k"("n" − "k") + "p""k"−1("n" − 1) with initial values "p"0(0) = 1 and "p""k"("n") = 0 if "n" ≤ 0 or "k" ≤ 0 and "n" and "k" are not both zero. One recovers the function "p"("n") by formula_21 One possible generating function for such partitions, taking "k" fixed and "n" variable, is formula_22 More generally, if "T" is a set of positive integers then the number of partitions of "n", all of whose parts belong to "T", has generating function formula_23 This can be used to solve change-making problems (where the set "T" specifies the available coins). As two particular cases, one has that the number of partitions of "n" in which all parts are 1 or 2 (or, equivalently, the number of partitions of "n" into 1 or 2 parts) is formula_24 and the number of partitions of "n" in which all parts are 1, 2 or 3 (or, equivalently, the number of partitions of "n" into at most three parts) is the nearest integer to ("n" + 3)2 / 12. Partitions in a rectangle and Gaussian binomial coefficients. One may also simultaneously limit the number and size of the parts. Let "p"("N", "M"; "n") denote the number of partitions of n with at most M parts, each of size at most N. Equivalently, these are the partitions whose Young diagram fits inside an "M" × "N" rectangle. There is a recurrence relation formula_25 obtained by observing that formula_26 counts the partitions of n into exactly M parts of size at most N, and subtracting 1 from each part of such a partition yields a partition of "n" − "M" into at most M parts. The Gaussian binomial coefficient is defined as: formula_27 The Gaussian binomial coefficient is related to the generating function of "p"("N", "M"; "n") by the equality formula_28 Rank and Durfee square. The "rank" of a partition is the largest number "k" such that the partition contains at least "k" parts of size at least "k". For example, the partition 4 + 3 + 3 + 2 + 1 + 1 has rank 3 because it contains 3 parts that are ≥ 3, but does not contain 4 parts that are ≥ 4. In the Ferrers diagram or Young diagram of a partition of rank "r", the "r" × "r" square of entries in the upper-left is known as the Durfee square: The Durfee square has applications within combinatorics in the proofs of various partition identities. It also has some practical significance in the form of the h-index. A different statistic is also sometimes called the rank of a partition (or Dyson rank), namely, the difference formula_29 for a partition of "k" parts with largest part formula_30. This statistic (which is unrelated to the one described above) appears in the study of Ramanujan congruences. Young's lattice. There is a natural partial order on partitions given by inclusion of Young diagrams. This partially ordered set is known as "Young's lattice". 
The lattice was originally defined in the context of representation theory, where it is used to describe the irreducible representations of symmetric groups "S""n" for all "n", together with their branching properties, in characteristic zero. It also has received significant study for its purely combinatorial properties; notably, it is the motivating example of a differential poset. Random partitions. There is a deep theory of random partitions chosen according to the uniform probability distribution on the symmetric group via the Robinson–Schensted correspondence. In 1977, Logan and Shepp, as well as Vershik and Kerov, showed that the Young diagram of a typical large partition becomes asympototically close to the graph of a certain analytic function minimizing a certain functional. In 1988, Baik, Deift and Johansson extended these results to determine the distribution of the longest increasing subsequence of a random permutation in terms of the Tracy–Widom distribution. Okounkov related these results to the combinatorics of Riemann surfaces and representation theory. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
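The recurrence for p_k(n) in the section on restricted part size, together with Euler's theorem on odd versus distinct parts, can be checked directly with a short program. The following Python sketch is illustrative only; the helper names are the editor's own.

from functools import lru_cache

@lru_cache(maxsize=None)
def parts(n, k):
    # Partitions of n into exactly k parts: p_k(n) = p_k(n - k) + p_{k-1}(n - 1).
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0:
        return 0
    return parts(n - k, k) + parts(n - 1, k - 1)

def p(n):
    # Unrestricted partition function: sum over the possible numbers of parts.
    return sum(parts(n, k) for k in range(n + 1))

print([p(n) for n in range(10)])  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30]

def partitions(n, max_part=None):
    # Generate all partitions of n as weakly decreasing tuples.
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

odd = [q for q in partitions(8) if all(part % 2 == 1 for part in q)]
distinct = [q for q in partitions(8) if len(set(q)) == len(q)]
print(len(odd), len(distinct))    # 6 6, as in the example above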
[ { "math_id": 0, "text": "1^{m_1}2^{m_2}3^{m_3}\\cdots" }, { "math_id": 1, "text": "5^1, 1^1 4^1, 2^1 3^1, 1^2 3^1, 1^1 2^2, 1^3 2^1" }, { "math_id": 2, "text": "1^5" }, { "math_id": 3, "text": "p(n)" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "p(4)=5" }, { "math_id": 6, "text": "4" }, { "math_id": 7, "text": "1+1+1+1" }, { "math_id": 8, "text": "1+1+2" }, { "math_id": 9, "text": "1+3" }, { "math_id": 10, "text": "2+2" }, { "math_id": 11, "text": "n=0,1,2,\\dots" }, { "math_id": 12, "text": "p" }, { "math_id": 13, "text": "\\sum_{n=0}^{\\infty}p(n)q^n=\\prod_{j=1}^{\\infty}\\sum_{i=0}^{\\infty}q^{ji}=\\prod_{j=1}^{\\infty}(1-q^j)^{-1}." }, { "math_id": 14, "text": "p(n) \\sim \\frac {1} {4n\\sqrt3} \\exp\\left({\\pi \\sqrt {\\frac{2n}{3}}}\\right)" }, { "math_id": 15, "text": "n \\to \\infty" }, { "math_id": 16, "text": "p(n) = \\frac{1}{\\pi \\sqrt{2}} \\sum_{k=1}^\\infty A_k(n)\\sqrt{k} \\cdot\n\\frac{d}{dn} \\left({\n \\frac {1} {\\sqrt{n-\\frac{1}{24}}}\n \\sinh \\left[ {\\frac{\\pi}{k}\n \\sqrt{\\frac{2}{3}\\left(n-\\frac{1}{24}\\right)}}\\,\\,\\,\\right]\n}\\right)" }, { "math_id": 17, "text": "A_k(n) = \\sum_{0 \\le m < k, \\; (m, k) = 1}\ne^{ \\pi i \\left( s(m, k) - 2 nm/k \\right) }." }, { "math_id": 18, "text": "s(m,k)" }, { "math_id": 19, "text": "p(n)=p(n-1)+p(n-2)-p(n-5)-p(n-7)+\\cdots" }, { "math_id": 20, "text": "\\sum_{n=0}^\\infty q(n)x^n = \\prod_{k=1}^\\infty (1+x^k) = \\prod_{k=1}^\\infty \\frac {1}{1-x^{2k-1}} ." }, { "math_id": 21, "text": "\np(n) = \\sum_{k = 0}^n p_k(n).\n" }, { "math_id": 22, "text": " \\sum_{n \\geq 0} p_k(n) x^n = x^k\\prod_{i = 1}^k \\frac{1}{1 - x^i}." }, { "math_id": 23, "text": "\\prod_{t \\in T}(1-x^t)^{-1}." }, { "math_id": 24, "text": "\\left \\lfloor \\frac{n}{2}+1 \\right \\rfloor ," }, { "math_id": 25, "text": "p(N,M;n) = p(N,M-1;n) + p(N-1,M;n-M)" }, { "math_id": 26, "text": "p(N,M;n) - p(N,M-1;n)" }, { "math_id": 27, "text": "{k+\\ell \\choose \\ell}_q = {k+\\ell \\choose k}_q = \\frac{\\prod^{k+\\ell}_{j=1}(1-q^j)}{\\prod^{k}_{j=1}(1-q^j)\\prod^{\\ell}_{j=1}(1-q^j)}." }, { "math_id": 28, "text": "\\sum^{MN}_{n=0}p(N,M;n)q^n = {M+N \\choose M}_q." }, { "math_id": 29, "text": "\\lambda_k - k" }, { "math_id": 30, "text": "\\lambda_k" } ]
https://en.wikipedia.org/wiki?curid=104790
1047942
Muirhead's inequality
Mathematical inequality In mathematics, Muirhead's inequality, named after Robert Franklin Muirhead, also known as the "bunching" method, generalizes the inequality of arithmetic and geometric means. Preliminary definitions. "a"-mean. For any real vector formula_0 define the ""a"-mean" ["a"] of positive real numbers "x"1, ..., "x""n" by formula_1 where the sum extends over all permutations σ of { 1, ..., "n" }. When the elements of "a" are nonnegative integers, the "a"-mean can be equivalently defined via the monomial symmetric polynomial formula_2 as formula_3 where ℓ is the number of distinct elements in "a", and "k"1, ..., "k"ℓ are their multiplicities. Notice that the "a"-mean as defined above only has the usual properties of a mean (e.g., if the mean of equal numbers is equal to them) if formula_4. In the general case, one can consider instead formula_5, which is called a Muirhead mean. Doubly stochastic matrices. An "n" × "n" matrix "P" is "doubly stochastic" precisely if both "P" and its transpose "P"T are stochastic matrices. A "stochastic matrix" is a square matrix of nonnegative real entries in which the sum of the entries in each column is 1. Thus, a doubly stochastic matrix is a square matrix of nonnegative real entries in which the sum of the entries in each row and the sum of the entries in each column is 1. Statement. Muirhead's inequality states that ["a"] ≤ ["b"] for all "x" such that "x""i" &gt; 0 for every "i" ∈ { 1, ..., "n" } if and only if there is some doubly stochastic matrix "P" for which "a" = "Pb". Furthermore, in that case we have ["a"] = ["b"] if and only if "a" = "b" or all "x""i" are equal. The latter condition can be expressed in several equivalent ways; one of them is given below. The proof makes use of the fact that every doubly stochastic matrix is a weighted average of permutation matrices (Birkhoff-von Neumann theorem). Another equivalent condition. Because of the symmetry of the sum, no generality is lost by sorting the exponents into decreasing order: formula_6 formula_7 Then the existence of a doubly stochastic matrix "P" such that "a" = "Pb" is equivalent to the following system of inequalities: formula_8 The sequence formula_9 is said to majorize the sequence formula_10. Symmetric sum notation. It is convenient to use a special notation for the sums. A success in reducing an inequality in this form means that the only condition for testing it is to verify whether one exponent sequence (formula_11) majorizes the other one. formula_12 This notation requires developing every permutation, developing an expression made of "n"! monomials, for instance: formula_13 Examples. Arithmetic-geometric mean inequality. Let formula_14 and formula_15 We have formula_16 Then ["aA"] ≥ ["aG"], which is formula_17 yielding the inequality. Other examples. We seek to prove that "x"2 + "y"2 ≥ 2"xy" by using bunching (Muirhead's inequality). We transform it in the symmetric-sum notation: formula_18 The sequence (2, 0) majorizes the sequence (1, 1), thus the inequality holds by bunching. Similarly, we can prove the inequality formula_19 by writing it using the symmetric-sum notation as formula_20 which is the same as formula_21 Since the sequence (3, 0, 0) majorizes the sequence (1, 1, 1), the inequality holds by bunching. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
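The bunching argument in the examples above can be checked numerically. The sketch below is illustrative (the helper names are the editor's own); it implements the symmetric-sum notation and the majorization test for the example (3, 0, 0) versus (1, 1, 1).

import math
import random
from itertools import permutations

def sym_sum(exponents, xs):
    # Symmetric sum: one monomial for every permutation of the exponent tuple.
    return sum(
        math.prod(x ** e for x, e in zip(xs, sigma))
        for sigma in permutations(exponents)
    )

def majorizes(a, b):
    # a majorizes b: equal totals and dominating partial sums in decreasing order.
    a, b = sorted(a, reverse=True), sorted(b, reverse=True)
    return sum(a) == sum(b) and all(
        sum(a[: i + 1]) >= sum(b[: i + 1]) for i in range(len(a))
    )

a, b = (3, 0, 0), (1, 1, 1)
print(majorizes(a, b))          # True
for _ in range(1000):
    xs = [random.uniform(0.1, 10.0) for _ in range(3)]
    assert sym_sum(a, xs) >= sym_sum(b, xs)
print("2x^3 + 2y^3 + 2z^3 >= 6xyz held on all samples")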
[ { "math_id": 0, "text": "a=(a_1,\\dots,a_n)" }, { "math_id": 1, "text": "[a]=\\frac{1}{n!}\\sum_\\sigma x_{\\sigma_1}^{a_1}\\cdots x_{\\sigma_n}^{a_n}," }, { "math_id": 2, "text": "m_a(x_1,\\dots,x_n)" }, { "math_id": 3, "text": "[a] = \\frac{k_1!\\cdots k_l!}{n!} m_a(x_1,\\dots,x_n)," }, { "math_id": 4, "text": "a_1+\\cdots+a_n=1" }, { "math_id": 5, "text": "[a]^{1/(a_1+\\cdots+a_n)}" }, { "math_id": 6, "text": "a_1 \\geq a_2 \\geq \\cdots \\geq a_n" }, { "math_id": 7, "text": "b_1 \\geq b_2 \\geq \\cdots \\geq b_n." }, { "math_id": 8, "text": "\n\\begin{align}\na_1 & \\leq b_1 \\\\\na_1+a_2 & \\leq b_1+b_2 \\\\\na_1+a_2+a_3 & \\leq b_1+b_2+b_3 \\\\\n& \\,\\,\\, \\vdots \\\\\na_1+\\cdots +a_{n-1} & \\leq b_1+\\cdots+b_{n-1} \\\\\na_1+\\cdots +a_n & = b_1+\\cdots+b_n.\n\\end{align}\n" }, { "math_id": 9, "text": "b_1, \\ldots, b_n" }, { "math_id": 10, "text": "a_1, \\ldots, a_n" }, { "math_id": 11, "text": "\\alpha_1, \\ldots, \\alpha_n" }, { "math_id": 12, "text": "\\sum_\\text{sym} x_1^{\\alpha_1} \\cdots x_n^{\\alpha_n}" }, { "math_id": 13, "text": "\\begin{align}\n\\sum_\\text{sym} x^3 y^2 z^0 &= x^3 y^2 z^0 + x^3 z^2 y^0 + y^3 x^2 z^0 + y^3 z^2 x^0 + z^3 x^2 y^0 + z^3 y^2 x^0 \\\\\n&= x^3 y^2 + x^3 z^2 + y^3 x^2 + y^3 z^2 + z^3 x^2 + z^3 y^2\n\\end{align}" }, { "math_id": 14, "text": "a_G = \\left( \\frac 1 n , \\ldots , \\frac 1 n \\right)" }, { "math_id": 15, "text": "a_A = ( 1 , 0, 0, \\ldots , 0 )." }, { "math_id": 16, "text": "\n\\begin{align}\na_{A1} = 1 & > a_{G1} = \\frac 1 n, \\\\\na_{A1} + a_{A2} = 1 & > a_{G1} + a_{G2} = \\frac 2 n, \\\\\n& \\,\\,\\, \\vdots \\\\\na_{A1} + \\cdots + a_{An} & = a_{G1} + \\cdots + a_{Gn} = 1.\n\\end{align}\n" }, { "math_id": 17, "text": "\\frac 1 {n!} (x_1^1 \\cdot x_2^0 \\cdots x_n^0 + \\cdots + x_1^0 \\cdots x_n^1) (n-1)! \\geq \\frac 1 {n!} (x_1 \\cdot \\cdots \\cdot x_n)^{1/n} n!" }, { "math_id": 18, "text": "\\sum_ \\mathrm{sym} x^2 y^0 \\ge \\sum_\\mathrm{sym} x^1 y^1." }, { "math_id": 19, "text": "x^3+y^3+z^3 \\ge 3 x y z" }, { "math_id": 20, "text": "\\sum_ \\mathrm{sym} x^3 y^0 z^0 \\ge \\sum_\\mathrm{sym} x^1 y^1 z^1, " }, { "math_id": 21, "text": " 2 x^3 + 2 y^3 + 2 z^3 \\ge 6 x y z. " } ]
https://en.wikipedia.org/wiki?curid=1047942
10483232
Allen's interval algebra
Calculus for temporal reasoning (relating to time instances) of events Allen's interval algebra is a calculus for temporal reasoning that was introduced by James F. Allen in 1983. The calculus defines possible relations between time intervals and provides a composition table that can be used as a basis for reasoning about temporal descriptions of events. Formal description. Relations. The following 13 base relations capture the possible relations between two intervals. Using this calculus, given facts can be formalized and then used for automatic reasoning. Relations between intervals are formalized as sets of base relations. The sentences "During dinner, Peter reads the newspaper. Afterwards, he goes to bed." are formalized in Allen's interval algebra as follows: formula_0 formula_1 In general, the number of different relations between "n" intervals, starting with "n" = 0, is 1, 1, 13, 409, 23917, 2244361... OEIS A055203. The special case shown above is for "n" = 2. Composition of relations between intervals. For reasoning about the relations between temporal intervals, Allen's interval algebra provides a composition table. Given the relation between formula_2 and formula_3 and the relation between formula_3 and formula_4, the composition table allows one to draw conclusions about the relation between formula_2 and formula_4. Together with a converse operation, this turns Allen's interval algebra into a relation algebra. For the example, one can infer formula_5. Extensions. Allen's interval algebra can be used for the description of both temporal intervals and spatial configurations. For the latter use, the relations are interpreted as describing the relative position of spatial objects. This also works for three-dimensional objects by listing the relation for each coordinate separately. The study of overlapping markup uses a similar algebra; its models have more variations depending on whether endpoints of document structures are permitted to be truly co-located, or merely tangent. References. <templatestyles src="Reflist/styles.css" />
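The 13 base relations are easy to compute for concrete intervals. The sketch below is an illustration only (it is not part of the article, and the function name and relation symbols used here are a common but assumed convention): it classifies which base relation holds between two intervals given as (start, end) pairs, and reproduces the newspaper/dinner/bed example with made-up clock times.

def base_relation(x, y):
    """Return Allen's base relation between intervals x and y.

    Intervals are (start, end) pairs with start < end. Symbols:
    '<' precedes, 'm' meets, 'o' overlaps, 's' starts, 'd' during,
    'f' finishes, '=' equals; an 'i' suffix (or '>') marks the inverse.
    """
    (xs, xe), (ys, ye) = x, y
    if xe < ys: return '<'
    if ye < xs: return '>'
    if xe == ys: return 'm'
    if ye == xs: return 'mi'
    if xs == ys and xe == ye: return '='
    if xs == ys: return 's' if xe < ye else 'si'
    if xe == ye: return 'f' if xs > ys else 'fi'
    if ys < xs and xe < ye: return 'd'
    if xs < ys and ye < xe: return 'di'
    return 'o' if xs < ys else 'oi'

# The article's example with hypothetical times: the newspaper interval lies
# strictly inside dinner ('d'), and dinner ends before bed starts ('<').
dinner, newspaper, bed = (18, 19), (18.25, 18.75), (20, 23)
print(base_relation(newspaper, dinner))  # 'd'
print(base_relation(dinner, bed))        # '<'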
[ { "math_id": 0, "text": "\\mbox{newspaper } \\mathbf{\\{ \\operatorname{d} \\}} \\mbox{ dinner}" }, { "math_id": 1, "text": "\\mbox{dinner } \\mathbf{\\{ \\operatorname{<} \\}} \\mbox{ bed}" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "Y" }, { "math_id": 4, "text": "Z" }, { "math_id": 5, "text": "\\mbox{newspaper } \\mathbf{\\{ \\operatorname{<}, \\operatorname{m} \\}} \\mbox{ bed}" } ]
https://en.wikipedia.org/wiki?curid=10483232
1048518
Supramolecular chemistry
Branch of chemistry Supramolecular chemistry refers to the branch of chemistry concerning chemical systems composed of a discrete number of molecules. The strength of the forces responsible for spatial organization of the system range from weak intermolecular forces, electrostatic charge, or hydrogen bonding to strong covalent bonding, provided that the electronic coupling strength remains small relative to the energy parameters of the component. While traditional chemistry concentrates on the covalent bond, supramolecular chemistry examines the weaker and reversible non-covalent interactions between molecules. These forces include hydrogen bonding, metal coordination, hydrophobic forces, van der Waals forces, pi–pi interactions and electrostatic effects. Important concepts advanced by supramolecular chemistry include molecular self-assembly, molecular folding, molecular recognition, host–guest chemistry, mechanically-interlocked molecular architectures, and dynamic covalent chemistry. The study of non-covalent interactions is crucial to understanding many biological processes that rely on these forces for structure and function. Biological systems are often the inspiration for supramolecular research. History. The existence of intermolecular forces was first postulated by Johannes Diderik van der Waals in 1873. However, Nobel laureate Hermann Emil Fischer developed supramolecular chemistry's philosophical roots. In 1894, Fischer suggested that enzyme–substrate interactions take the form of a "lock and key", the fundamental principles of molecular recognition and host–guest chemistry. In the early twentieth century non-covalent bonds were understood in gradually more detail, with the hydrogen bond being described by Latimer and Rodebush in 1920. The use of these principles led to an increasing understanding of protein structure and other biological processes. For instance, the important breakthrough that allowed the elucidation of the double helical structure of DNA occurred when it was realized that there are two separate strands of nucleotides connected through hydrogen bonds. The use of non-covalent bonds is essential to replication because they allow the strands to be separated and used to template new double stranded DNA. Concomitantly, chemists began to recognize and study synthetic structures based on non-covalent interactions, such as micelles and microemulsions. Eventually, chemists were able to take these concepts and apply them to synthetic systems. The breakthrough came in the 1960s with the synthesis of the crown ethers by Charles J. Pedersen. Following this work, other researchers such as Donald J. Cram, Jean-Marie Lehn and Fritz Vögtle became active in synthesizing shape- and ion-selective receptors, and throughout the 1980s research in the area gathered a rapid pace with concepts such as mechanically interlocked molecular architectures emerging. The importance of supramolecular chemistry was established by the 1987 Nobel Prize for Chemistry which was awarded to Donald J. Cram, Jean-Marie Lehn, and Charles J. Pedersen in recognition of their work in this area. The development of selective "host–guest" complexes in particular, in which a host molecule recognizes and selectively binds a certain guest, was cited as an important contribution. 
In the 1990s, supramolecular chemistry became even more sophisticated, with researchers such as James Fraser Stoddart developing molecular machinery and highly complex self-assembled structures, and Itamar Willner developing sensors and methods of electronic and biological interfacing. During this period, electrochemical and photochemical motifs became integrated into supramolecular systems in order to increase functionality, research into synthetic self-replicating systems began, and work on molecular information-processing devices started. The emerging science of nanotechnology also had a strong influence on the subject, with building blocks such as fullerenes, nanoparticles, and dendrimers becoming involved in synthetic systems. Control. Thermodynamics. Supramolecular complexes are formed by non-covalent interactions between two chemical moieties, which can be described as a host and a guest. Most commonly, the interacting species are held together by hydrogen bonds. The definition excludes compounds formed by electrostatic interactions, which are called ion pairs. In solution, the host H, guest G, and complexes HpGq will be in equilibrium with each other. In the simplest case, p = q = 1, the equilibrium can be written as formula_0 The value of the equilibrium constant, K, for this reaction can, in principle, be determined by any of several experimental techniques. The Gibbs free energy change, formula_1, for this reaction is the sum of an enthalpy term, formula_2, and an entropy term, formula_3: formula_4 Both formula_1 and formula_5 values can be determined at a given temperature, formula_6, by means of isothermal titration calorimetry. For an example, see Sessler "et al." In that example a macrocyclic ring with 4 protonated nitrogen atoms encapsulates a chloride anion; illustrations of ITC data and a titration curve are reproduced in Steed & Atwood (pp. 15–16). The value of the equilibrium constant and the stoichiometry of the species formed were found to be strongly solvent-dependent. In nitromethane solution, values of ΔH = 8.55 kJ mol−1 and ΔS = −9.1 J K−1 mol−1 were obtained. Environment. The molecular environment around a supramolecular system is also of prime importance to its operation and stability. Many solvents have strong hydrogen bonding, electrostatic, and charge-transfer capabilities, and are therefore able to become involved in complex equilibria with the system, even breaking complexes completely. For this reason, the choice of solvent can be critical. Concepts. Molecular self-assembly. Molecular self-assembly is the construction of systems without guidance or management from an outside source (other than to provide a suitable environment). The molecules are directed to assemble through non-covalent interactions. Self-assembly may be subdivided into intermolecular self-assembly (to form a supramolecular assembly), and intramolecular self-assembly (or folding, as demonstrated by foldamers and polypeptides). Molecular self-assembly also allows the construction of larger structures such as micelles, membranes, vesicles, and liquid crystals, and is important to crystal engineering. Molecular recognition and complexation. Molecular recognition is the specific binding of a guest molecule to a complementary host molecule to form a host–guest complex. Often, the definition of which species is the "host" and which is the "guest" is arbitrary. The molecules are able to identify each other using non-covalent interactions.
Key applications of this field are the construction of molecular sensors and catalysis. Template-directed synthesis. Molecular recognition and self-assembly may be used with reactive species in order to pre-organize a system for a chemical reaction (to form one or more covalent bonds). It may be considered a special case of supramolecular catalysis. Non-covalent bonds between the reactants and a "template" hold the reactive sites of the reactants close together, facilitating the desired chemistry. This technique is particularly useful for situations where the desired reaction conformation is thermodynamically or kinetically unlikely, such as in the preparation of large macrocycles. This pre-organization also serves purposes such as minimizing side reactions, lowering the activation energy of the reaction, and producing desired stereochemistry. After the reaction has taken place, the template may remain in place, be forcibly removed, or may be "automatically" decomplexed on account of the different recognition properties of the reaction product. The template may be as simple as a single metal ion or may be extremely complex. Mechanically interlocked molecular architectures. Mechanically interlocked molecular architectures consist of molecules that are linked only as a consequence of their topology. Some non-covalent interactions may exist between the different components (often those that were used in the construction of the system), but covalent bonds do not. Supramolecular chemistry, and template-directed synthesis in particular, is key to the efficient synthesis of the compounds. Examples of mechanically interlocked molecular architectures include catenanes, rotaxanes, molecular knots, molecular Borromean rings and ravels. Dynamic covalent chemistry. In dynamic covalent chemistry covalent bonds are broken and formed in a reversible reaction under thermodynamic control. While covalent bonds are key to the process, the system is directed by non-covalent forces to form the lowest energy structures. Biomimetics. Many synthetic supramolecular systems are designed to copy functions of biological systems. These biomimetic architectures can be used to learn about both the biological model and the synthetic implementation. Examples include photoelectrochemical systems, catalytic systems, protein design and self-replication. Imprinting. Molecular imprinting describes a process by which a host is constructed from small molecules using a suitable molecular species as a template. After construction, the template is removed leaving only the host. The template for host construction may be subtly different from the guest that the finished host binds to. In its simplest form, imprinting uses only steric interactions, but more complex systems also incorporate hydrogen bonding and other interactions to improve binding strength and specificity. Molecular machinery. Molecular machines are molecules or molecular assemblies that can perform functions such as linear or rotational movement, switching, and entrapment. These devices exist at the boundary between supramolecular chemistry and nanotechnology, and prototypes have been demonstrated using supramolecular concepts. Jean-Pierre Sauvage, Sir J. Fraser Stoddart and Bernard L. Feringa shared the 2016 Nobel Prize in Chemistry for the 'design and synthesis of molecular machines'. Building blocks. Supramolecular systems are rarely designed from first principles. 
Rather, chemists have a range of well-studied structural and functional building blocks that they are able to use to build up larger functional architectures. Many of these exist as whole families of similar units, from which the analog with the exact desired properties can be chosen. Macrocycles. Macrocycles are very useful in supramolecular chemistry, as they provide whole cavities that can completely surround guest molecules and may be chemically modified to fine-tune their properties. Structural units. Many supramolecular systems require their components to have suitable spacing and conformations relative to each other, and therefore easily employed structural units are required. Applications. Materials technology. Supramolecular chemistry has found many applications, in particular molecular self-assembly processes have been applied to the development of new materials. Large structures can be readily accessed using bottom-up synthesis as they are composed of small molecules requiring fewer steps to synthesize. Thus most of the bottom-up approaches to nanotechnology are based on supramolecular chemistry. Many smart materials are based on molecular recognition. Catalysis. A major application of supramolecular chemistry is the design and understanding of catalysts and catalysis. Non-covalent interactions are extremely important in catalysis, binding reactants into conformations suitable for reaction and lowering the transition state energy of reaction. Template-directed synthesis is a special case of supramolecular catalysis. Encapsulation systems such as micelles, dendrimers, and cavitands are also used in catalysis to create microenvironments suitable for reactions (or steps in reactions) to progress that is not possible to use on a macroscopic scale. Medicine. Design based on supramolecular chemistry has led to numerous applications in the creation of functional biomaterials and therapeutics. Supramolecular biomaterials afford a number of modular and generalizable platforms with tunable mechanical, chemical and biological properties. These include systems based on supramolecular assembly of peptides, host–guest macrocycles, high-affinity hydrogen bonding, and metal–ligand interactions. A supramolecular approach has been used extensively to create artificial ion channels for the transport of sodium and potassium ions into and out of cells. Supramolecular chemistry is also important to the development of new pharmaceutical therapies by understanding the interactions at a drug binding site. The area of drug delivery has also made critical advances as a result of supramolecular chemistry providing encapsulation and targeted release mechanisms. In addition, supramolecular systems have been designed to disrupt protein–protein interactions that are important to cellular function. Data storage and processing. Supramolecular chemistry has been used to demonstrate computation functions on a molecular scale. In many cases, photonic or chemical signals have been used in these components, but electrical interfacing of these units has also been shown by supramolecular signal transduction devices. Data storage has been accomplished by the use of molecular switches with photochromic and photoisomerizable units, by electrochromic and redox-switchable units, and even by molecular motion. Synthetic molecular logic gates have been demonstrated on a conceptual level. Even full-scale computations have been achieved by semi-synthetic DNA computers. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
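Returning to the thermodynamics section above, the quantities involved in characterizing a 1:1 host–guest equilibrium can be related with a short numerical sketch. This is not from the article: the association constant and enthalpy below are purely illustrative, the temperature of 298 K is an assumption, and the relation ΔG = −RT ln K is the standard textbook link between the equilibrium constant and the free energy, combined here with formula_4.

import math

R = 8.314  # gas constant, J K^-1 mol^-1
T = 298.0  # assumed temperature, K

# Hypothetical ITC-style results for a 1:1 host-guest equilibrium H + G <=> HG.
K = 1.0e5          # association constant (illustrative value only)
delta_H = -30.0e3  # measured enthalpy change, J mol^-1 (illustrative value only)

delta_G = -R * T * math.log(K)   # standard relation dG = -RT ln K
T_delta_S = delta_H - delta_G    # rearranging dG = dH - T dS
print(f"dG  = {delta_G / 1000:.1f} kJ/mol")
print(f"TdS = {T_delta_S / 1000:.1f} kJ/mol")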
[ { "math_id": 0, "text": "H + G \\leftrightharpoons HG" }, { "math_id": 1, "text": "\\Delta G" }, { "math_id": 2, "text": "\\Delta H" }, { "math_id": 3, "text": "T\\Delta S" }, { "math_id": 4, "text": "\\Delta G = \\Delta H -T\\Delta S" }, { "math_id": 5, "text": "\\Delta S" }, { "math_id": 6, "text": "T" } ]
https://en.wikipedia.org/wiki?curid=1048518
1048680
Algebraic normal form
Algebraic normal form, closely related to the Zhegalkin polynomial In Boolean algebra, the algebraic normal form (ANF), ring sum normal form (RSNF or RNF), "Zhegalkin normal form", or "Reed–Muller expansion" is a way of writing propositional logic formulas in one of three subforms: * the entire formula is purely true or false: formula_0 or formula_1; * one or more variables are combined into a term by formula_2 (AND), then one or more terms are combined by formula_3 (XOR), with no negations permitted, for example formula_4; * the previous subform with an additional purely true term, for example formula_5. Formulas written in ANF are also known as Zhegalkin polynomials and Positive Polarity (or Parity) Reed–Muller expressions (PPRM). Common uses. ANF is a canonical form, which means that two logically equivalent formulas will convert to the same ANF, easily showing whether two formulas are equivalent for automated theorem proving. Unlike other normal forms, it can be represented as a simple list of lists of variable names—conjunctive and disjunctive normal forms also require recording whether each variable is negated or not. Negation normal form is unsuitable for determining equivalence, since on negation normal forms, equivalence does not imply equality: a ∨ ¬a is not reduced to the same thing as 1, even though they are logically equivalent. Putting a formula into ANF also makes it easy to identify linear functions (used, for example, in linear-feedback shift registers): a linear function is one that is a sum of single literals. Properties of nonlinear-feedback shift registers can also be deduced from certain properties of the feedback function in ANF. Performing operations within algebraic normal form. There are straightforward ways to perform the standard boolean operations on ANF inputs in order to get ANF results. XOR (logical exclusive disjunction) is performed directly: (1 ⊕ x) ⊕ (1 ⊕ x ⊕ y) = 1 ⊕ x ⊕ 1 ⊕ x ⊕ y = 1 ⊕ 1 ⊕ x ⊕ x ⊕ y = y. NOT (logical negation) is XORing 1: ¬(1 ⊕ x ⊕ y) = 1 ⊕ (1 ⊕ x ⊕ y) = 1 ⊕ 1 ⊕ x ⊕ y = x ⊕ y. AND (logical conjunction) is distributed algebraically: (1 ⊕ x)(1 ⊕ x ⊕ y) = 1(1 ⊕ x ⊕ y) ⊕ x(1 ⊕ x ⊕ y) = (1 ⊕ x ⊕ y) ⊕ (x ⊕ x ⊕ xy) = 1 ⊕ x ⊕ x ⊕ x ⊕ y ⊕ xy = 1 ⊕ x ⊕ y ⊕ xy. OR (logical disjunction) uses either 1 ⊕ (1 ⊕ a)(1 ⊕ b) (easier when both operands have purely true terms) or a ⊕ b ⊕ ab (easier otherwise): (1 ⊕ x) + (1 ⊕ x ⊕ y) = 1 ⊕ (1 ⊕ 1 ⊕ x)(1 ⊕ 1 ⊕ x ⊕ y) = 1 ⊕ x(x ⊕ y) = 1 ⊕ x ⊕ xy. Converting to algebraic normal form. Each variable in a formula is already in pure ANF, so one only needs to perform the formula's boolean operations as shown above to get the entire formula into ANF. For example: x + (y ⋅ ¬z) = x + (y(1 ⊕ z)) = x + (y ⊕ yz) = x ⊕ (y ⊕ yz) ⊕ x(y ⊕ yz) = x ⊕ y ⊕ xy ⊕ yz ⊕ xyz. Formal representation. ANF is sometimes described in an equivalent way, as a XOR sum of conjunctions: f(x_1, x_2, …, x_n) = a_0 ⊕ a_1x_1 ⊕ a_2x_2 ⊕ ⋯ ⊕ a_nx_n ⊕ a_{1,2}x_1x_2 ⊕ ⋯ ⊕ a_{1,2,…,n}x_1x_2…x_n, where formula_6 fully describes formula_7. Recursively deriving multiargument Boolean functions. There are only four functions with one argument: * formula_8 * formula_9 * formula_10 * formula_11. To represent a function with multiple arguments one can use the following equality: formula_12, where * formula_13 * formula_14. Indeed, * if formula_15 then formula_16 and so formula_17; * if formula_18 then formula_19 and so formula_20. Since both formula_21 and formula_22 have fewer arguments than formula_7, it follows that using this process recursively we will finish with functions with one variable. For example, let us construct the ANF of formula_23 (logical or): * formula_24; * since formula_25 and formula_26, * it follows that formula_27; * by distribution, we get the final ANF: formula_28. References. <templatestyles src="Reflist/styles.css" />
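The operations and the recursive derivation above can be made concrete in code. The sketch below is not part of the article; the representation (an ANF expression as a set of terms, each term a frozenset of single-character variable names, the empty frozenset standing for 1) and all function names are choices made here purely for illustration.

# An ANF expression is a set of terms; each term is a frozenset of variable
# names, with the empty frozenset standing for the constant 1.
ONE = frozenset()

def xor(p, q):
    """XOR of two ANF expressions: terms appearing in both cancel."""
    return p ^ q

def and_(p, q):
    """AND of two ANF expressions: distribute, then cancel duplicate terms."""
    result = set()
    for a in p:
        for b in q:
            result ^= {a | b}
    return result

def not_(p):
    """NOT is XOR with the constant 1."""
    return xor(p, {ONE})

def or_(p, q):
    """OR via a xor b xor ab."""
    return xor(xor(p, q), and_(p, q))

def show(p):
    return " xor ".join(sorted("".join(sorted(t)) or "1" for t in p)) or "0"

def anf_of(f, names):
    """ANF of a Boolean function f(*bits) via f = g xor x1*h, with
    g = f(0,...) and h = f(0,...) xor f(1,...), as described above."""
    if not names:
        return {ONE} if f() else set()
    first, rest = names[0], names[1:]
    g = anf_of(lambda *b: f(0, *b), rest)
    h = anf_of(lambda *b: f(0, *b) ^ f(1, *b), rest)
    return xor(g, and_({frozenset(first)}, h))

x, y, z = {frozenset("x")}, {frozenset("y")}, {frozenset("z")}

# The conversion example above: x or (y and not z) -> x xor y xor xy xor yz xor xyz
print(show(or_(x, and_(y, not_(z)))))

# The recursive derivation of logical OR: gives x xor y xor xy
print(show(anf_of(lambda a, b: a | b, ["x", "y"])))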
[ { "math_id": 0, "text": "1" }, { "math_id": 1, "text": "0" }, { "math_id": 2, "text": "\\and" }, { "math_id": 3, "text": "\\oplus" }, { "math_id": 4, "text": " a \\oplus b \\oplus \\left(a \\and b\\right) \\oplus \\left(a \\and b \\and c\\right) " }, { "math_id": 5, "text": " 1 \\oplus a \\oplus b \\oplus \\left(a \\and b\\right) \\oplus \\left(a \\and b \\and c\\right) " }, { "math_id": 6, "text": "a_0, a_1, \\ldots, a_{1,2,\\ldots,n} \\in \\{0,1\\}^*" }, { "math_id": 7, "text": "f" }, { "math_id": 8, "text": "f(x)=0" }, { "math_id": 9, "text": "f(x)=1" }, { "math_id": 10, "text": "f(x)=x" }, { "math_id": 11, "text": "f(x)=1 \\oplus x" }, { "math_id": 12, "text": "f(x_1,x_2,\\ldots,x_n) = g(x_2,\\ldots,x_n) \\oplus x_1 h(x_2,\\ldots,x_n)" }, { "math_id": 13, "text": "g(x_2,\\ldots,x_n) = f(0,x_2,\\ldots,x_n)" }, { "math_id": 14, "text": "h(x_2,\\ldots,x_n) = f(0,x_2,\\ldots,x_n) \\oplus f(1,x_2,\\ldots,x_n)" }, { "math_id": 15, "text": "x_1=0" }, { "math_id": 16, "text": "x_1 h = 0" }, { "math_id": 17, "text": "f(0,\\ldots) = f(0,\\ldots)" }, { "math_id": 18, "text": "x_1=1" }, { "math_id": 19, "text": "x_1 h = h" }, { "math_id": 20, "text": "f(1,\\ldots) = f(0,\\ldots) \\oplus f(0,\\ldots) \\oplus f(1,\\ldots)" }, { "math_id": 21, "text": "g" }, { "math_id": 22, "text": "h" }, { "math_id": 23, "text": "f(x,y)= x \\lor y" }, { "math_id": 24, "text": "f(x,y) = f(0,y) \\oplus x(f(0,y) \\oplus f(1,y))" }, { "math_id": 25, "text": "f(0,y)=0 \\lor y = y" }, { "math_id": 26, "text": "f(1,y)=1 \\lor y = 1" }, { "math_id": 27, "text": "f(x,y) = y \\oplus x (y \\oplus 1)" }, { "math_id": 28, "text": "f(x,y) = y \\oplus x y \\oplus x = x \\oplus y \\oplus x y" } ]
https://en.wikipedia.org/wiki?curid=1048680
1049256
Apollonian gasket
Fractal composed of tangent circles In mathematics, an Apollonian gasket or Apollonian net is a fractal generated by starting with a triple of circles, each tangent to the other two, and successively filling in more circles, each tangent to another three. It is named after Greek mathematician Apollonius of Perga. Construction. The construction of the Apollonian gasket starts with three circles formula_0, formula_1, and formula_2 (black in the figure), that are each tangent to the other two, but that do not have a single point of triple tangency. These circles may be of different sizes to each other, and it is allowed for two to be inside the third, or for all three to be outside each other. As Apollonius discovered, there exist two more circles formula_3 and formula_4 (red) that are tangent to all three of the original circles – these are called "Apollonian circles". These five circles are separated from each other by six curved triangular regions, each bounded by the arcs from three pairwise-tangent circles. The construction continues by adding six more circles, one in each of these six curved triangles, tangent to its three sides. These in turn create 18 more curved triangles, and the construction continues by again filling these with tangent circles, ad infinitum. Continued stage by stage in this way, the construction adds formula_5 new circles at stage formula_6, giving a total of formula_7 circles after formula_6 stages. In the limit, this set of circles is an Apollonian gasket. In it, each pair of tangent circles has an infinite Pappus chain of circles tangent to both circles in the pair. The size of each new circle is determined by Descartes' theorem, which states that, for any four mutually tangent circles, the radii formula_8 of the circles obeys the equation formula_9 This equation may have a solution with a negative radius; this means that one of the circles (the one with negative radius) surrounds the other three. One or two of the initial circles of this construction, or the circles resulting from this construction, can degenerate to a straight line, which can be thought of as a circle with infinite radius. When there are two lines, they must be parallel, and are considered to be tangent at a point at infinity. When the gasket includes two lines on the formula_10-axis and one unit above it, and a circle of unit diameter tangent to both lines centered on the formula_11-axis, then the circles that are tangent to the formula_10-axis are the Ford circles, important in number theory. The Apollonian gasket has a Hausdorff dimension of about 1.3057. Because it has a well-defined fractional dimension, even though it is not precisely self-similar, it can be thought of as a fractal. Symmetries. The Möbius transformations of the plane preserve the shapes and tangencies of circles, and therefore preserve the structure of an Apollonian gasket. Any two triples of mutually tangent circles in an Apollonian gasket may be mapped into each other by a Möbius transformation, and any two Apollonian gaskets may be mapped into each other by a Möbius transformation. In particular, for any two tangent circles in any Apollonian gasket, an inversion in a circle centered at the point of tangency (a special case of a Möbius transformation) will transform these two circles into two parallel lines, and transform the rest of the gasket into the special form of a gasket between two parallel lines. Compositions of these inversions can be used to transform any two points of tangency into each other. 
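Before continuing with the symmetry properties, the Descartes relation quoted in the construction above can be checked numerically. The sketch below is not part of the article; the function name is invented here, and it simply solves the quoted quadratic relation for the fourth curvature (the reciprocal radius), which gives k4 = k1 + k2 + k3 ± 2√(k1k2 + k2k3 + k3k1). A negative result corresponds to the circle that surrounds the other three, and the construction of the gasket proceeds by applying this relation repeatedly in each curvilinear triangle.

import math

def fourth_curvatures(k1, k2, k3):
    """The two curvatures k4 satisfying Descartes' theorem
    (k1 + k2 + k3 + k4)^2 = 2*(k1^2 + k2^2 + k3^2 + k4^2)."""
    root = 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return k1 + k2 + k3 + root, k1 + k2 + k3 - root

# Three mutually tangent unit circles (curvature 1 each): the small inner
# Apollonian circle and the outer enclosing circle (negative curvature).
inner, outer = fourth_curvatures(1, 1, 1)
print(inner, outer)   # about 6.464 and -0.464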
Möbius transformations are also isometries of the hyperbolic plane, so in hyperbolic geometry all Apollonian gaskets are congruent. In a sense, there is therefore only one Apollonian gasket, up to (hyperbolic) isometry. The Apollonian gasket is the limit set of a group of Möbius transformations known as a Kleinian group. For Euclidean symmetry transformations rather than Möbius transformations, in general, the Apollonian gasket will inherit the symmetries of its generating set of three circles. However, some triples of circles can generate Apollonian gaskets with higher symmetry than the initial triple; this happens when the same gasket has a different and more-symmetric set of generating circles. Particularly symmetric cases include the Apollonian gasket between two parallel lines (with infinite dihedral symmetry), the Apollonian gasket generated by three congruent circles in an equilateral triangle (with the symmetry of the triangle), and the Apollonian gasket generated by two circles of radius 1 surrounded by a circle of radius 2 (with two lines of reflective symmetry). Integral Apollonian circle packings. If any four mutually tangent circles in an Apollonian gasket all have integer curvature (the inverse of their radius) then all circles in the gasket will have integer curvature. Since the equation relating curvatures in an Apollonian gasket, integral or not, is formula_12 it follows that one may move from one quadruple of curvatures to another by Vieta jumping, just as when finding a new Markov number. The first few of these integral Apollonian gaskets are listed in the following table. The table lists the curvatures of the largest circles in the gasket. Only the first three curvatures (of the five displayed in the table) are needed to completely describe each gasket – all other curvatures can be derived from these three. &lt;templatestyles src="Col-begin/styles.css"/&gt; Enumerating integral Apollonian circle packings. The curvatures formula_13 are a root quadruple (the smallest in some integral circle packing) if formula_14. They are primitive when formula_15. Defining a new set of variables formula_16 by the matrix equation formula_17 gives a system where formula_13 satisfies the Descartes equation precisely when formula_18. Furthermore, formula_13 is primitive precisely when formula_19, and formula_13 is a root quadruple precisely when formula_20. This relationship can be used to find all the primitive root quadruples with a given negative bend formula_10. It follows from formula_21 and formula_22 that formula_23, and hence that formula_24. Therefore, any root quadruple will satisfy formula_25. By iterating over all the possible values of formula_26, formula_27, and formula_28 one can find all the primitive root quadruples. The following Python code demonstrates this algorithm, producing the primitive root quadruples listed above. 
import math

def get_primitive_bends(n: int):
    if n == 0:
        yield 0, 0, 1, 1
        return
    for m in range(math.ceil(n / math.sqrt(3))):
        s = m**2 + n**2
        for d1 in range(max(2 * m, 1), math.floor(math.sqrt(s)) + 1):
            d2, remainder = divmod(s, d1)
            if remainder == 0 and math.gcd(n, d1, d2) == 1:
                yield -n, d1 + n, d2 + n, d1 + d2 + n - 2 * m

for n in range(15):
    for bends in get_primitive_bends(n):
        print(bends)

The curvatures appearing in a primitive integral Apollonian circle packing must belong to a set of six or eight possible residue classes modulo 24, and numerical evidence supported that any sufficiently large integer from these residue classes would also be present as a curvature within the packing. This conjecture, known as the local-global conjecture, was proved to be false in 2023. Symmetry of integral Apollonian circle packings. There are multiple types of dihedral symmetry that can occur with a gasket depending on the curvature of the circles. No symmetry. If none of the curvatures are repeated within the first five, the gasket contains no symmetry, which is represented by symmetry group "C"1; the gasket described by curvatures (−10, 18, 23, 27) is an example. "D"1 symmetry. Whenever two of the largest five circles in the gasket have the same curvature, that gasket will have "D"1 symmetry, which corresponds to a reflection along a diameter of the bounding circle, with no rotational symmetry. "D"2 symmetry. If two different curvatures are repeated within the first five, the gasket will have D2 symmetry; such a symmetry consists of two reflections (perpendicular to each other) along diameters of the bounding circle, with a two-fold rotational symmetry of 180°. The gasket described by curvatures (−1, 2, 2, 3) is the only Apollonian gasket (up to a scaling factor) to possess D2 symmetry. "D"3 symmetry. There are no integer gaskets with "D"3 symmetry. If the three circles with smallest positive curvature have the same curvature, the gasket will have "D"3 symmetry, which corresponds to three reflections along diameters of the bounding circle (spaced 120° apart), along with three-fold rotational symmetry of 120°. In this case the ratio of the curvature of the bounding circle to the three inner circles is 2√3 − 3. As this ratio is not rational, no integral Apollonian circle packings possess this "D"3 symmetry, although many packings come close. Almost-"D"3 symmetry. The figure at left is an integral Apollonian gasket that appears to have "D"3 symmetry. The same figure is displayed at right, with labels indicating the curvatures of the interior circles, illustrating that the gasket actually possesses only the "D"1 symmetry common to many other integral Apollonian gaskets. The following table lists more of these "almost"-"D"3 integral Apollonian gaskets. The sequence has some interesting properties, and the table lists a factorization of the curvatures, along with the multiplier needed to go from the previous set to the current one. The absolute values of the curvatures of the "a" disks obey the recurrence relation "a"("n") = 4"a"("n" − 1) − "a"("n" − 2) (sequence in the OEIS), from which it follows that the multiplier converges to √3 + 2 ≈ 3.732050807. Sequential curvatures. For any integer "n" > 0, there exists an Apollonian gasket defined by the following curvatures: (−"n", "n" + 1, "n"("n" + 1), "n"("n" + 1) + 1). For example, the gaskets defined by (−2, 3, 6, 7), (−3, 4, 12, 13), (−8, 9, 72, 73), and (−9, 10, 90, 91) all follow this pattern.
Because every interior circle that is defined by "n" + 1 can become the bounding circle (defined by −"n") in another gasket, these gaskets can be nested. This is demonstrated in the figure at right, which contains these sequential gaskets with "n" running from 2 through 20.
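As a further illustrative sketch (not part of the article; the function names are invented here), the Vieta jumping mentioned earlier can be carried out directly: replacing one curvature by the other root of the Descartes quadratic stays within the same integral packing. The snippet below applies it to the (−1, 2, 2, 3) gasket and also checks the sequential family (−"n", "n" + 1, "n"("n" + 1), "n"("n" + 1) + 1) against the Descartes equation.

def vieta_jump(quad, i):
    """Replace the i-th curvature k by 2*(sum of the other three) - k,
    the other root of the Descartes quadratic (Vieta jumping)."""
    others = sum(quad) - quad[i]
    new = list(quad)
    new[i] = 2 * others - quad[i]
    return tuple(new)

def is_descartes(quad):
    a, b, c, d = quad
    return (a + b + c + d) ** 2 == 2 * (a * a + b * b + c * c + d * d)

root = (-1, 2, 2, 3)
assert is_descartes(root)
print(vieta_jump(root, 0))   # (15, 2, 2, 3): the circle in the gap between the 2, 2, 3 circles
print(vieta_jump(root, 1))   # (-1, 6, 2, 3): another circle of the same packing

# The sequential family from the text satisfies the Descartes equation:
for n in range(2, 6):
    assert is_descartes((-n, n + 1, n * (n + 1), n * (n + 1) + 1))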
[ { "math_id": 0, "text": "C_1" }, { "math_id": 1, "text": "C_2" }, { "math_id": 2, "text": "C_3" }, { "math_id": 3, "text": "C_4" }, { "math_id": 4, "text": "C_5" }, { "math_id": 5, "text": "2\\cdot 3^n" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "3^{n+1}+2" }, { "math_id": 8, "text": "r_i" }, { "math_id": 9, "text": "\\left(\\frac1{r_1}+\\frac1{r_2}+\\frac1{r_3}+\\frac1{r_4}\\right)^2=2\\left(\\frac1{r_1^2}+\\frac1{r_2^2}+\\frac1{r_3^2}+\\frac1{r_4^2}\\right)." }, { "math_id": 10, "text": "x" }, { "math_id": 11, "text": "y" }, { "math_id": 12, "text": "a^2 + b^2 + c^2 + d^2 = 2ab + 2 a c + 2 a d + 2 bc+2bd+2cd,\\," }, { "math_id": 13, "text": "(a, b, c, d)" }, { "math_id": 14, "text": "a < 0 \\leq b \\leq c \\leq d" }, { "math_id": 15, "text": "\\gcd(a, b, c, d)=1" }, { "math_id": 16, "text": "(x, d_1, d_2, m)" }, { "math_id": 17, "text": "\n\\begin{bmatrix}\na \\\\ b \\\\ c \\\\ d\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n 1 & 0 & 0 & 0\\\\\n-1 & 1 & 0 & 0\\\\\n-1 & 0 & 1 & 0\\\\\n-1 & 1 & 1 &-2\n\\end{bmatrix}\n\\begin{bmatrix}\nx \\\\ d_1 \\\\ d_2 \\\\ m\n\\end{bmatrix}\n" }, { "math_id": 18, "text": "x^2+m^2=d_1 d_2" }, { "math_id": 19, "text": "\\gcd(x, d_1, d_2)=1" }, { "math_id": 20, "text": "x<0\\leq 2m\\leq d_1\\leq d_2" }, { "math_id": 21, "text": "2m\\leq d_1" }, { "math_id": 22, "text": "2m\\leq d_2" }, { "math_id": 23, "text": "4m^2\\leq d_1d_2" }, { "math_id": 24, "text": "3m^2\\leq d_1d_2-m^2=x^2" }, { "math_id": 25, "text": "0\\leq m \\leq |x|/\\sqrt{3}" }, { "math_id": 26, "text": "m" }, { "math_id": 27, "text": "d_1" }, { "math_id": 28, "text": "d_2" } ]
https://en.wikipedia.org/wiki?curid=1049256
10494269
Bethe–Salpeter equation
Equation for two-body bound states The Bethe–Salpeter equation (BSE, named after Hans Bethe and Edwin Salpeter) is an integral equation whose solution describes the structure of a relativistic two-body bound state in the covariant formalism of quantum field theory (QFT). The equation was first published in 1950 at the end of a paper by Yoichiro Nambu, but without derivation. Due to its common application in several branches of theoretical physics, the Bethe–Salpeter equation appears in many forms. One form often used in high energy physics is formula_0 where "Γ" is the Bethe–Salpeter amplitude (BSA), "K" the Green's function representing the interaction and "S" the dressed propagators of the two constituent particles. In quantum theory, bound states are composite physical systems with a lifetime significantly longer than the time scale of the interaction breaking their structure (otherwise the physical systems under consideration are called resonances), thus allowing ample time for the constituents to interact. By accounting for all possible interactions that can occur between the two constituents, the BSE is a tool to calculate properties of deeply bound states. The BSA, as its solution, encodes the structure of the bound state under consideration. Since it can be derived by identifying bound states with poles in the S-matrix of the 4-point function involving the constituent particles, the equation is related to the quantum-field description of scattering processes using Green's functions. As a general-purpose tool, the BSE finds applications in most quantum field theories. Examples include positronium (the bound state of an electron–positron pair), excitons (bound states of electron–hole pairs), and mesons (quark–antiquark bound states). Even for simple systems such as positronium, the equation cannot be solved exactly in quantum electrodynamics (QED), despite its exact formulation. A reduction of the equation can, however, be achieved without an exact solution. In the case where particle-pair production can be ignored, if one of the two fermion constituents is significantly more massive than the other, the system simplifies to the Dirac equation for the light particle in the external potential of the heavy one. Derivation. The starting point for the derivation of the Bethe–Salpeter equation is the two-particle (or four-point) Dyson equation formula_1 in momentum space, where "G" is the two-particle Green function formula_2, "S" are the free propagators and "K" is an interaction kernel, which contains all possible interactions between the two particles. The crucial step is now to assume that bound states appear as poles in the Green function. One assumes that two particles come together and form a bound state with mass "M", that this bound state propagates freely, and that it then splits into its two constituents again. Therefore, one introduces the Bethe–Salpeter wave function formula_3, which is a transition amplitude of two constituents formula_4 into a bound state formula_5, and then makes an Ansatz for the Green function in the vicinity of the pole as formula_6 where "P" is the total momentum of the system. One sees that, if for this momentum the equation formula_7 holds, which is exactly the Einstein energy-momentum relation (with the four-momentum formula_8 and formula_9), the four-point Green function contains a pole.
If one plugs that Ansatz into the Dyson equation above, and sets the total momentum "P" such that the energy-momentum relation holds, a pole appears on both sides of the equation: formula_10 Comparing the residues yields formula_11 This is already the Bethe–Salpeter equation, written in terms of the Bethe–Salpeter wave functions. To obtain the above form one introduces the Bethe–Salpeter amplitudes "Γ" via formula_12 and finally gets formula_13 which is the form written down above, with the explicit momentum dependence. Rainbow-ladder approximation. In principle the interaction kernel K contains all possible two-particle-irreducible interactions that can occur between the two constituents. In order to carry out practical calculations one has to model it by choosing a subset of the interactions. As in quantum field theories generally, interactions are described via the exchange of particles (e.g. photons in QED, or gluons in quantum chromodynamics). Apart from contact interactions, the simplest interaction is modeled by the exchange of only one of these force-carrying particles with a known propagator. As the Bethe–Salpeter equation sums up the interaction infinitely many times from a perturbative viewpoint, the resulting Feynman graph resembles the form of a ladder (or rainbow), hence the name of this approximation. In QED the ladder approximation caused problems with crossing symmetry and gauge invariance, indicating that crossed-ladder terms have to be included. In quantum chromodynamics (QCD) this approximation is frequently used phenomenologically to calculate hadron masses and their structure in terms of Bethe–Salpeter amplitudes and Faddeev amplitudes; a well-known Ansatz of this type was proposed by Maris and Tandy. Such an Ansatz for the dressed quark-gluon vertex within the rainbow-ladder truncation respects chiral symmetry and its dynamical breaking, and is therefore an important model of the strong nuclear interaction. As an example, the structure of the pion can be obtained by solving the Bethe–Salpeter equation in Euclidean space with the Maris–Tandy Ansatz. Normalization. As for solutions of any homogeneous equation, the solution of the Bethe–Salpeter equation is determined only up to a numerical factor. This factor has to be specified by a certain normalization condition. For the Bethe–Salpeter amplitudes this is usually done by demanding probability conservation (similar to the normalization of the quantum mechanical wave function), which corresponds to the equation formula_14 Normalizations to the charge and to the energy-momentum tensor of the bound state lead to the same equation. In the rainbow-ladder approximation the interaction kernel does not depend on the total momentum of the Bethe–Salpeter amplitude, in which case the second term of the normalization condition vanishes. An alternative normalization, based on the eigenvalue of the corresponding linear operator, was derived by Nakanishi. Solution in Minkowski space. The Bethe–Salpeter equation applies to all kinematic regions of the Bethe–Salpeter amplitude. Consequently, it also determines the amplitude in regions where it is not a continuous function. Such singularities usually occur when the constituent momentum is timelike, a region that is not directly accessible from Euclidean-space solutions of the equation. Instead, methods have been developed to solve this type of integral equation directly in the timelike region.
In the case of scalar bound states formed through scalar-particle exchange in the rainbow-ladder truncation, the Bethe–Salpeter equation in Minkowski space can be solved with the assistance of the Nakanishi integral representation. References. <templatestyles src="Reflist/styles.css" /> Bibliography. Many modern quantum field theory textbooks and a few articles provide pedagogical accounts of the Bethe–Salpeter equation's context and uses. A good introduction is still given by the review article of Nakanishi. External links. Codes in which the Bethe–Salpeter equation is implemented. For a more comprehensive list of first-principles codes see here: List_of_quantum_chemistry_and_solid-state_physics_software
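In practice the homogeneous equation formula_0 is solved numerically by discretizing the momentum integral and treating the result as a linear eigenvalue problem: a bound state corresponds to the point where the largest eigenvalue of the kernel-times-propagators matrix equals one. The sketch below illustrates only this numerical structure and nothing more. It is not from the article, and the "propagators" and "kernel" used here are arbitrary stand-ins in one Euclidean dimension (with made-up masses m and mu and an overall coupling g), not a physical rainbow-ladder model.

import numpy as np

# Toy ingredients (stand-ins, not a physical model): Euclidean "propagators"
# 1/(k^2 + m^2) and a one-boson-exchange-like kernel g/((p - k)^2 + mu^2),
# discretized on a one-dimensional momentum grid.
m, mu = 1.0, 0.5
k = np.linspace(-10.0, 10.0, 400)
dk = k[1] - k[0]

def largest_eigenvalue(g):
    """Largest eigenvalue of the discretized homogeneous equation
    Gamma(p) = g * sum_k K(p, k) S(k)^2 Gamma(k) dk / (2*pi)."""
    K = g / ((k[:, None] - k[None, :]) ** 2 + mu ** 2)
    S2 = 1.0 / (k ** 2 + m ** 2) ** 2
    M = K * S2[None, :] * dk / (2 * np.pi)
    return np.max(np.real(np.linalg.eigvals(M)))

# A "bound state" in this toy appears at the coupling g where the largest
# eigenvalue crosses 1; in a realistic calculation one instead fixes the
# coupling and scans the bound-state mass M entering through P^2 = M^2.
for g in (1.0, 5.0, 10.0):
    print(g, largest_eigenvalue(g))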
[ { "math_id": 0, "text": " \\Gamma(P,p) =\\int\\!\\frac{d^4k}{(2\\pi)^4} \\; K(P,p,k)\\, S(k-\\tfrac{P}{2}) \\,\\Gamma(P,k)\\, S(k+\\tfrac{P}{2}) " }, { "math_id": 1, "text": " G = S_1\\,S_2 + S_1\\,S_2\\, K_{12}\\, G " }, { "math_id": 2, "text": " \\langle\\Omega| \\phi_1 \\,\\phi_2\\, \\phi_3\\, \\phi_4 |\\Omega\\rangle " }, { "math_id": 3, "text": " \\Psi = \\langle\\Omega| \\phi_1 \\,\\phi_2|\\psi\\rangle " }, { "math_id": 4, "text": "\\phi_i" }, { "math_id": 5, "text": "\\psi" }, { "math_id": 6, "text": " G \\approx \\frac{\\Psi\\;\\bar\\Psi}{P^2-M^2}," }, { "math_id": 7, "text": " P^2 = M^2" }, { "math_id": 8, "text": " P_\\mu = \\left(E/c,\\vec p \\right)" }, { "math_id": 9, "text": " P^2 = P_\\mu\\,P^\\mu " }, { "math_id": 10, "text": " \\frac{\\Psi\\;\\bar\\Psi}{P^2-M^2} = S_1\\,S_2 +S_1\\,S_2\\, K_{12}\\frac{\\Psi\\;\\bar\\Psi}{P^2-M^2} " }, { "math_id": 11, "text": " \\Psi=S_1\\,S_2\\, K_{12}\\Psi, \\, " }, { "math_id": 12, "text": " \\Psi = S_1\\,S_2\\,\\Gamma " }, { "math_id": 13, "text": " \\Gamma= K_{12}\\,S_1\\,S_2\\,\\Gamma " }, { "math_id": 14, "text": "2 P_\\mu = \\bar\\Gamma \\left( \\frac{\\partial}{\\partial P_\\mu} \\left( S_1 \\otimes S_2 \\right) - S_1\\,S_2\\, \\left(\\frac{\\partial}{\\partial P_\\mu}\\,K\\right)\\, S_1\\,S_2\\right) \\Gamma " } ]
https://en.wikipedia.org/wiki?curid=10494269
10494334
Post-modern portfolio theory
Simply stated, post-modern portfolio theory (PMPT) is an extension of the traditional modern portfolio theory (MPT) of Markowitz and Sharpe. Both theories provide analytical methods for rational investors to use diversification to optimize their investment portfolios. The essential difference between PMPT and MPT is that PMPT emphasizes the return that "must" be earned on an investment in order to meet future, specified obligations, MPT is concerned only with the absolute return vis-a-vis the risk-free rate. History. The earliest published literature under the PMPT rubric was published by the principals of software developer Investment Technologies, LLC, Brian M. Rom and Kathleen W. Ferguson, in the Winter, 1993 and Fall, 1994 editions of "The Journal of Investing." However, while the software tools resulting from the application of PMPT were innovations for practitioners, many of the ideas and concepts embodied in these applications had long and distinguished provenance in academic and research institutions worldwide. Empirical investigations began in 1981 at the Pension Research Institute (PRI) at San Francisco State University. Dr. Hal Forsey and Dr. Frank Sortino were trying to apply Peter Fishburn's theory published in 1977 to Pension Fund Management. The result was an asset allocation model that PRI licensed Brian Rom to market in 1988. Mr. Rom coined the term PMPT and began using it to market portfolio optimization and performance measurement software developed by his company. These systems were built on the PRI downside- risk algorithms. Sortino and Steven Satchell at Cambridge University co-authored the first book on PMPT. This was intended as a graduate seminar text in portfolio management. A more recent book by Sortino was written for practitioners. The first publication in a major journal was co-authored by Sortino and Dr. Robert van der Meer, then at Shell Oil Netherlands. These concepts were popularized by articles and conference presentations by Sortino, Rom and others, including members of the now-defunct Salomon Bros. Skunk Works. Sortino claims the major contributors to the underlying theory are: Discussion. Harry Markowitz laid the foundations of MPT, the greatest contribution of which is the establishment of a formal risk/return framework for investment decision-making; see Markowitz model. By defining investment risk in quantitative terms, Markowitz gave investors a mathematical approach to asset-selection and portfolio management. But there are important limitations to the original MPT formulation. Two major limitations of MPT are its assumptions that: Stated another way, MPT is limited by measures of risk and return that do not always represent the realities of the investment markets. The assumption of a normal distribution is a major practical limitation, because it is symmetrical. Using the variance (or its square root, the standard deviation) implies that uncertainty about better-than-expected returns is equally averred as uncertainty about returns that are worse than expected. Furthermore, using the normal distribution to model the pattern of investment returns makes investment results with more upside than downside returns appear more risky than they really are. The converse distortion applies to distributions with a predominance of downside returns. The result is that using traditional MPT techniques for measuring investment portfolio construction and evaluation frequently does not accurately model investment reality. 
It has long been recognized that investors typically do not view as risky those returns "above" the minimum they must earn in order to achieve their investment objectives. They believe that risk has to do with the bad outcomes (i.e., returns below a required target), not the good outcomes (i.e., returns in excess of the target) and that losses weigh more heavily than gains. This view has been noted by researchers in finance, economics and psychology, including Sharpe (1964). "Under certain conditions the MVA can be shown to lead to unsatisfactory predictions of (investor) behavior. Markowitz suggests that a model based on the semivariance would be preferable; in light of the formidable computational problems, however, he bases his (MV) analysis on the mean and the standard deviation." Recent advances in portfolio and financial theory, coupled with increased computing power, have also contributed to overcoming these limitations. Applications. In 1987, the Pension Research Institute at San Francisco State University developed the practical mathematical algorithms of PMPT that are in use today. These methods provide a framework that recognizes investors' preferences for upside over downside volatility. At the same time, a more robust model for the pattern of investment returns, the three-parameter lognormal distribution, was introduced. Downside risk. Downside risk (DR) is measured by target semi-deviation (the square root of target semivariance) and is termed downside deviation. It is expressed in percentages and therefore allows for rankings in the same way as standard deviation. An intuitive way to view downside risk is the annualized standard deviation of returns below the target. Another is the square root of the probability-weighted squared below-target returns. The squaring of the below-target returns has the effect of penalizing failures quadratically. This is consistent with observations made on the behavior of individual decision-making under formula_0 where "d" = downside deviation (commonly known in the financial community as 'downside risk'). Note: By extension, "d"² = downside variance. "t" = the annual target return, originally termed the minimum acceptable return, or MAR. "r" = the random variable representing the return for the distribution of annual returns "f"("r"), "f"("r") = the distribution for the annual returns, e.g. the three-parameter lognormal distribution For the reasons provided below, this "continuous" formula is preferred over a simpler "discrete" version that determines the standard deviation of below-target periodic returns taken from the return series. 1. The continuous form permits all subsequent calculations to be made using annual returns which is the natural way for investors to specify their investment goals. The discrete form requires monthly returns for there to be sufficient data points to make a meaningful calculation, which in turn requires converting the annual target into a monthly target. This significantly affects the amount of risk that is identified. For example, a goal of earning 1% in every month of one year results in a greater risk than the seemingly equivalent goal of earning 12% in one year. 2. A second reason for strongly preferring the continuous form to the discrete form has been proposed by Sortino &amp; Forsey (1996): "Before we make an investment, we don't know what the outcome will be... After the investment is made, and we want to measure its performance, all we know is what the outcome was, not what it could have been. 
To cope with this uncertainty, we assume that a reasonable estimate of the range of possible returns, as well as the probabilities associated with estimation of those returns...In statistical terms, the shape of [this] uncertainty is called a probability distribution. In other words, looking at just the discrete monthly or annual values does not tell the whole story." Using the observed points to create a distribution is a staple of conventional performance measurement. For example, monthly returns are used to calculate a fund's mean and standard deviation. Using these values and the properties of the normal distribution, we can make statements such as the likelihood of losing money (even though no negative returns may actually have been observed), or the range within which two-thirds of all returns lies (even though the specific returns identifying this range have not necessarily occurred). Our ability to make these statements comes from the process of assuming the continuous form of the normal distribution and certain of its well-known properties. In PMPT an analogous process is followed: Sortino Ratio. The Sortino ratio, developed in 1993 by Rom's company, Investment Technologies, LLC, was the first new element in the PMPT rubric. It is defined as: formula_1 where "r" = the annualized rate of return, "t" = the target return, "d" = downside risk. The following table shows that this ratio is demonstrably superior to the traditional Sharpe ratio as a means for ranking investment results. The table shows risk-adjusted ratios for several major indexes using both Sortino and Sharpe ratios. The data cover the five years 1992-1996 and are based on monthly total returns. The Sortino ratio is calculated against a 9.0% target. As an example of the different conclusions that can be drawn using these two ratios, notice how the Lehman Aggregate and MSCI EAFE compare - the Lehman ranks higher using the Sharpe ratio whereas EAFE ranks higher using the Sortino ratio. In many cases, manager or index rankings will be different, depending on the risk-adjusted measure used. These patterns will change again for different values of t. For example, when t is close to the risk-free rate, the Sortino Ratio for T-Bill's will be higher than that for the S&amp;P 500, while the Sharpe ratio remains unchanged. In March 2008, researchers at the Queensland Investment Corporation and Queensland University of Technology showed that for skewed return distributions, the Sortino ratio is superior to the Sharpe ratio as a measure of portfolio risk. Volatility skewness. Volatility skewness is the second portfolio-analysis statistic introduced by Rom and Ferguson under the PMPT rubric. It measures the ratio of a distribution's percentage of total variance from returns above the mean, to the percentage of the distribution's total variance from returns below the mean. Thus, if a distribution is symmetrical ( as in the normal case, as is assumed under MPT), it has a volatility skewness of 1.00. Values greater than 1.00 indicate positive skewness; values less than 1.00 indicate negative skewness. While closely correlated with the traditional statistical measure of skewness (viz., the third moment of a distribution), the authors of PMPT argue that their volatility skewness measure has the advantage of being intuitively more understandable to non-statisticians who are the primary practical users of these tools. 
The importance of skewness lies in the fact that the more non-normal (i.e., skewed) a return series is, the more its true risk will be distorted by traditional MPT measures such as the Sharpe ratio. Thus, with the recent advent of hedging and derivative strategies, which are asymmetrical by design, MPT measures are essentially useless, while PMPT is able to capture significantly more of the true information contained in the returns under consideration. Many of the common market indices and the returns of stock and bond mutual funds cannot themselves always be assumed to be accurately represented by the normal distribution. Data: Monthly returns, January, 1991 through December, 1996. Endnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. For a comprehensive survey of the early literature, see R. Libby and P.C. Fishburn [1977].
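The three PMPT statistics discussed above can be illustrated with a short sketch. It is not from the article: the return series and target are made up, the function names are invented here, and the per-period sample formulas are the simple discrete versions, whereas the article argues for integrating over a fitted continuous distribution such as the three-parameter lognormal.

import math

def downside_deviation(returns, target):
    """Discrete downside deviation: root of the mean squared below-target shortfall."""
    shortfalls = [min(0.0, r - target) for r in returns]
    return math.sqrt(sum(s * s for s in shortfalls) / len(returns))

def sortino_ratio(returns, target):
    """(mean return - target) / downside deviation, with the sample mean
    standing in for the annualized rate of return."""
    mean_return = sum(returns) / len(returns)
    return (mean_return - target) / downside_deviation(returns, target)

def volatility_skewness(returns):
    """Ratio of variance contributed by above-mean returns to that contributed
    by below-mean returns; 1.00 for a symmetrical distribution."""
    mean_return = sum(returns) / len(returns)
    above = sum((r - mean_return) ** 2 for r in returns if r > mean_return)
    below = sum((r - mean_return) ** 2 for r in returns if r < mean_return)
    return above / below

# Illustrative annual returns and a 9% target, echoing the table above.
rets = [0.12, -0.04, 0.21, 0.07, 0.15, -0.10, 0.18, 0.05]
print(downside_deviation(rets, 0.09))
print(sortino_ratio(rets, 0.09))
print(volatility_skewness(rets))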
[ { "math_id": 0, "text": "d = \\sqrt{ \\int_{-\\infty}^t (t-r)^2f(r)\\,dr } " }, { "math_id": 1, "text": "\\frac{r - t}{d}" } ]
https://en.wikipedia.org/wiki?curid=10494334