id | title | text | formulas | url
---|---|---|---|---
9754212 | Fraction of variance unexplained | Statistical noise
In statistics, the fraction of variance unexplained (FVU) in the context of a regression task is the fraction of variance of the regressand (dependent variable) "Y" which cannot be explained, i.e., which is not correctly predicted, by the explanatory variables "X".
Formal definition.
Suppose we are given a regression function formula_0 yielding for each formula_1 an estimate formula_2 where formula_3 is the vector of the "i"th observations on all the explanatory variables. We define the fraction of variance unexplained (FVU) as:
formula_4
where "R"2 is the coefficient of determination and "VAR"err and "VAR"tot are the variance of the residuals and the sample variance of the dependent variable. "SS""err" (the sum of squared predictions errors, equivalently the residual sum of squares), "SS""tot" (the total sum of squares), and "SS""reg" (the sum of squares of the regression, equivalently the explained sum of squares) are given by
formula_5
Alternatively, the fraction of variance unexplained can be defined as follows:
formula_6
where MSE("f") is the mean squared error of the regression function "ƒ".
Explanation.
It is useful to consider the second definition to understand FVU. When trying to predict "Y", the most naive regression function that we can think of is the constant function predicting the mean of "Y", i.e., formula_7. It follows that the MSE of this function equals the variance of "Y"; that is, "SS"err = "SS"tot, and "SS"reg = 0. In this case, no variation in "Y" can be accounted for, and the FVU then has its maximum value of 1.
More generally, the FVU will be 1 if the explanatory variables "X" tell us nothing about "Y" in the sense that the predicted values of "Y" do not covary with "Y". But as prediction gets better and the MSE can be reduced, the FVU goes down. In the case of perfect prediction where formula_8 for all "i", the MSE is 0, "SS"err = 0, "SS"reg = "SS"tot, and the FVU is 0.
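As a quick numeric illustration, the following Python sketch computes the FVU both ways and checks that the two definitions agree; the observations and predictions are made-up values, not taken from any source:

```python
import numpy as np

y = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])       # observed values (made up)
y_hat = np.array([2.5, 1.5, 3.5, 1.5, 5.5, 8.0])   # hypothetical predictions

ss_err = np.sum((y - y_hat) ** 2)        # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares
fvu = ss_err / ss_tot                    # first definition: 1 - R^2

mse = np.mean((y - y_hat) ** 2)          # mean squared error of the predictions
assert np.isclose(fvu, mse / np.var(y))  # second definition: MSE(f) / var(Y)
print(fvu)
```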
| [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "y_i"
},
{
"math_id": 2,
"text": "\\widehat{y}_i = f(x_i)"
},
{
"math_id": 3,
"text": "x_i"
},
{
"math_id": 4,
"text": "\\begin{align}\n\\text{FVU} & = {\\text{VAR}_\\text{err} \\over \\text{VAR}_\\text{tot}} = {\\text{SS}_\\text{err}/N \\over \\text{SS}_\\text{tot}/N} = {\\text{SS}_\\text{err} \\over \\text{SS}_\\text{tot}} \\left( = 1-{\\text{SS}_\\text{reg} \\over \\text{SS}_\\text{tot}} , \\text{ only true in some cases such as linear regression}\\right) \\\\[6pt]\n & = 1 - R^2\n\\end{align}"
},
{
"math_id": 5,
"text": "\\begin{align} \n\\text{SS}_\\text{err} & = \\sum_{i=1}^N\\;(y_i - \\widehat{y}_i)^2\\\\\n\\text{SS}_\\text{tot} & = \\sum_{i=1}^N\\;(y_i-\\bar{y})^2 \\\\\n\\text{SS}_\\text{reg} & = \\sum_{i=1}^N\\;(\\widehat{y}_i-\\bar{y})^2 \\text{ and} \\\\\n\\bar{y} & = \\frac 1 N \\sum_{i=1}^N\\;y_i.\n\\end{align}"
},
{
"math_id": 6,
"text": " \\text{FVU} = \\frac{\\operatorname{MSE}(f)}{\\operatorname{var}[Y]}"
},
{
"math_id": 7,
"text": "f(x_i)=\\bar{y}"
},
{
"math_id": 8,
"text": "\\hat{y}_i = y_i"
}
] | https://en.wikipedia.org/wiki?curid=9754212 |
975450 | Argument of periapsis | Specifies the orbit of an object in space
The argument of periapsis (also called argument of perifocus or argument of pericenter), symbolized as "ω (omega)", is one of the orbital elements of an orbiting body. Parametrically, "ω" is the angle from the body's ascending node to its periapsis, measured in the direction of motion.
For specific types of orbits, terms such as argument of perihelion (for heliocentric orbits), argument of perigee (for geocentric orbits), argument of periastron (for orbits around stars), and so on, may be used (see apsis for more information).
An argument of periapsis of 0° means that the orbiting body will be at its closest approach to the central body at the same moment that it crosses the plane of reference from South to North. An argument of periapsis of 90° means that the orbiting body will reach periapsis at its northernmost distance from the plane of reference.
Adding the argument of periapsis to the longitude of the ascending node gives the longitude of the periapsis. However, especially in discussions of binary stars and exoplanets, the terms "longitude of periapsis" or "longitude of periastron" are often used synonymously with "argument of periapsis".
Calculation.
In astrodynamics the argument of periapsis "ω" can be calculated as follows:
formula_0
If "ez" < 0 then "ω" → 2π − "ω".
where:
• n is the vector pointing towards the ascending node (i.e. the "z"-component of n is zero), and
• e is the eccentricity vector (the vector pointing towards the periapsis).
In the case of equatorial orbits (which have no ascending node), the argument is strictly undefined. However, if the convention of setting the longitude of the ascending node Ω to 0 is followed, then the value of "ω" follows from the two-dimensional case:
formula_1
If the orbit is clockwise (i.e. (r × v)"z" < 0) then "ω" → 2π − "ω".
where:
• "e""x" and "e""y" are the "x"- and "y"-components of the eccentricity vector e.
In the case of circular orbits it is often assumed that the periapsis is placed at the ascending node and therefore "ω" = 0. However, in the professional exoplanet community, "ω" = 90° is more often assumed for circular orbits, which has the advantage that the time of a planet's inferior conjunction (which would be the time the planet would transit if the geometry were favorable) is equal to the time of its periastron.
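A minimal Python sketch of the calculation above, using hypothetical state vectors for a geocentric orbit (the position, velocity, and the use of NumPy are assumptions made for illustration):

```python
import numpy as np

mu = 3.986004418e14                      # Earth's gravitational parameter, m^3/s^2
r = np.array([7000e3, -1000e3, 500e3])   # hypothetical position vector, m
v = np.array([1.0e3, 7.3e3, 1.0e3])      # hypothetical velocity vector, m/s

h = np.cross(r, v)                                 # specific angular momentum
n = np.cross([0.0, 0.0, 1.0], h)                   # node vector (towards ascending node)
e = np.cross(v, h) / mu - r / np.linalg.norm(r)    # eccentricity vector

omega = np.arccos(np.dot(n, e) / (np.linalg.norm(n) * np.linalg.norm(e)))
if e[2] < 0:                                       # quadrant correction from the text
    omega = 2 * np.pi - omega
print(np.degrees(omega))
```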
| [
{
"math_id": 0,
"text": "\\omega = \\arccos{{\\mathbf{n} \\cdot \\mathbf{e}} \\over {\\mathbf{\\left| n \\right|} \\mathbf{\\left| e \\right|}}}"
},
{
"math_id": 1,
"text": "\\omega = \\mathrm{atan2}\\left(e_y, e_x\\right)"
}
] | https://en.wikipedia.org/wiki?curid=975450 |
975474 | Longitude of periapsis | In celestial mechanics, the longitude of the periapsis, also called longitude of the pericenter, of an orbiting body is the longitude (measured from the point of the vernal equinox) at which the periapsis (closest approach to the central body) would occur if the body's orbit inclination were zero. It is usually denoted "ϖ".
For the motion of a planet around the Sun, this position is called longitude of perihelion ϖ, which is the sum of the longitude of the ascending node Ω, and the argument of perihelion ω.
The longitude of periapsis is a compound angle, with part of it being measured in the plane of reference and the rest being measured in the plane of the orbit. Likewise, any angle derived from the longitude of periapsis (e.g., mean longitude and true longitude) will also be compound.
Sometimes, the term "longitude of periapsis" is used to refer to "ω", the angle between the ascending node and the periapsis. That usage of the term is especially common in discussions of binary stars and exoplanets. However, the angle ω is less ambiguously known as the argument of periapsis.
Calculation from state vectors.
"ϖ" is the sum of the longitude of ascending node Ω (measured on ecliptic plane) and the argument of periapsis "ω" (measured on orbital plane):
formula_0
which are derived from the orbital state vectors.
Derivation of ecliptic longitude and latitude of perihelion for inclined orbits.
Define the following:
• i, the inclination
• ω, the argument of perihelion
• Ω, the longitude of the ascending node
• ε, the obliquity of the ecliptic (approximately 23.44°)
Then:
• A = cos ω cos Ω − sin ω sin Ω cos i
• B = cos ε (cos ω sin Ω + sin ω cos Ω cos i) − sin ε sin ω sin i
• C = sin ε (cos ω sin Ω + sin ω cos Ω cos i) + cos ε sin ω sin i
The right ascension α and declination δ of the direction of perihelion are:
tan α = B/A
sin δ = C
If A < 0, add 180° to α to obtain the correct quadrant.
The ecliptic longitude ϖ and latitude b of perihelion are:
tan ϖ = (sin α cos ε + tan δ sin ε) / cos α
sin b = sin δ cos ε − cos δ sin ε sin α
If cos(α) < 0, add 180° to ϖ to obtain the correct quadrant.
As an example, using the most up-to-date numbers from Brown (2017) for the hypothetical Planet Nine with i = 30°, ω = 136.92°, and Ω = 94°, then α = 237.38°, δ = +0.41° and ϖ = 235.00°, b = +19.97° (Brown actually provides i, Ω, and ϖ, from which ω was computed).
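The worked example can be reproduced with a short Python sketch of the derivation above (the obliquity value 23.44° is an assumed round figure):

```python
from math import sin, cos, tan, atan, asin, degrees, radians

i, w, W, eps = map(radians, (30.0, 136.92, 94.0, 23.44))  # i, omega, Omega, epsilon

A = cos(w) * cos(W) - sin(w) * sin(W) * cos(i)
B = cos(eps) * (cos(w) * sin(W) + sin(w) * cos(W) * cos(i)) - sin(eps) * sin(w) * sin(i)
C = sin(eps) * (cos(w) * sin(W) + sin(w) * cos(W) * cos(i)) + cos(eps) * sin(w) * sin(i)

alpha = degrees(atan(B / A)) + (180.0 if A < 0 else 0.0)  # right ascension
delta = degrees(asin(C))                                  # declination

a, d = radians(alpha), radians(delta)
varpi = degrees(atan((sin(a) * cos(eps) + tan(d) * sin(eps)) / cos(a)))
varpi += 180.0 if cos(a) < 0 else 0.0                     # ecliptic longitude
b = degrees(asin(sin(d) * cos(eps) - cos(d) * sin(eps) * sin(a)))  # ecliptic latitude
print(alpha, delta, varpi, b)  # approximately 237.4, 0.4, 235.0, 20.0
```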
| [
{
"math_id": 0,
"text": "\\varpi = \\Omega + \\omega"
}
] | https://en.wikipedia.org/wiki?curid=975474 |
9755509 | Mosco convergence | In mathematical analysis, Mosco convergence is a notion of convergence for functionals that is used in nonlinear analysis and set-valued analysis. It is a particular case of Γ-convergence. Mosco convergence is sometimes phrased as “weak Γ-liminf and strong Γ-limsup” convergence since it uses both the weak and strong topologies on a topological vector space "X". In finite dimensional spaces, Mosco convergence coincides with epi-convergence, while in infinite-dimensional ones, Mosco convergence is strictly stronger property.
"Mosco convergence" is named after Italian mathematician Umberto Mosco.
Definition.
Let "X" be a topological vector space and let "X"∗ denote the dual space of continuous linear functionals on "X". Let "F""n" : "X" → [0, +∞] be functionals on "X" for each "n" = 1, 2, ... The sequence (or, more generally, net) ("F""n") is said to Mosco converge to another functional "F" : "X" → [0, +∞] if the following two conditions hold:
• (lower-bound inequality) for every sequence ("x""n") converging weakly to "x" in "X",
formula_0
• (upper-bound inequality) for every "x" in "X", there exists a sequence ("x""n") converging strongly to "x" such that
formula_1
Since lower and upper bound inequalities of this type are used in the definition of Γ-convergence, Mosco convergence is sometimes phrased as “weak Γ-liminf and strong Γ-limsup” convergence. Mosco convergence is sometimes abbreviated to M-convergence and denoted by
formula_2 | [
{
"math_id": 0,
"text": "\\liminf_{n \\to \\infty} F_{n} (x_{n}) \\geq F(x);"
},
{
"math_id": 1,
"text": "\\limsup_{n \\to \\infty} F_{n} (x_{n}) \\leq F(x)."
},
{
"math_id": 2,
"text": "\\mathop{\\text{M-lim}}_{n \\to \\infty} F_{n} = F \\text{ or } F_{n} \\xrightarrow[n \\to \\infty]{\\mathrm{M}} F."
}
] | https://en.wikipedia.org/wiki?curid=9755509 |
9755564 | Congruence lattice problem | In mathematics, the congruence lattice problem asks whether every algebraic distributive lattice is isomorphic to the congruence lattice of some other lattice. The problem was posed by Robert P. Dilworth, and for many years it was one of the most famous and long-standing open problems in lattice theory; it had a deep impact on the development of lattice theory itself. The conjecture that every distributive lattice is a congruence lattice is true for all distributive lattices with at most ℵ1 compact elements, but F. Wehrung provided a counterexample for distributive lattices with ℵ2 compact elements using a construction based on Kuratowski's free set theorem.
Preliminaries.
We denote by Con "A" the congruence lattice of an algebra "A", that is, the lattice of all congruences of "A" under inclusion.
The following is a universal-algebraic triviality. It says that for a congruence, being finitely generated is a lattice-theoretical property.
Lemma.
A congruence of an algebra "A" is finitely generated if and only if it is a compact element of Con "A".
As every congruence of an algebra is the join of the finitely generated congruences below it (e.g., every submodule of a module is the union of all its finitely generated submodules), we obtain the following result, first published by Birkhoff and Frink in 1948.
Theorem (Birkhoff and Frink 1948).
The congruence lattice Con "A" of any algebra "A" is an algebraic lattice.
While congruences of lattices lose something in comparison to groups, modules, and rings (they cannot be identified with subsets of the universe), they also have a property unique among all the other structures encountered so far.
Theorem (Funayama and Nakayama 1942).
The congruence lattice of any lattice is distributive.
This says that α ∧ (β ∨ γ) = (α ∧ β) ∨ (α ∧ γ), for any congruences α, β, and γ of a given lattice. The analogue of this result fails, for instance, for modules, since, as a rule, formula_0 for submodules "A", "B", "C" of a given module.
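The Funayama–Nakayama theorem can be checked by brute force on a small example. The following Python sketch (illustrative only, not from the literature) enumerates all congruences of the five-element pentagon lattice N5 and verifies that they form a distributive lattice:

```python
from itertools import product

# Pentagon lattice N5: elements 0 (bottom) < 1 < 2 < 4 (top) and 0 < 3 < 4,
# with 3 incomparable to 1 and 2.
LEQ = {(a, b) for a in range(5) for b in range(5)
       if a == b or a == 0 or b == 4 or (a, b) == (1, 2)}

def join(a, b):
    ubs = [c for c in range(5) if (a, c) in LEQ and (b, c) in LEQ]
    return next(c for c in ubs if all((c, d) in LEQ for d in ubs))

def meet(a, b):
    lbs = [c for c in range(5) if (c, a) in LEQ and (c, b) in LEQ]
    return next(c for c in lbs if all((d, c) in LEQ for d in lbs))

def partitions(n):                      # all partitions of {0, ..., n-1}
    def rec(i, labels):
        if i == n:
            yield labels
            return
        for lab in range(max(labels, default=-1) + 2):
            yield from rec(i + 1, labels + [lab])
    yield from rec(0, [])

def relation(labels):                   # partition -> set of equivalent pairs
    return frozenset((a, b) for a, b in product(range(5), repeat=2)
                     if labels[a] == labels[b])

def is_congruence(rel):                 # compatible with both join and meet
    return all((join(a, c), join(b, d)) in rel and (meet(a, c), meet(b, d)) in rel
               for (a, b) in rel for (c, d) in rel)

cons = [relation(p) for p in partitions(5) if is_congruence(relation(p))]

cmeet = lambda s, t: s & t              # intersection of two congruences
cjoin = lambda s, t: min((u for u in cons if s | t <= u), key=len)

assert all(cmeet(x, cjoin(y, z)) == cjoin(cmeet(x, y), cmeet(x, z))
           for x, y, z in product(cons, repeat=3))
print(len(cons), "congruences; Con(N5) is distributive")
```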
Soon after this result, Dilworth proved the following result. He did not publish the result but it appears as an exercise credited to him in Birkhoff 1948. The first published proof is in Grätzer and Schmidt 1962.
Theorem (Dilworth ≈1940, Grätzer and Schmidt 1962).
Every finite distributive lattice is isomorphic to the congruence lattice of some finite lattice.
It is important to observe that the solution lattice found in Grätzer and Schmidt's proof is "sectionally complemented", that is, it has a least element (true for any finite lattice) and for all elements "a" ≤ "b" there exists an element "x" with "a" ∨ "x" = "b" and "a" ∧ "x" = "0". It is also in that paper that CLP is first stated in published form, although it seems that the earliest attempts at CLP were made by Dilworth himself. Congruence lattices of finite lattices have been given an enormous amount of attention, for which a reference is Grätzer's 2005 monograph.
The congruence lattice problem (CLP):
Is every distributive algebraic lattice isomorphic to the congruence lattice of some lattice?
The problem CLP has been one of the most intriguing and longest-standing open problems of lattice theory. Some related results of universal algebra are the following.
Theorem (Grätzer and Schmidt 1963).
Every algebraic lattice is isomorphic to the congruence lattice of some algebra.
The lattice Sub "V" of all subspaces of a vector space "V" is certainly an algebraic lattice. As the next result shows, these algebraic lattices are difficult to represent.
Theorem (Freese, Lampe, and Taylor 1979).
Let "V" be an infinite-dimensional vector space over an uncountable field "F". Then Con "A" isomorphic to Sub "V" implies that "A" has at least card "F" operations, for any algebra "A".
As "V" is infinite-dimensional, the largest element ("unit") of Sub "V" is not compact. However innocuous it sounds, the compact unit assumption is essential in the statement of the result above, as demonstrated by the following result.
Theorem (Lampe 1982).
Every algebraic lattice with compact unit is isomorphic to the congruence lattice of some groupoid.
Semilattice formulation of CLP.
The congruence lattice Con "A" of an algebra "A" is an algebraic lattice. The (∨,0)-semilattice of compact elements of Con "A" is denoted by Conc "A", and it is sometimes called the "congruence semilattice" of "A". Then Con "A" is isomorphic to the ideal lattice of Conc "A". By using the classical equivalence between the category of all (∨,0)-semilattices and the category of all algebraic lattices (with suitable definitions of morphisms), as it is outlined here, we obtain the following semilattice-theoretical formulation of CLP.
Semilattice-theoretical formulation of CLP:
Is every distributive (∨,0)-semilattice isomorphic to the congruence semilattice of some lattice?
Say that a distributive (∨,0)-semilattice is "representable", if it is isomorphic to Conc "L", for some lattice "L". So CLP asks whether every distributive (∨,0)-semilattice is representable.
Many investigations around this problem involve "diagrams" of semilattices or of algebras. A most useful folklore result about these is the following.
Theorem.
The functor Conc, defined on all algebras of a given signature, to all (∨,0)-semilattices, preserves direct limits.
Schmidt's approach via distributive join-homomorphisms.
We say that a (∨,0)-semilattice satisfies "Schmidt's Condition", if it is isomorphic to the quotient of a generalized Boolean semilattice "B" under some distributive join-congruence of "B". One of the deepest results about representability of (∨,0)-semilattices is the following.
Theorem (Schmidt 1968).
Any (∨,0)-semilattice satisfying Schmidt's Condition is representable.
This raised the following problem, stated in the same paper.
Problem 1 (Schmidt 1968).
Does any (∨,0)-semilattice satisfy Schmidt's Condition?
Partial positive answers are the following.
Theorem (Schmidt 1981).
Every distributive "lattice" with zero satisfies Schmidt's Condition; thus it is representable.
This result has been improved further as follows, "via" a very long and technical proof, using forcing and Boolean-valued models.
Theorem (Wehrung 2003).
Every direct limit of a countable sequence of distributive "lattices" with zero and (∨,0)-homomorphisms is representable.
Other important representability results are related to the cardinality of the semilattice. The following result was prepared for publication by Dobbertin after Huhn's passing away in 1985. The two corresponding papers were published in 1989.
Theorem (Huhn 1985). Every distributive (∨,0)-semilattice of cardinality at most ℵ1 satisfies Schmidt's Condition. Thus it is representable.
By using different methods, Dobbertin got the following result.
Theorem (Dobbertin 1986).
Every distributive (∨,0)-semilattice in which every principal ideal is at most countable is representable.
Problem 2 (Dobbertin 1983). Is every conical refinement monoid measurable?
Pudlák's approach; lifting diagrams of (∨,0)-semilattices.
The approach of CLP suggested by Pudlák in his 1985 paper is different. It is based on the following result, Fact 4, p. 100 in Pudlák's 1985 paper, obtained earlier by Yuri L. Ershov as the main theorem in Section 3 of the Introduction of his 1977 monograph.
Theorem (Ershov 1977, Pudlák 1985).
Every distributive (∨,0)-semilattice is the directed union of its finite distributive (∨,0)-subsemilattices.
This means that every finite subset in a distributive (∨,0)-semilattice "S" is contained in some finite "distributive" (∨,0)-subsemilattice of "S". Now we are trying to represent a given distributive (∨,0)-semilattice "S" as Conc "L", for some lattice "L". Writing "S" as a directed union formula_1 of finite distributive (∨,0)-subsemilattices, we are "hoping" to represent each "Si" as the congruence lattice of a lattice "Li" with lattice homomorphisms "fij : Li→ Lj", for "i ≤ j" in "I", such that the diagram formula_2 of all "Si" with all inclusion maps "Si→Sj", for "i ≤ j" in "I", is naturally equivalent to formula_3; in that case, we say that the diagram formula_4 lifts formula_2 (with respect to the Conc functor). If this can be done, then, since the Conc functor preserves direct limits, the direct limit formula_5 satisfies formula_6.
While the problem whether this could be done in general remained open for about 20 years, Pudlák could prove it for distributive "lattices" with zero, thus extending one of Schmidt's results by providing a "functorial" solution.
Theorem (Pudlák 1985).
There exists a direct limits preserving functor Φ, from the category of all distributive lattices with zero and 0-lattice embeddings to the category of all lattices with zero and 0-lattice embeddings, such that ConcΦ is naturally equivalent to the identity. Furthermore, Φ("S") is a finite atomistic lattice, for any finite distributive (∨,0)-semilattice "S".
This result is improved further, by an even more complex construction, to "locally finite, sectionally complemented modular lattices" by Růžička in 2004 and 2006.
Pudlák asked in 1985 whether his result above could be extended to the whole category of distributive (∨,0)-semilattices with (∨,0)-embeddings. The problem remained open until it was recently solved in the negative by Tůma and Wehrung.
Theorem (Tůma and Wehrung 2006).
There exists a diagram "D" of finite Boolean (∨,0)-semilattices and (∨,0,1)-embeddings, indexed by a finite partially ordered set, that cannot be lifted, with respect to the Conc functor, by any diagram of lattices and lattice homomorphisms.
In particular, this implies immediately that CLP has no "functorial" solution.
Furthermore, it follows from deep 1998 results of universal algebra by Kearnes and Szendrei in so-called "commutator theory of varieties" that the result above can be extended from the variety of all lattices to any variety formula_7 such that all Con "A", for formula_8, satisfy a fixed nontrivial identity in the signature (∨,∧) (in short, "with a nontrivial congruence identity").
We should also mention that many attempts at CLP were based on the following result, first proved by Bulman-Fleming and McDowell in 1978 by using a categorical 1974 result of Shannon; see also Goodearl and Wehrung in 2001 for a direct argument.
Theorem (Bulman-Fleming and McDowell 1978).
Every distributive (∨,0)-semilattice is a direct limit of finite Boolean (∨,0)-semilattices and (∨,0)-homomorphisms.
It should be observed that while the transition homomorphisms used in the Ershov-Pudlák Theorem are (∨,0)-embeddings, the transition homomorphisms used in the result above are not necessarily one-to-one, for example when one tries to represent the three-element chain. Practically this does not cause much trouble, and makes it possible to prove the following results.
Theorem.
Every distributive (∨,0)-semilattice of cardinality at most ℵ1 is isomorphic to
(1) Conc "L", for some locally finite, relatively complemented modular lattice "L" (Tůma 1998 and Grätzer, Lakser, and Wehrung 2000).
(2) The semilattice of finitely generated two-sided ideals of some (not necessarily unital) von Neumann regular ring (Wehrung 2000).
(3) Conc "L", for some sectionally complemented modular lattice "L" (Wehrung 2000).
(4) The semilattice of finitely generated normal subgroups of some locally finite group (Růžička, Tůma, and Wehrung 2007).
(5) The submodule lattice of some right module over a (non-commutative) ring (Růžička, Tůma, and Wehrung 2007).
Congruence lattices of lattices and nonstable K-theory of von Neumann regular rings.
We recall that for a (unital, associative) ring "R", we denote by "V(R)" the (conical, commutative) monoid of isomorphism classes of finitely generated projective right "R"-modules, see here for more details. Recall that if "R" is von Neumann regular, then "V(R)" is a refinement monoid. Denote by Idc "R" the (∨,0)-semilattice of finitely generated two-sided ideals of "R". We denote by "L(R)" the lattice of all principal right ideals of a von Neumann regular ring "R". It is well known that "L(R)" is a complemented modular lattice.
The following result was observed by Wehrung, building on earlier works mainly by Jónsson and Goodearl.
Theorem (Wehrung 1999).
Let "R" be a von Neumann regular ring. Then the (∨,0)-semilattices Idc "R" and Conc "L(R)" are both isomorphic to the maximal semilattice quotient of "V(R)".
Bergman proves in a well-known unpublished note from 1986 that any at most countable distributive (∨,0)-semilattice is isomorphic to Idc "R", for some locally matricial ring "R" (over any given field). This result is extended to semilattices of cardinality at most ℵ1 in 2000 by Wehrung, by keeping only the regularity of "R" (the ring constructed by the proof is not locally matricial). The question whether "R" could be taken locally matricial in the ℵ1 case remained open for a while, until Wehrung answered it in the negative in 2004. Translating back to the lattice world by using the theorem above and using a lattice-theoretical analogue of the "V(R)" construction, called the "dimension monoid", introduced by Wehrung in 1998, yields the following result.
Theorem (Wehrung 2004).
There exists a distributive (∨,0,1)-semilattice of cardinality ℵ1 that is not isomorphic to Conc "L", for any modular lattice "L" every finitely generated sublattice of which has finite length.
Problem 3 (Goodearl 1991). Is the positive cone of any dimension group with order-unit isomorphic to "V(R)", for some von Neumann regular ring "R"?
A first application of Kuratowski's free set theorem.
The abovementioned Problem 1 (Schmidt), Problem 2 (Dobbertin), and Problem 3 (Goodearl) were solved simultaneously in the negative in 1998.
Theorem (Wehrung 1998).
There exists a dimension vector space "G" over the rationals with order-unit whose positive cone "G"+ is not isomorphic to "V(R)", for any von Neumann regular ring "R", and is not measurable in Dobbertin's sense. Furthermore, the maximal semilattice quotient of "G"+ does not satisfy Schmidt's Condition. Furthermore, "G" can be taken of any given cardinality greater than or equal to ℵ2.
It follows from the previously mentioned works of Schmidt, Huhn, Dobbertin, Goodearl, and Handelman that the ℵ2 bound is optimal in all three negative results above.
As the ℵ2 bound suggests, infinite combinatorics are involved. The principle used is Kuratowski's free set theorem, first published in 1951. Only the case "n=2" is used here.
The semilattice part of the result above is achieved "via" an infinitary semilattice-theoretical statement URP ("Uniform Refinement Property"). If we want to disprove Schmidt's problem, the idea is (1) to prove that any generalized Boolean semilattice satisfies URP (which is easy), (2) that URP is preserved under homomorphic image under a weakly distributive homomorphism (which is also easy), and (3) that there exists a distributive (∨,0)-semilattice of cardinality ℵ2 that does not satisfy URP (which is difficult, and uses Kuratowski's free set theorem).
Schematically, the construction in the theorem above can be described as follows. For a set Ω, we consider the partially ordered vector space "E(Ω)" defined by generators 1 and "ai,x", for "i<2" and "x" in Ω, and relations "a0,x+a1,x=1", "a0,x ≥ 0", and "a1,x ≥ 0", for any "x" in Ω. By using a Skolemization of the theory of dimension groups, we can embed "E(Ω)" functorially into a dimension vector space "F(Ω)". The vector space counterexample of the theorem above is "G=F(Ω)", for any set Ω with at least ℵ2 elements.
This counterexample has been modified subsequently by Ploščica and Tůma to a direct semilattice construction. For a (∨,0)-semilattice, the larger semilattice "R(S)" is the (∨,0)-semilattice freely generated by new elements "t(a,b,c)", for "a, b, c" in "S" such that "c ≤ a ∨ b", subject only to the relations "c=t(a,b,c) ∨ t(b,a,c)" and "t(a,b,c) ≤ a". Iterating this construction gives the "free distributive extension" formula_9 of "S". Now, for a set Ω, let "L(Ω)" be the (∨,0)-semilattice defined by generators 1 and "ai,x", for "i<2" and "x" in Ω, and relations "a0,x ∨ a1,x=1", for any "x" in Ω. Finally, put "G(Ω)=D(L(Ω))".
In most related works, the following "uniform refinement property" is used. It is a modification of the one introduced by Wehrung in 1998 and 1999.
Definition (Ploščica, Tůma, and Wehrung 1998).
Let "e" be an element in a (∨,0)-semilattice "S". We say that the "weak uniform refinement property" WURP holds at "e", if for all families formula_10 and formula_11 of elements in "S" such that "ai ∨ bi=e" for all "i" in "I", there exists a family formula_12 of elements of "S" such that the relations
• "ci,j ≤ ai,bj",
• "ci,j ∨ aj ∨ bi=e",
• "ci,k ≤ ci,j∨ cj,k"
hold for all "i, j, k" in "I". We say that "S" satisfies WURP, if WURP holds at every element of "S".
By building on Wehrung's abovementioned work on dimension vector spaces, Ploščica and Tůma proved that WURP does not hold in "G(Ω)", for any set Ω of cardinality at least ℵ2. Hence "G(Ω)" does not satisfy Schmidt's Condition. All negative representation results mentioned here always make use of some "uniform refinement property", including the first one about dimension vector spaces.
However, the semilattices used in these negative results are relatively complicated. The following result, proved by Ploščica, Tůma, and Wehrung in 1998, is more striking, because it shows examples of "representable" semilattices that do not satisfy Schmidt's Condition. We denote by FV(Ω) the free lattice on Ω in V, for any variety V of lattices.
Theorem (Ploščica, Tůma, and Wehrung 1998).
The semilattice Conc FV(Ω) does not satisfy WURP, for any set Ω of cardinality at least ℵ2 and any non-distributive variety V of lattices. Consequently, Conc FV(Ω) does not satisfy Schmidt's Condition.
It is proved by Tůma and Wehrung in 2001 that Conc FV(Ω) is not isomorphic to Conc "L", for any lattice "L" with permutable congruences. By using a slight weakening of WURP, this result is extended to arbitrary algebras with permutable congruences by Růžička, Tůma, and Wehrung in 2007. Hence, for example, if Ω has at least ℵ2 elements, then Conc FV(Ω) is not isomorphic to the normal subgroup lattice of any group, or the submodule lattice of any module.
Solving CLP: the Erosion Lemma.
The following recent theorem solves CLP.
Theorem (Wehrung 2007).
The semilattice "G(Ω)" is not isomorphic to Conc "L" for any lattice "L", whenever the set Ω has at least ℵω+1 elements.
Hence, the counterexample to CLP had been known for nearly ten years; it is just that nobody knew why it worked! All the results prior to the theorem above made use of some form of permutability of congruences. The difficulty was to find enough structure in congruence lattices of non-congruence-permutable lattices.
We shall denote by ε the "parity function" on the natural numbers; that is, ε("n") = "n" mod 2, for any natural number "n".
We let "L" be an algebra possessing a structure of semilattice ("L",∨) such that every congruence of "L" is also a congruence for the operation ∨ . We put
formula_13
and we denote by Conc"U" "L" the (∨,0)-subsemilattice of Conc "L" generated by all principal congruences Θ("u","v") ( = least congruence of "L" that identifies "u" and "v"), where ("u","v") belongs to "U" × "U". We put Θ+("u","v")=Θ("u ∨ v","v"), for all "u, v" in "L".
The Erosion Lemma (Wehrung 2007).
Let "x"0, "x"1 in "L" and let formula_14, for a positive integer "n", be a finite subset of "L" with formula_15. Put
formula_16
Then there are congruences formula_17, for "j<2", such that
formula_18
The proof of the theorem above runs by setting a "structure" theorem for congruence lattices of semilattices—namely, the Erosion Lemma, against "non-structure" theorems for free distributive extensions "G(Ω)", the main one being called the "Evaporation Lemma". While the latter are technically difficult, they are, in some sense, predictable. Quite to the contrary, the proof of the Erosion Lemma is elementary and easy, so it is probably the strangeness of its statement that explains that it has been hidden for so long.
More is, in fact, proved in the theorem above: "For any algebra L with a congruence-compatible structure of join-semilattice with unit and for any set Ω with at least ℵω+1 elements, there is no weakly distributive homomorphism μ: Conc L → G(Ω) containing 1 in its range". In particular, CLP was, after all, not a problem of lattice theory, but rather of universal algebra—even more specifically, "semilattice theory"! These results can also be translated in terms of a "uniform refinement property", denoted by CLR in Wehrung's paper presenting the solution of CLP, which is noticeably more complicated than WURP.
Finally, the cardinality bound ℵω+1 has been improved to the optimal bound ℵ2 by Růžička.
Theorem (Růžička 2008).
The semilattice "G(Ω)" is not isomorphic to Conc "L" for any lattice "L", whenever the set Ω has at least ℵ2 elements.
Růžička's proof follows the main lines of Wehrung's proof, except that it introduces an enhancement of Kuratowski's Free Set Theorem, called there "existence of free trees", which it uses in the final argument involving the Erosion Lemma.
A positive representation result for distributive semilattices.
The proof of the negative solution for CLP shows that the problem of representing distributive semilattices by compact congruences of lattices already appears for congruence lattices of "semilattices". The question whether the structure of a partially ordered set would cause similar problems is answered by the following result.
Theorem (Wehrung 2008). For any distributive (∨,0)-semilattice "S", there are a (∧,0)-semilattice "P" and a map μ : "P" × "P" → "S" such that the following conditions hold:
(1) "x" ≤ "y" implies that μ("x","y")=0, for all "x", "y" in "P".
(2) μ("x","z") ≤ μ("x","y") ∨ μ("y","z"), for all "x", "y", "z" in "P".
(3) For all "x" ≥ "y" in "P" and all α, β in "S" such that μ("x","y") ≤ α ∨ β, there are a positive integer "n" and elements "x"="z"0 ≥ "z"1 ≥ ... ≥ "z"2"n"="y" such that μ("z"i,"z"i+1) ≤ α (resp., μ("z"i,"z"i+1) ≤ β) whenever "i" < 2"n" is even (resp., odd).
(4) "S" is generated, as a join-semilattice, by all the elements of the form μ("x",0), for "x" in "P".
Furthermore, if "S" has a largest element, then "P" can be assumed to be a lattice with a largest element.
It is not hard to verify that conditions (1)–(4) above imply the distributivity of "S", so the result above gives a "characterization" of distributivity for (∨,0)-semilattices.
| [
{
"math_id": 0,
"text": "A\\cap(B+C)\\neq(A\\cap B)+(A\\cap C)"
},
{
"math_id": 1,
"text": "S=\\bigcup(S_i\\mid i\\in I)"
},
{
"math_id": 2,
"text": "\\mathcal{S}"
},
{
"math_id": 3,
"text": "(\\mathrm{Con_c}\\,L_i,\\mathrm{Con_c}\\,f_i^j\\mid i\\leq j\\text{ in }I)"
},
{
"math_id": 4,
"text": "(L_i,f_i^j\\mid i\\leq j\\text{ in }I)"
},
{
"math_id": 5,
"text": "L=\\varinjlim_{i\\in I}L_i"
},
{
"math_id": 6,
"text": "{\\rm Con_c}\\,L\\cong S"
},
{
"math_id": 7,
"text": "\\mathcal{V}"
},
{
"math_id": 8,
"text": "A\\in\\mathcal{V}"
},
{
"math_id": 9,
"text": "D(S)=\\bigcup(R^n(S)\\mid n<\\omega)"
},
{
"math_id": 10,
"text": "(a_i)_{i\\in I}"
},
{
"math_id": 11,
"text": "(b_i)_{i\\in I}"
},
{
"math_id": 12,
"text": "(c_{i,j}\\mid (i,j)\\in I\\times I)"
},
{
"math_id": 13,
"text": " U\\vee V=\\{u\\vee v \\mid (u,v)\\in U\\times V\\},\\quad\n \\text{for all }U,V\\subseteq L,\n"
},
{
"math_id": 14,
"text": "Z=\\{z_0,z_1,\\dots,z_n\\}"
},
{
"math_id": 15,
"text": "\\bigvee_{i<n}z_i\\leq z_n"
},
{
"math_id": 16,
"text": "\\alpha_j=\\bigvee(\\Theta_L(z_i,z_{i+1})\\mid i<n,\\ \\varepsilon(i)=j),\\text{ for all }j<2."
},
{
"math_id": 17,
"text": "\\theta_j\\in\\mathrm{Con_c}^{\\{x_j\\}\\vee Z}L"
},
{
"math_id": 18,
"text": " z_0\\vee x_0\\vee x_1\\equiv z_n\\vee x_0\\vee x_1\n \\pmod{\\theta_0\\vee\\theta_1}\\quad\\text{and}\\quad\n \\theta_j\\subseteq\\alpha_j\\cap\\Theta_L^+(z_n,x_j),\\text{ for all }j<2.\n "
}
] | https://en.wikipedia.org/wiki?curid=9755564 |
975599 | 311 (number) | Natural number
311 (three hundred [and] eleven) is the natural number following 310 and preceding 312.
311 is the 64th prime; a twin prime with 313; an irregular prime; an emirp, an Eisenstein prime with no imaginary part and real part of the form formula_0; a Gaussian prime with no imaginary part and real part of the form formula_1; and a permutable prime with 113 and 131.
It can be expressed as a sum of consecutive primes in four different ways: as a sum of three consecutive primes (101 + 103 + 107), as a sum of five consecutive primes (53 + 59 + 61 + 67 + 71), as a sum of seven consecutive primes (31 + 37 + 41 + 43 + 47 + 53 + 59), and as a sum of eleven consecutive primes (11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47).
311 is a strictly non-palindromic number, as it is not palindromic in any base between base 2 and base 309.
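Most of these properties are straightforward to verify by computer. The following Python sketch (assuming the sympy library is available) checks the prime-related claims, the consecutive-prime sums, and the strictly non-palindromic property:

```python
from sympy import isprime, prime, primerange

assert prime(64) == 311 and isprime(313)                    # 64th prime; twin with 313
assert all(isprime(int(p)) for p in ("113", "131", "311"))  # permutable prime

primes = list(primerange(2, 311))
def consec_sums(k):  # all sums of k consecutive primes below 311
    return [sum(primes[i:i + k]) for i in range(len(primes) - k + 1)]
assert all(311 in consec_sums(k) for k in (3, 5, 7, 11))

def digits(n, b):    # base-b digits, least significant first
    out = []
    while n:
        out.append(n % b)
        n //= b
    return out
# Strictly non-palindromic: not a palindrome in any base from 2 to 309.
assert all(digits(311, b) != digits(311, b)[::-1] for b in range(2, 310))
print("all checks passed")
```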
311 is the smallest positive integer "d" such that the imaginary quadratic field Q(√–"d") has class number 19.
| [
{
"math_id": 0,
"text": "3n - 1"
},
{
"math_id": 1,
"text": "4n - 1"
}
] | https://en.wikipedia.org/wiki?curid=975599 |
975686 | Dike (geology) | A sheet of rock that is formed in a fracture of a pre-existing rock body
In geology, a dike or dyke is a sheet of rock that is formed in a fracture of a pre-existing rock body. Dikes can be either magmatic or sedimentary in origin. Magmatic dikes form when magma flows into a crack then solidifies as a sheet intrusion, either cutting across layers of rock or through a contiguous mass of rock. Clastic dikes are formed when sediment fills a pre-existing crack.
Magmatic dikes.
A magmatic dike is a sheet of igneous rock that cuts across older rock beds. It is formed when magma fills a fracture in the older beds and then cools and solidifies. The dike rock is usually more resistant to weathering than the surrounding rock, so that erosion exposes the dike as a natural wall or ridge. It is from these natural walls that dikes get their name.
Dikes preserve a record of the fissures through which most mafic magma (fluid magma low in silica) reaches the surface. They are studied by geologists for the clues they provide on volcanic plumbing systems. They also record ancient episodes of extension of the Earth's crust, since large numbers of dikes ("dike swarms") are formed when the crust is pulled apart by tectonic forces. The dikes show the direction of extension, since they form at right angles to the direction of maximum extension.
Description.
The thickness of a dike is much smaller than its other two dimensions, and the opposite walls are roughly parallel, so that a dike is more or less constant in thickness. The thickness of different dikes can range from a few millimeters to hundreds of meters, but is most typically from about a meter to a few tens of meters. The lateral extent can be tens of kilometers, and dikes with a thickness of a few tens of meters or more commonly extend for over 100 km. Most dikes are steeply dipping; in other words, they are oriented nearly vertically. Subsequent tectonic deformation may rotate the sequence of strata through which the dike propagated so that the dike becomes horizontal.
It is common for a set of dikes, each a few kilometers long, to form "en echelon". This pattern is seen in the Higganum dike set of New England. This dike set consists of individual dikes that are typically four kilometers in length at the surface and up to 60 meters wide. These short segments form longer groups extending for around 10 km. The entire set of dikes forms a line extending for 250 km. Individual segments overlap, with the overlapping portions thinner, so that the combined thickness of the two overlapped portions is about the same as the thickness of a single segment. Other examples of "en echelon" dikes are the Inyo dike of Long Valley, California, US; the Jagged Rocks complex, Arizona, US; and the dikes of oceanic spreading centers.
Dikes range in composition from basaltic to rhyolitic, but most are basaltic. The texture is typically slightly coarser than basalt erupted at the surface, forming a rock type called diabase. The grain size varies systematically across the dike, with the coarsest grains normally at the center of the dike. Dikes formed at shallow depth commonly have a glassy or fine-grained chilled margin 1 to 5 cm thick, formed where the magma was rapidly cooled by contact with the cold surrounding rock. Shallow dikes also typically show columnar jointing perpendicular to the margins. Here the dike rock fractures into columns as it cools and contracts. These are usually 5- to 6-sided, but 3- to 4-sided columns are also common. These are fairly uniform in size within a single dike, but range from a few centimeters to over 0.3 meters across in different dikes, tending to be thicker in wider dikes. Larger columns are likely a consequence of slower cooling.
Dike rock is usually dense, with almost no vesicles (frozen bubbles), but vesicles may be seen in the shallowest part of a dike. When vesicles are present, they tend to form bands parallel to walls and are elongated in direction of flow. Likewise, phenocrysts (larger crystals) on the margins of the dike show an alignment in the direction of flow.
In contrast to dikes, which cut across the bedding of layered rock, a sill is a sheet intrusion that forms within and parallel to the bedding.
Formation.
Mafic magma (fluid magma low in silica) usually reaches the surface through fissures, forming dikes.
At the shallowest depths, dikes form when magma rises into an existing fissure. In the young, shallow dikes of the Hawaiian Islands, there is no indication of forceful intrusion of magma. For example, there is little penetration of magma into the walls of dikes even when the walls consist of highly porous volcanic clinker, and little wall material breaks off into the molten magma. These fissures likely open as a result of bulging of the rock beds above a magma chamber that is being filled with magma from deeper in the crust.
However, open fractures can exist only near the surface. Magma deeper in the crust must force its way through the rock, always opening a path along a plane normal to the minimum principal stress. This is the direction in which the crust is under the weakest compression and so requires the least work to fracture. At shallow depths, where the rock is brittle, the pressurized magma progressively fractures the rock as it advances upwards. Even if the magma is only slightly pressurized compared with the surrounding rock, tremendous stress is concentrated on the tip of the propagating fracture. In effect, the magma wedges apart the brittle rock in a process called "hydraulic fracture". At greater depths, where the rock is hotter and less brittle, the magma forces the rock aside along brittle shear planes oriented at 35 degrees to the sides of the dike. This bulldozer-like action produces a blunter dike tip. At the greatest depths, the shear planes become ductile faults, angled at 45 degrees from the sides of the dike. At depths where the rock is completely plastic, a diapir (a rising plug of magma) forms instead of a dike.
The walls of dikes often fit closely back together, providing strong evidence that the dike formed by dilatation of a fissure. However, a few large dikes, such as the 120-meter-thick Medford dike in Maine, US, or the 500-meter-thick Gardar dike in Greenland, show no dilatation. These may have formed by "stoping", in which the magma fractured and disintegrated the rock at its advancing tip rather than prying the rock apart. Other dikes may have formed by "metasomatism", in which fluids moving along a narrow fissure changed the chemical composition of the rock closest to the fissure.
There is an approximate relationship between the width of a dike and its maximum extent, expressed by the formula:
formula_0
Here formula_1 is the thickness of the dike; formula_2 is its lateral extent; formula_3 is the excess pressure in the magma relative to the host rock; formula_4 is the density of the host rock; and formula_5 is the P-wave velocity of the host rock (essentially, the speed of sound in the rock). This formula predicts that dikes will be longer and narrower at greater depths below the surface. The ratio of thickness to length is around 0.01 to 0.001 near the surface, but at depth it ranges from 0.001 to 0.0001. A surface dike 10 meters in thickness will extend about 3 km, while a dike of similar thickness at depth will extend about 30 km. This tendency of intruding magma to form shorter fissures at shallower depths has been put forward as an explanation of "en echelon" dikes. However, "en echelon" dikes have also been explained as a consequence of the direction of minimum principal stress changing as the magma ascends from deep to shallow levels in the crust.
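The following Python sketch illustrates the formula; the excess pressure and host-rock properties are assumed round numbers, chosen only to reproduce the order of magnitude quoted above:

```python
def dike_extent(thickness_m, p_excess_pa, rho_host, v_p):
    """Lateral extent from thickness/extent = 2.25 * P_ex / (rho * V_P**2)."""
    ratio = 2.25 * p_excess_pa / (rho_host * v_p ** 2)
    return thickness_m / ratio

# Shallow dike: slower P-waves, larger thickness-to-length ratio, shorter dike.
print(dike_extent(10, 1e7, 2700, 2000) / 1000)   # ~4.8 km (assumed values)
# Deep dike: faster P-waves, smaller ratio, much longer dike.
print(dike_extent(10, 1e7, 2900, 6500) / 1000)   # ~54 km (assumed values)
```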
An "en echelon" dike set may evolve into single dike with "bridges" connecting the formerly separate segments and "horns" showing former segment overlaps. In ancient dikes in deformed rock, the bridges and horns are used by geologists to determine the direction of magma flow.
Where there is rapid flow of molten magma through a fissure, the magma tends to erode the walls, either by melting the wall rock or by tearing off fragments of wall rock. This widens the fissure and increases flow. Where flow is less rapid, the magma may solidify next to the wall, narrowing the fissure and decreasing flow. This causes flow to become concentrated at a few points. At Hawaii, eruptions often begin with a "curtain of fire" where lava erupts along the entire length of a fissure several kilometers long. However, the length of erupting fissure diminishes over time, becoming focused on a short segment of less than half a kilometer. The minimum possible width of a dike is determined by the balance between magma movement and cooling.
Multiple and composite dikes.
There may be more than one injection of magma along a given fissure. When multiple injections are all of similar composition, the dike is described as a "multiple dike". However, subsequent injections are sometimes quite different in composition, and then the dike is described as a "composite dike". The range of compositions in a composite dike can go all the way from diabase to granite, as is observed in some dikes of Scotland and northern Ireland.
After the initial formation of a dike, subsequent injections of magma are most likely to take place along the center of the dike. If the previous dike rock has cooled significantly, the subsequent injection can be characterized by fracturing of the old dike rock and the formation of chilled margins on the new injection.
Dike swarms.
Sometimes dikes appear in swarms, consisting of several to hundreds of dikes emplaced more or less contemporaneously during a single intrusive event. Dike swarms are almost always composed of diabase and most often are associated with flood basalts of large igneous provinces. They are characteristic of divergent plate boundaries. For example, Jurassic dike swarms in New England, northern England, and the west coast of Scotland record the early opening of the Atlantic Ocean. Dike swarms are forming in the present day along the divergent plate boundary running through Iceland. Dike swarms often have a great cumulative thickness: Dikes in Iceland average 3 to 5 meters in width, but one 53-kilometer stretch of coast has about 1000 dikes with total thickness of 3 kilometers. The world's largest dike swarm is the Mackenzie dike swarm in the Northwest Territories, Canada.
Dike swarms (also called "dike complexes") are exposed in the eroded rift zones of Hawaiian volcanoes. As with most other magmatic dikes, these were fissures through which lava reached the surface. The swarms are typically 2.5 to 5 km in width, with individual dikes about a meter in width. The dike swarms extend radially out from volcano summits and parallel to the long axis of the volcanic shield. Sills and stocks are occasionally present in the complexes. They are abruptly truncated at the margins of summit calderas. Typically, there are about 50 to 100 dikes per kilometer at the center of the rift zone, though the density can be as high as 500 per kilometer and the dikes then make up half the volume of the rock. The density drops to 5 to 50 per kilometer away from the center of the rift zone before abruptly dropping to very few dikes. It is likely that the number of dikes must increase with depth, reaching a typical value of 300 to 350 per kilometer at the level of the ocean floor. In some respects, these dike swarms resemble those of western Scotland associated with the flood eruptions that preceded the opening of the Atlantic Ocean.
Dikes often form as radial swarms from a central volcano or intrusion. Though they appear to originate in the central intrusion, the dikes often have a different age and composition from the intrusion. These radial swarms may have formed over the intrusion and were later cut by the rising body of magma, or the crust was already experiencing regional tension and the intrusion triggered formation of the fissures.
Sheeted dike complexes.
In rock of the oceanic crust, pillow lava erupted onto the sea floor is underlain by "sheeted dike complexes" that preserve the conduits through which magma reached the ocean floor at mid-ocean ridges. These sheeted dikes characteristically show a chilled margin on only one side, indicating that each dike was split in half by a subsequent eruption of magma.
Ring dikes and cone sheets.
Ring dikes and cone sheets are special types of dikes associated with caldera volcanism. These are distributed around a shallow magma chamber. Cone sheets form when magma is injected into a shallow magma chamber, which lifts and fractures the rock beds above it. The fractures take the form of a set of concentric cones dipping at a relatively shallow angle into the magma chamber. When the caldera is subsequently emptied by explosive volcanic activity, the roof of the magma chamber collapses as a plug of rock surrounded by a ring fracture. Magma rising into the ring fracture produces a ring dike. Good examples of ring dikes and cone sheets are found in the Ardnamurchan peninsula of Scotland.
Other special types.
A feeder dike is a dike that acted as a conduit for magma moving from a magma chamber to a localized intrusion. For example, the Muskox intrusion in arctic Canada was fed by a large dike, with a thickness of 150 meters.
A sole injection is a dike injected along a thrust fault plane, where rock beds were fractured and thrust up over younger beds.
Clastic dikes.
Clastic dikes (also known as sedimentary dikes) are vertical bodies of sedimentary rock that cut across other rock layers. They can form in two ways: by passive infilling, when sediment washes or falls into an open fissure from above, or by injection, when sediment under pressure (often liquefied during an earthquake) is forced upward into a fracture from below.
| [
{
"math_id": 0,
"text": "\\frac{2w}{2b}=\\frac{2.25P_{ex}}{\\rho_{host}V_P^2}"
},
{
"math_id": 1,
"text": "w"
},
{
"math_id": 2,
"text": "b"
},
{
"math_id": 3,
"text": "P_{ex}"
},
{
"math_id": 4,
"text": "\\rho_{host}"
},
{
"math_id": 5,
"text": "V_P"
}
] | https://en.wikipedia.org/wiki?curid=975686 |
9757480 | Umbilical point | In the differential geometry of surfaces in three dimensions, umbilics or umbilical points are points on a surface that are locally spherical. At such points the normal curvatures in all directions are equal, hence, both principal curvatures are equal, and every tangent vector is a "principal direction". The name "umbilic" comes from the Latin "umbilicus" (navel).
Umbilic points generally occur as isolated points in the elliptical region of the surface; that is, where the Gaussian curvature is positive.
Unsolved problem in mathematics:
Does every smooth topological sphere in Euclidean space have at least two umbilics?
The sphere is the only surface with non-zero curvature where every point is umbilic. A flat umbilic is an umbilic with zero Gaussian curvature. The monkey saddle is an example of a surface with a flat umbilic and on the plane every point is a flat umbilic. A closed surface topologically equivalent to a torus may have no umbilics at all, but every closed surface of nonzero Euler characteristic, embedded smoothly into Euclidean space, has at least one umbilic.
The three main types of umbilic points are elliptical umbilics, parabolic umbilics and hyperbolic umbilics. Elliptical umbilics have the three ridge lines passing through the umbilic and hyperbolic umbilics have just one. Parabolic umbilics are a transitional case with two ridges one of which is singular. Other configurations are possible for transitional cases. These cases correspond to the "D"4−, "D"5 and "D"4+ elementary catastrophes of René Thom's catastrophe theory.
Umbilics can also be characterised by the pattern of the principal direction vector field around the umbilic which typically form one of three configurations: star, lemon, and lemonstar (or monstar). The index of the vector field is either −½ (star) or ½ (lemon, monstar). Elliptical and parabolic umbilics always have the star pattern, whilst hyperbolic umbilics can be star, lemon, or monstar. This classification was first due to Darboux and the names come from Hannay.
For surfaces with genus 0 with isolated umbilics, e.g. an ellipsoid, the index of the principal direction vector field must be 2 by the Poincaré–Hopf theorem. Generic genus 0 surfaces have at least four umbilics of index ½. An ellipsoid of revolution has two non-generic umbilics each of which has index 1.
Classification of umbilics.
Cubic forms.
The classification of umbilics is closely linked to the classification of real cubic forms formula_0. A cubic form will have a number of root lines formula_1 such that the cubic form is zero for all real formula_2. There are a number of possibilities including:
• Three distinct root lines: an "elliptical cubic form", standard model formula_3.
• Three root lines, two of which coincide: a "parabolic cubic form", standard model formula_4.
• A single real root line: a "hyperbolic cubic form", standard model formula_5.
• Three coincident root lines, standard model formula_6.
The equivalence classes of such cubics under uniform scaling form a three-dimensional real projective space and the subset of parabolic forms defines a surface – called the umbilic bracelet by Christopher Zeeman. Taking equivalence classes under rotation of the coordinate system removes one further parameter, and a cubic form can be represented by the complex cubic form formula_7 with a single complex parameter formula_8. Parabolic forms occur when formula_9, the inner deltoid; elliptical forms are inside the deltoid and hyperbolic ones outside. If formula_10 and formula_8 is not a cube root of unity then the cubic form is a "right-angled cubic form", which plays a special role for umbilics. If formula_11 then two of the root lines are orthogonal.
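The count of real root lines can be checked numerically. The following Python sketch (illustrative only) sets "y" = 1 and counts the real roots of the resulting univariate cubic, adding the line "y" = 0 when the coefficient of "x"3 vanishes:

```python
import numpy as np

def real_root_lines(a, b, c, d, tol=1e-9):
    """Real root lines of a*x^3 + 3b*x^2*y + 3c*x*y^2 + d*y^3 (with multiplicity)."""
    roots = np.roots([a, 3 * b, 3 * c, d])   # roots with y = 1 (leading zeros trimmed)
    n = int(sum(abs(r.imag) < tol for r in roots))
    return n + (1 if a == 0 else 0)          # the line y = 0 is a root line iff a = 0

print(real_root_lines(0, 1/3, 0, -1))  # x^2*y - y^3: 3 root lines (elliptical form)
print(real_root_lines(0, 1/3, 0,  1))  # x^2*y + y^3: 1 root line (hyperbolic form)
```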
A second cubic form, the "Jacobian" is formed by taking the Jacobian determinant of the vector valued function formula_12, formula_13. Up to a constant multiple this is the cubic form formula_14. Using complex numbers the Jacobian is a parabolic cubic form when formula_15, the outer deltoid in the classification diagram.
Umbilic classification.
Any surface with an isolated umbilic point at the origin can be expressed as a Monge form parameterisation formula_16, where formula_17 is the unique principal curvature. The type of umbilic is classified by the cubic form from the cubic part and the corresponding Jacobian cubic form. While principal directions are not uniquely defined at an umbilic, the limits of the principal directions when following a ridge on the surface can be found, and these correspond to the root lines of the cubic form. The pattern of lines of curvature is determined by the Jacobian.
The classification of umbilic points is as follows:
In a generic family of surfaces umbilics can be created, or destroyed, in pairs: the "birth of umbilics" transition. Both umbilics will be hyperbolic, one with a star pattern and one with a monstar pattern. The outer circle in the diagram, a right angle cubic form, gives these transitional cases. Symbolic umbilics are a special case of this.
Focal surface.
The elliptical umbilics and hyperbolic umbilics have distinctly different focal surfaces. A ridge on the surface corresponds to a cuspidal edge, so each sheet of the elliptical focal surface will have three cuspidal edges which come together at the umbilic focus and then switch to the other sheet. For a hyperbolic umbilic there is a single cuspidal edge which switches from one sheet to the other.
Definition in higher dimension in Riemannian manifolds.
A point "p" in a Riemannian submanifold is umbilical if, at "p", the (vector-valued) Second fundamental form is some normal vector tensor the induced metric (First fundamental form). Equivalently, for all vectors "U", "V" at "p", II("U", "V") = "g""p"("U", "V")formula_18, where formula_18 is the mean curvature vector at "p".
A submanifold is said to be umbilic (or all-umbilic) if this condition holds at every point "p". This is equivalent to saying that the submanifold can be made totally geodesic by an appropriate conformal change of the metric of the surrounding ("ambient") manifold. For example, a surface in Euclidean space is umbilic if and only if it is a piece of a sphere. | [
{
"math_id": 0,
"text": "a x^3 + 3 b x^2 y + 3 c x y^2 + d y^3"
},
{
"math_id": 1,
"text": "\\lambda (x,y)"
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": "x^2 y-y^3"
},
{
"math_id": 4,
"text": "x^2 y"
},
{
"math_id": 5,
"text": "x^2 y+y^3"
},
{
"math_id": 6,
"text": "x^3"
},
{
"math_id": 7,
"text": "z^3+3 \\overline{\\beta} z^2 \\overline{z} + 3 \\beta z \\overline{z}^2 + \\overline{z}^3"
},
{
"math_id": 8,
"text": "\\beta"
},
{
"math_id": 9,
"text": "\\beta=\\tfrac{1}{3}(2 e^{i\\theta}+e^{-2 i\\theta})"
},
{
"math_id": 10,
"text": "\\left |\\beta\\right |=1"
},
{
"math_id": 11,
"text": "\\left |\\beta\\right |=\\tfrac{1}{3}"
},
{
"math_id": 12,
"text": "F : \\mathbb{R}^2 \\rightarrow \\mathbb{R}^2"
},
{
"math_id": 13,
"text": "F(x,y)=(x^2+y^2,a x^3 + 3 b x^2 y + 3 c x y^2 + d y^3)"
},
{
"math_id": 14,
"text": "b x^3+(2 c-a)x^2 y+(d-2 b)x y^2-c y^3"
},
{
"math_id": 15,
"text": "\\beta=-2 e^{i\\theta}-e^{-2 i\\theta}"
},
{
"math_id": 16,
"text": "z=\\tfrac{1}{2}\\kappa(x^2+y^2)+\\tfrac{1}{3}(a x^3 + 3 b x^2 y + 3 c x y^2 + d y^3)+\\ldots"
},
{
"math_id": 17,
"text": "\\kappa"
},
{
"math_id": 18,
"text": "\\nu"
}
] | https://en.wikipedia.org/wiki?curid=9757480 |
9758445 | Maximal semilattice quotient | In abstract algebra, a branch of mathematics, a maximal semilattice quotient is a commutative monoid derived from another commutative monoid by making certain elements equivalent to each other.
Every commutative monoid "M" can be endowed with its "algebraic" preordering ≤: by definition, "x" ≤ "y" holds if there exists "z" such that "x" + "z" = "y". Further, for "x", "y" in "M", let formula_0 hold if there exists a positive integer "n" such that "x" ≤ "ny", and let formula_1 hold if formula_0 and formula_2. The binary relation formula_3 is a monoid congruence of "M", and the quotient monoid formula_4 is the "maximal semilattice quotient" of "M".
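The construction can be illustrated on a toy example. The following Python sketch computes the ≍-classes directly from the definitions; the four-element monoid with truncated addition is a hypothetical example, not from the text:

```python
# Toy commutative monoid: {0, 1, 2, 3} with truncated addition min(x + y, 3).
M = range(4)
add = lambda x, y: min(x + y, 3)

leq = lambda x, y: any(add(x, z) == y for z in M)   # algebraic preordering
def n_times(y, n):                                  # n*y computed inside the monoid
    s = 0
    for _ in range(n):
        s = add(s, y)
    return s
prop = lambda x, y: any(leq(x, n_times(y, n)) for n in range(1, len(M) + 1))
asym = lambda x, y: prop(x, y) and prop(y, x)       # the congruence from the text

classes = {frozenset(y for y in M if asym(x, y)) for x in M}
print(sorted(sorted(c) for c in classes))  # [[0], [1, 2, 3]]: a 2-element semilattice
```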
This terminology can be explained by the fact that the canonical projection "p" from "M" onto formula_4 is universal among all monoid homomorphisms from "M" to a (∨,0)-semilattice, that is, for any (∨,0)-semilattice "S" and any monoid homomorphism "f: M→ S", there exists a unique (∨,0)-homomorphism formula_5 such that "f=gp".
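For a finite commutative monoid, the congruence formula_3 can be computed by brute force from the definitions above. The following Python sketch is an illustration only; the monoid (addition on {0, 1, 2, 3}, saturating at 3) is a made-up example, and the resulting quotient is the two-element semilattice:
M = range(4)                              # a toy monoid: {0, 1, 2, 3}
def add(x, y):                            # saturating addition, capped at 3
    return min(x + y, 3)
def leq(x, y):                            # algebraic preordering: x + z = y for some z
    return any(add(x, z) == y for z in M)
def propto(x, y):                         # x "propto" y: x <= n*y for some n >= 1
    acc = y
    for _ in M:                           # n up to |M| suffices in a finite monoid
        if leq(x, acc):
            return True
        acc = add(acc, y)
    return False
def equiv(x, y):                          # the congruence from the definition
    return propto(x, y) and propto(y, x)
reps = []
for x in M:
    if not any(equiv(x, r) for r in reps):
        reps.append(x)
print([[y for y in M if equiv(r, y)] for r in reps])   # [[0], [1, 2, 3]]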
If "M" is a refinement monoid, then formula_4 is a distributive semilattice. | [
{
"math_id": 0,
"text": "x\\propto y"
},
{
"math_id": 1,
"text": "x\\asymp y"
},
{
"math_id": 2,
"text": "y\\propto x"
},
{
"math_id": 3,
"text": "\\asymp"
},
{
"math_id": 4,
"text": "M/{\\asymp}"
},
{
"math_id": 5,
"text": "g\\colon M/{\\asymp}\\to S"
}
] | https://en.wikipedia.org/wiki?curid=9758445 |
9759038 | Banach's matchbox problem | Banach's match problem is a classic problem in probability attributed to Stefan Banach. Feller says that the problem was inspired by a humorous reference to Banach's smoking habit in a speech honouring him by Hugo Steinhaus, but that it was not Banach who set the problem or provided an answer.
Suppose a mathematician carries two matchboxes at all times: one in his left pocket and one in his right. Each time he needs a match, he is equally likely to take it from either pocket. Suppose he reaches into his pocket and discovers for the first time that the box picked is empty. If it is assumed that each of the matchboxes originally contained formula_0 matches, what is the probability that there are exactly formula_1 matches in the other box?
Solution.
Without loss of generality, consider the case where the matchbox in his right pocket has an unlimited number of matches, and let formula_2 be the number of matches removed from it before the left box is found to be empty. When the left pocket is found to be empty, the man has chosen that pocket formula_3 times. Then formula_2 is the number of successes before formula_3 failures in Bernoulli trials with formula_4, which has the negative binomial distribution, and thus
formula_5.
Returning to the original problem, we see that the probability that the left pocket is found to be empty first is formula_6, which equals formula_7 because both pockets are equally likely. We see that the number formula_8 of matches remaining in the other pocket is
formula_9.
The expectation of the distribution is approximately formula_10. (This is shown using Stirling's approximation.) So starting with boxes with formula_11 matches, the expected number of matches in the second box is formula_12.
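The distribution and its mean are easy to check numerically; a short illustrative Python computation:
from math import comb, sqrt, pi
def pmf(N, k):                                      # P[K = k] for 0 <= k <= N
    return comb(2 * N - k, N - k) * 0.5 ** (2 * N - k)
N = 40
probs = [pmf(N, k) for k in range(N + 1)]
print(sum(probs))                                   # 1.0 (sanity check)
print(sum(k * p for k, p in enumerate(probs)))      # about 6.2, the exact mean
print(2 * sqrt(N / pi) - 1)                         # about 6.1, the approximation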
| [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "(N+1)"
},
{
"math_id": 4,
"text": "p=1/2"
},
{
"math_id": 5,
"text": "P[M=m] = \\binom{N+m}{m}\\left(\\frac{1}{2}\\right)^{N+1+m}"
},
{
"math_id": 6,
"text": "P[M<N+1]"
},
{
"math_id": 7,
"text": "1/2"
},
{
"math_id": 8,
"text": "K"
},
{
"math_id": 9,
"text": "P[K=k] = P[M=N-k|M<N+1] = 2P[M=N-k] = \\binom{2N-k}{N - k}\\left(\\frac{1}{2}\\right)^{2N-k}"
},
{
"math_id": 10,
"text": "2\\sqrt{N/\\pi}-1"
},
{
"math_id": 11,
"text": "N=40"
},
{
"math_id": 12,
"text": "6"
}
] | https://en.wikipedia.org/wiki?curid=9759038 |
97592 | Bucket sort | Sorting algorithm
Bucket sort, or bin sort, is a sorting algorithm that works by distributing the elements of an array into a number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by recursively applying the bucket sorting algorithm. It is a distribution sort, a generalization of pigeonhole sort that allows multiple keys per bucket, and is a cousin of radix sort in the most-to-least significant digit flavor. Bucket sort can be implemented with comparisons and therefore can also be considered a comparison sort algorithm. The computational complexity depends on the algorithm used to sort each bucket, the number of buckets to use, and whether the input is uniformly distributed.
Bucket sort works as follows: first, set up an array of initially empty buckets; next, "scatter" by going over the original array and putting each object in its bucket; then, sort each non-empty bucket; finally, "gather" by visiting the buckets in order and putting all elements back into the original array.
Pseudocode.
function bucketSort(array, k) is
buckets ← new array of k empty lists
M ← 1 + the maximum key value in the array
for i = 0 to length(array) - 1 do
insert "array[i]" into "buckets[floor(k × array[i] / M)]"
for i = 0 to k - 1 do
nextSort(buckets[i])
return the concatenation of buckets[0], ..., buckets[k - 1]
Let "array" denote the array to be sorted and "k" denote the number of buckets to use. One can compute the maximum key value in linear time by iterating over all the keys once. The floor function must be used to convert a floating number to an integer ( and possibly casting of datatypes too ). The function "nextSort" is a sorting function used to sort each bucket. Conventionally, insertion sort is used, but other algorithms could be used as well, such as "selection sort" or "merge sort". Using "bucketSort" itself as "nextSort" produces a relative of radix sort; in particular, the case "n = 2" corresponds to quicksort (although potentially with poor pivot choices).
Analysis.
Worst-case analysis.
When the input contains several keys that are close to each other (clustering), those elements are likely to be placed in the same bucket, which results in some buckets containing more elements than average. The worst-case scenario occurs when all the elements are placed in a single bucket. The overall performance would then be dominated by the algorithm used to sort each bucket, for example formula_1 insertion sort or formula_2 comparison sort algorithms, such as merge sort.
Average-case analysis.
Consider the case that the input is uniformly distributed. The first step, which is to initialize the buckets and find the maximum key value in the array, can be done in formula_3 time. If division and multiplication can be done in constant time, then scattering each element to its bucket also costs formula_3. Assuming insertion sort is used to sort each bucket, the third step costs formula_4, where formula_5 is the length of the bucket indexed formula_6. Since we are concerned with the average time, the expectation formula_7 has to be evaluated instead. Let formula_8 be the random variable that is formula_9 if element formula_10 is placed in bucket formula_6, and formula_11 otherwise. We have formula_12. Therefore,
formula_13
The last line separates the summation into the case formula_14 and the case formula_15. Since the chance of an object being distributed to bucket formula_6 is formula_16, formula_17 is 1 with probability formula_16 and 0 otherwise.
formula_18
formula_19
With the summation, it would be
formula_20
Finally, the complexity would be formula_21.
The last step of bucket sort, which is concatenating all the sorted objects in each bucket, requires formula_22 time. Therefore, the total complexity is formula_0. Note that if k is chosen to be formula_23, then bucket sort runs in formula_3 average time, given a uniformly distributed input.
Optimizations.
A common optimization is to put the unsorted elements of the buckets back in the original array "first", then run insertion sort over the complete array; because insertion sort's runtime is based on how far each element is from its final position, the number of comparisons remains relatively small, and the memory hierarchy is better exploited by storing the list contiguously in memory.
If the input distribution is known or can be estimated, buckets can often be chosen which contain constant density (rather than merely having constant size). This allows formula_3 average time complexity even without uniformly distributed input.
Variants.
Generic bucket sort.
The most common variant of bucket sort operates on a list of "n" numeric inputs between zero and some maximum value "M" and divides the value range into "b" buckets each of size "M"/"b". If each bucket is sorted using insertion sort, the sort can be shown to run in expected linear time (where the average is taken over all possible inputs). However, the performance of this sort degrades with clustering; if many values occur close together, they will all fall into a single bucket and be sorted slowly. This performance degradation is avoided in the original bucket sort algorithm by assuming that the input is generated by a random process that distributes elements uniformly over the interval "[0,1)".
ProxmapSort.
Similar to generic bucket sort as described above, ProxmapSort works by dividing an array of keys into subarrays via the use of a "map key" function that preserves a partial ordering on the keys; as each key is added to its subarray, insertion sort is used to keep that subarray sorted, resulting in the entire array being in sorted order when ProxmapSort completes. ProxmapSort differs from bucket sorts in its use of the map key to place the data approximately where it belongs in sorted order, producing a "proxmap" — a proximity mapping — of the keys.
Histogram sort.
Another variant of bucket sort known as histogram sort or counting sort adds an initial pass that counts the number of elements that will fall into each bucket using a count array. Using this information, the array values can be arranged into a sequence of buckets in-place by a sequence of exchanges, leaving no space overhead for bucket storage.
Postman's sort.
The Postman's sort is a variant of bucket sort that takes advantage of a hierarchical structure of elements, typically described by a set of attributes. This is the algorithm used by letter-sorting machines in post offices: mail is sorted first between domestic and international; then by state, province or territory; then by destination post office; then by routes, etc. Since keys are not compared against each other, sorting time is O("cn"), where "c" depends on the size of the key and number of buckets. This is similar to a radix sort that works "top down," or "most significant digit first."
Shuffle sort.
The shuffle sort is a variant of bucket sort that begins by removing the first 1/8 of the "n" items to be sorted, sorts them recursively, and puts them in an array. This creates "n"/8 "buckets" to which the remaining 7/8 of the items are distributed. Each "bucket" is then sorted, and the "buckets" are concatenated into a sorted array.
Comparison with other sorting algorithms.
Bucket sort can be seen as a generalization of counting sort; in fact, if each bucket has size 1 then bucket sort degenerates to counting sort. The variable bucket size of bucket sort allows it to use O("n") memory instead of O("M") memory, where "M" is the number of distinct values; in exchange, it gives up counting sort's O("n" + "M") worst-case behavior.
Bucket sort with two buckets is effectively a version of quicksort where the pivot value is always selected to be the middle value of the value range. While this choice is effective for uniformly distributed inputs, other means of choosing the pivot in quicksort such as randomly selected pivots make it more resistant to clustering in the input distribution.
The "n"-way mergesort algorithm also begins by distributing the list into "n" sublists and sorting each one; however, the sublists created by mergesort have overlapping value ranges and so cannot be recombined by simple concatenation as in bucket sort. Instead, they must be interleaved by a merge algorithm. However, this added expense is counterbalanced by the simpler scatter phase and the ability to ensure that each sublist is the same size, providing a good worst-case time bound.
Top-down radix sort can be seen as a special case of bucket sort where both the range of values and the number of buckets is constrained to be a power of two. Consequently, each bucket's size is also a power of two, and the procedure can be applied recursively. This approach can accelerate the scatter phase, since we only need to examine a prefix of the bit representation of each element to determine its bucket. | [
{
"math_id": 0,
"text": "O\\left(n+\\frac{n^2}{k}+k\\right)"
},
{
"math_id": 1,
"text": "O(n^2)"
},
{
"math_id": 2,
"text": "O(n \\log(n))"
},
{
"math_id": 3,
"text": "O(n)"
},
{
"math_id": 4,
"text": "O(\\textstyle \\sum_{i=1}^k \\displaystyle n_i^2)"
},
{
"math_id": 5,
"text": "n_i"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": "E(n_i^2)"
},
{
"math_id": 8,
"text": "X_{ij}"
},
{
"math_id": 9,
"text": "1"
},
{
"math_id": 10,
"text": "j"
},
{
"math_id": 11,
"text": "0"
},
{
"math_id": 12,
"text": "n_i = \\sum_{j=1}^n X_{ij}"
},
{
"math_id": 13,
"text": "\\begin{align}\nE(n_i^2) & = E\\left(\\sum_{j=1}^n X_{ij} \\sum_{l=1}^n X_{il}\\right) \\\\\n& = E\\left(\\sum_{j=1}^n \\sum_{l=1}^n X_{ij}X_{il}\\right) \\\\\n& = E\\left(\\sum_{j=1}^n X_{ij}^2\\right) + E\\left(\\sum_{1\\leq j,l\\leq n}\\sum_{j\\neq l}X_{ij}X_{il}\\right)\n\\end{align} "
},
{
"math_id": 14,
"text": "j=l"
},
{
"math_id": 15,
"text": "j\\neq l"
},
{
"math_id": 16,
"text": "1/k"
},
{
"math_id": 17,
"text": "X_{ij} "
},
{
"math_id": 18,
"text": "E(X_{ij}^2) = 1^2\\cdot \\left(\\frac{1}{k}\\right) + 0^2\\cdot \\left(1-\\frac{1}{k}\\right) = \\frac{1}{k}"
},
{
"math_id": 19,
"text": "E(X_{ij}X_{ik}) = 1\\cdot \\left(\\frac{1}{k}\\right)\\left(\\frac{1}{k}\\right) = \\frac{1}{k^2} "
},
{
"math_id": 20,
"text": "E\\left(\\sum_{j=1}^n X_{ij}^2\\right) + E\\left(\\sum_{1\\leq j,k\\leq n}\\sum_{j\\neq k}X_{ij}X_{ik}\\right) = n\\cdot\\frac{1}{k} + n(n-1)\\cdot\\frac{1}{k^2} = \\frac{n^2+nk-n}{k^2}"
},
{
"math_id": 21,
"text": "O\\left(\\sum_{i=1}^kE(n_i^2)\\right) = O\\left(\\sum_{i=1}^k \\frac{n^2+nk-n}{k^2}\\right) = O\\left(\\frac{n^2}{k}+n\\right) "
},
{
"math_id": 22,
"text": "O(k)"
},
{
"math_id": 23,
"text": "k = \\Theta(n)"
}
] | https://en.wikipedia.org/wiki?curid=97592 |
976354 | Maximum operating depth | Depth below which the partial pressure of oxygen (pO2) of the gas mix exceeds an acceptable limit
In underwater diving activities such as saturation diving, technical diving and nitrox diving, the maximum operating depth (MOD) of a breathing gas is the depth below which the partial pressure of oxygen (pO2) of the gas mix exceeds an acceptable limit. This limit is based on risk of central nervous system oxygen toxicity, and is somewhat arbitrary, and varies depending on the diver training agency or Code of Practice, the level of underwater exertion expected and the planned duration of the dive, but is normally in the range of 1.2 to 1.6 bar.
The MOD is significant when planning dives using gases such as heliox, nitrox and trimix because the proportion of oxygen in the mix determines a maximum depth for breathing that gas at an acceptable risk. There is a risk of acute oxygen toxicity if the MOD is exceeded. The tables below show MODs for a selection of oxygen mixes. Atmospheric air contains approximately 21% oxygen, and has an MOD calculated by the same method.
Safe limit of partial pressure of oxygen.
Acute, or central nervous system oxygen toxicity is a time variable response to the partial pressure exposure history of the diver and is both complex and not fully understood.
Central nervous system oxygen toxicity manifests as symptoms such as visual changes (especially tunnel vision), ringing in the ears (tinnitus), nausea, twitching (especially of the face), behavioural changes (irritability, anxiety, confusion), and dizziness. This may be followed by a tonic–clonic seizure consisting of two phases: intense muscle contraction occurs for several seconds (tonic phase); followed by rapid spasms of alternate muscle relaxation and contraction producing convulsive jerking (clonic phase). The seizure ends with a period of unconsciousness (the postictal state). The onset of seizure depends upon the partial pressure of oxygen in the breathing gas and exposure duration. However, exposure time before onset is unpredictable, as tests have shown a wide variation, both amongst individuals, and in the same individual from day to day. In addition, many external factors, such as underwater immersion, exposure to cold, and exercise will decrease the time to onset of central nervous system symptoms. Decrease of tolerance is closely linked to retention of carbon dioxide. Other factors, such as darkness and caffeine, increase tolerance in test animals, but these effects have not been proven in humans.
The maximum single exposure limits recommended in the NOAA Diving Manual are 45 minutes at 1.6 bar, 120 minutes at 1.5 bar, 150 minutes at 1.4 bar, 180 minutes at 1.3 bar and 210 minutes at 1.2 bar.
Formula.
The formula simply divides the absolute partial pressure of oxygen which can be tolerated (expressed in atm or bar) by the fraction of oxygen in the breathing gas, to calculate the absolute pressure at which the mix can be breathed. (For example, 50% nitrox can be breathed at twice the pressure of 100% oxygen, so divide by 0.5, etc.) Of this total pressure which can be tolerated by the diver, 1 atmosphere is due to the surface pressure of the Earth's air, and the rest is due to the depth in water. So the 1 atmosphere or bar contributed by the air is subtracted to give the pressure due to the depth of water. The pressure produced by depth in water is converted to feet sea water (fsw) or metres sea water (msw) by multiplying by the appropriate conversion factor, 33 fsw per atm, or 10 msw per bar.
In feet
formula_0
In which pO2 is the chosen maximum partial pressure of oxygen in atmospheres absolute and the FO2 is the fraction of oxygen in the mixture. For example, if a gas contains 36% oxygen (FO2 = 0.36) and the limiting maximum pO2 is chosen at 1.4 atmospheres absolute, the MOD in feet of seawater (fsw) is 33 fsw/atm x [(1.4 ata / 0.36) − 1] = 95.3 fsw.
In metres
formula_1
In which pO2 is the chosen maximum partial pressure of oxygen in bar and the FO2 is the fraction of oxygen in the mixture. For example, if a gas contains 36% oxygen and the maximum pO2 is 1.4 bar, the MOD (msw) is 10 msw/bar x [(1.4 bar / 0.36) − 1] = 28.9 msw.
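Both forms of the formula are trivial to compute; an illustrative Python sketch reproducing the worked examples above:
def mod_msw(fo2, max_po2=1.4):
    # Maximum operating depth in metres of seawater.
    return 10 * (max_po2 / fo2 - 1)
def mod_fsw(fo2, max_po2=1.4):
    # Maximum operating depth in feet of seawater.
    return 33 * (max_po2 / fo2 - 1)
print(round(mod_msw(0.36), 1))   # 28.9 msw for a 36% oxygen mix at pO2 = 1.4 bar
print(round(mod_fsw(0.36), 1))   # 95.3 fsw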
Tables.
These depths are rounded to the nearest foot.
These depths are rounded to the nearest metre.
| [
{
"math_id": 0,
"text": "MOD (fsw) = 33\\mathrm{~fsw/atm} \\times \\left [\\left ({pO_2\\mathrm{~ata}\\over FO_2} \\right ) - 1\\right ]"
},
{
"math_id": 1,
"text": "MOD (msw) =10\\mathrm{~ msw/bar} \\times \\left [\\left ({pO_2\\mathrm{~ bar}\\over FO_2} \\right ) - 1\\right ]"
}
] | https://en.wikipedia.org/wiki?curid=976354 |
976365 | Divided differences | Algorithm for computing polynomial coefficients
In mathematics, divided differences is an algorithm, historically used for computing tables of logarithms and trigonometric functions. Charles Babbage's difference engine, an early mechanical calculator, was designed to use this algorithm in its operation.
Divided differences is a recursive division process. Given a sequence of data points formula_0, the method calculates the coefficients of the interpolation polynomial of these points in the Newton form.
Definition.
Given "n" + 1 data points
formula_1
where the formula_2 are assumed to be pairwise distinct, the forward divided differences are defined as:
formula_3
To make the recursive process of computation clearer, the divided differences can be put in tabular form, where the columns correspond to the value of "j" above, and each entry in the table is computed from the difference of the entries to its immediate lower left and to its immediate upper left, divided by a difference of corresponding "x-"values:
formula_4
Notation.
Note that the divided difference formula_5 depends on the values formula_6 and formula_7, but the notation hides the dependency on the "x"-values. If the data points are given by a function "f",
formula_8
one sometimes writes the divided difference in the notation
formula_9
Other notations for the divided difference of the function "ƒ" on the nodes "x"0, ..., "x""n" are:
formula_10
Example.
Divided differences for formula_11 and the first few values of formula_12:
formula_13
Thus, the table corresponding to these terms up to two columns has the following form:
formula_14
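The recursive definition translates directly into code; the following Python sketch (illustrative, with made-up sample data from the polynomial "x"3 + "x" + 1) builds the table and reads the Newton-form coefficients off the first entry of each column:
def divided_differences(xs, ys):
    # Entry j of the returned list holds the j-th order differences.
    n = len(xs)
    table = [list(ys)]                     # j = 0: the values themselves
    for j in range(1, n):
        prev = table[-1]
        table.append([(prev[k + 1] - prev[k]) / (xs[k + j] - xs[k])
                      for k in range(n - j)])
    return table
xs, ys = [0.0, 1.0, 2.0, 4.0], [1.0, 3.0, 11.0, 69.0]
print([col[0] for col in divided_differences(xs, ys)])   # [1.0, 2.0, 3.0, 1.0]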
Matrix form.
The divided difference scheme can be put into an upper triangular matrix:
formula_29
Then it holds:
formula_30
formula_31 if formula_32 is a scalar
formula_33
This follows from the Leibniz rule above. It means that multiplication of such matrices is commutative; summarised, the matrices of divided difference schemes with respect to the same set of nodes form a commutative ring.
Since formula_34 is a triangular matrix, its eigenvalues are obviously formula_35.
Let formula_36 be a Kronecker delta-like function, that is formula_37 Obviously formula_38, thus formula_36 is an eigenfunction of the pointwise function multiplication. That is, formula_39 is somehow an "eigenmatrix" of formula_40: formula_41. However, all columns of formula_39 are multiples of each other; the matrix rank of formula_39 is 1. So one can compose the matrix of all eigenvectors of formula_40 from the formula_42-th column of each formula_39. Denote the matrix of eigenvectors by formula_43. Example: formula_44 The diagonalization of formula_40 can be written as formula_45
Polynomials and power series.
The matrix
formula_46
contains the divided difference scheme for the identity function with respect to the nodes formula_47, thus formula_48 contains the divided differences for the power function with exponent formula_49.
Consequently, you can obtain the divided differences for a polynomial function formula_23 by applying formula_23 to the matrix formula_50: If
formula_51
and
formula_52
then
formula_53
This is known as "Opitz' formula".
Now consider increasing the degree of formula_23 to infinity, i.e. turn the Taylor polynomial into a Taylor series.
Let formula_26 be a function which corresponds to a power series.
You can compute the divided difference scheme for formula_26 by applying the corresponding matrix series to formula_50:
If
formula_54
and
formula_55
then
formula_56
Alternative characterizations.
Expanded form.
formula_57
With the help of the polynomial function formula_58 this can be written as
formula_59
Peano form.
If formula_60 and formula_61, the divided differences can be expressed as
formula_62
where formula_63 is the formula_64-th derivative of the function formula_26 and formula_65 is a certain B-spline of degree formula_66 for the data points formula_47, given by the formula
formula_67
This is a consequence of the Peano kernel theorem; it is called the "Peano form" of the divided differences and formula_65 is the "Peano kernel" for the divided differences, all named after Giuseppe Peano.
Forward and backward differences.
When the data points are equidistantly distributed we get the special case called forward differences. They are easier to calculate than the more general divided differences.
Given "n"+1 data points
formula_68
with
formula_69
the forward differences are defined as
formula_70
whereas the backward differences are defined as:
formula_71
Thus the forward difference table is written as:
formula_72
whereas the backwards difference table is written as:
formula_73
The relationship between divided differences and forward differences is
formula_74
whereas for backward differences:
formula_75
| [
{
"math_id": 0,
"text": "(x_0, y_0), \\ldots, (x_{n}, y_{n})"
},
{
"math_id": 1,
"text": "(x_0, y_0),\\ldots,(x_{n}, y_{n})"
},
{
"math_id": 2,
"text": "x_k"
},
{
"math_id": 3,
"text": "\\begin{align}\n\\mathopen[y_k] &:= y_k, && k \\in \\{ 0,\\ldots,n\\} \\\\\n\\mathopen[y_k,\\ldots,y_{k+j}] &:= \\frac{[y_{k+1},\\ldots , y_{k+j}] - [y_{k},\\ldots , y_{k+j-1}]}{x_{k+j}-x_k}, && k\\in\\{0,\\ldots,n-j\\},\\ j\\in\\{1,\\ldots,n\\}.\n\\end{align}"
},
{
"math_id": 4,
"text": "\n\\begin{matrix}\nx_0 & y_0 = [y_0] & & & \\\\\n & & [y_0,y_1] & & \\\\\nx_1 & y_1 = [y_1] & & [y_0,y_1,y_2] & \\\\\n & & [y_1,y_2] & & [y_0,y_1,y_2,y_3]\\\\\nx_2 & y_2 = [y_2] & & [y_1,y_2,y_3] & \\\\\n & & [y_2,y_3] & & \\\\\nx_3 & y_3 = [y_3] & & & \\\\\n\\end{matrix}\n"
},
{
"math_id": 5,
"text": "[y_k,\\ldots,y_{k+j}]"
},
{
"math_id": 6,
"text": "x_k,\\ldots,x_{k+j}"
},
{
"math_id": 7,
"text": "y_k,\\ldots,y_{k+j}"
},
{
"math_id": 8,
"text": "(x_0, y_0), \\ldots, (x_{k}, y_n)\n=(x_0, f(x_0)), \\ldots, (x_n, f(x_n))"
},
{
"math_id": 9,
"text": "f[x_k,\\ldots,x_{k+j}]\n\\ \\stackrel{\\text{def}}= \\ [f(x_k),\\ldots,f(x_{k+j})] \n= [y_k,\\ldots,y_{k+j}]."
},
{
"math_id": 10,
"text": "f[x_k,\\ldots,x_{k+j}]=\\mathopen[x_0,\\ldots,x_n]f= \n\\mathopen[x_0,\\ldots,x_n;f]=\nD[x_0,\\ldots,x_n]f."
},
{
"math_id": 11,
"text": "k=0"
},
{
"math_id": 12,
"text": "j"
},
{
"math_id": 13,
"text": "\n\\begin{align}\n \\mathopen[y_0] &= y_0 \\\\\n \\mathopen[y_0,y_1] &= \\frac{y_1-y_0}{x_1-x_0} \\\\\n \\mathopen[y_0,y_1,y_2]\n&= \\frac{\\mathopen[y_1,y_2]-\\mathopen[y_0,y_1]}{x_2-x_0}\n = \\frac{\\frac{y_2-y_1}{x_2-x_1}-\\frac{y_1-y_0}{x_1-x_0}}{x_2-x_0}\n = \\frac{y_2-y_1}{(x_2-x_1)(x_2-x_0)}-\\frac{y_1-y_0}{(x_1-x_0)(x_2-x_0)}\n\\\\\n \\mathopen[y_0,y_1,y_2,y_3] &= \\frac{\\mathopen[y_1,y_2,y_3]-\\mathopen[y_0,y_1,y_2]}{x_3-x_0}\n\\end{align}\n"
},
{
"math_id": 14,
"text": "\\begin{matrix}\n x_0 & y_{0} & & \\\\\n & & {y_{1}-y_{0}\\over x_1 - x_0} & \\\\\n x_1 & y_{1} & & {{y_{2}-y_{1}\\over x_2 - x_1}-{y_{1}-y_{0}\\over x_1 - x_0} \\over x_2 - x_0} \\\\\n & & {y_{2}-y_{1}\\over x_2 - x_1} & \\\\\n x_2 & y_{2} & & \\vdots \\\\\n & & \\vdots & \\\\\n\\vdots & & & \\vdots \\\\\n & & \\vdots & \\\\\n x_n & y_{n} & & \\\\\n\\end{matrix} "
},
{
"math_id": 15,
"text": "\\begin{align}\n(f+g)[x_0,\\dots,x_n] &= f[x_0,\\dots,x_n] + g[x_0,\\dots,x_n] \\\\\n(\\lambda\\cdot f)[x_0,\\dots,x_n] &= \\lambda\\cdot f[x_0,\\dots,x_n]\n\\end{align}"
},
{
"math_id": 16,
"text": "(f\\cdot g)[x_0,\\dots,x_n] = f[x_0]\\cdot g[x_0,\\dots,x_n] + f[x_0,x_1]\\cdot g[x_1,\\dots,x_n] + \\dots + f[x_0,\\dots,x_n]\\cdot g[x_n] = \\sum_{r=0}^n f[x_0,\\ldots,x_r]\\cdot g[x_r,\\ldots,x_n]"
},
{
"math_id": 17,
"text": "\\sigma : \\{0, \\dots, n\\} \\to \\{0, \\dots, n\\}"
},
{
"math_id": 18,
"text": "f[x_0, \\dots, x_n] = f[x_{\\sigma(0)}, \\dots, x_{\\sigma(n)}]"
},
{
"math_id": 19,
"text": "P"
},
{
"math_id": 20,
"text": "\\leq n"
},
{
"math_id": 21,
"text": "p[x_0, \\dots , x_n]"
},
{
"math_id": 22,
"text": "P_{n-1}(x) = p[x_0] + p[x_0,x_1](x-x_0) + p[x_0,x_1,x_2](x-x_0)(x-x_1) + \\cdots + p[x_0,\\ldots,x_n] (x-x_0)(x-x_1)\\cdots(x-x_{n-1})"
},
{
"math_id": 23,
"text": "p"
},
{
"math_id": 24,
"text": "<n"
},
{
"math_id": 25,
"text": "p[x_0, \\dots, x_n] = 0."
},
{
"math_id": 26,
"text": "f"
},
{
"math_id": 27,
"text": "f[x_0,\\dots,x_n] = \\frac{f^{(n)}(\\xi)}{n!}"
},
{
"math_id": 28,
"text": "\\xi"
},
{
"math_id": 29,
"text": "T_f(x_0,\\dots,x_n)=\n\\begin{pmatrix}\nf[x_0] & f[x_0,x_1] & f[x_0,x_1,x_2] & \\ldots & f[x_0,\\dots,x_n] \\\\\n0 & f[x_1] & f[x_1,x_2] & \\ldots & f[x_1,\\dots,x_n] \\\\\n0 & 0 & f[x_2] & \\ldots & f[x_2,\\dots,x_n] \\\\\n\\vdots & \\vdots & & \\ddots & \\vdots \\\\\n0 & 0 & 0 & \\ldots & f[x_n]\n\\end{pmatrix}."
},
{
"math_id": 30,
"text": "T_{f+g}(x) = T_f(x) + T_g(x)"
},
{
"math_id": 31,
"text": "T_{\\lambda f}(x) = \\lambda T_f(x)"
},
{
"math_id": 32,
"text": "\\lambda"
},
{
"math_id": 33,
"text": "T_{f\\cdot g}(x) = T_f(x) \\cdot T_g(x)"
},
{
"math_id": 34,
"text": " T_f(x) "
},
{
"math_id": 35,
"text": " f(x_0), \\dots, f(x_n) "
},
{
"math_id": 36,
"text": "\\delta_\\xi"
},
{
"math_id": 37,
"text": "\\delta_\\xi(t) = \\begin{cases}\n1 &: t=\\xi , \\\\\n0 &: \\mbox{else}.\n\\end{cases}"
},
{
"math_id": 38,
"text": "f\\cdot \\delta_\\xi = f(\\xi)\\cdot \\delta_\\xi"
},
{
"math_id": 39,
"text": "T_{\\delta_{x_i}}(x)"
},
{
"math_id": 40,
"text": "T_f(x)"
},
{
"math_id": 41,
"text": " T_f(x) \\cdot T_{\\delta_{x_i}} (x) = f(x_i) \\cdot T_{\\delta_{x_i}}(x) "
},
{
"math_id": 42,
"text": "i"
},
{
"math_id": 43,
"text": "U(x)"
},
{
"math_id": 44,
"text": " U(x_0,x_1,x_2,x_3) = \\begin{pmatrix}\n1 & \\frac{1}{(x_1-x_0)} & \\frac{1}{(x_2-x_0) (x_2-x_1)} & \\frac{1}{(x_3-x_0) (x_3-x_1) (x_3-x_2)} \\\\\n0 & 1 & \\frac{1}{(x_2-x_1)} & \\frac{1}{(x_3-x_1) (x_3-x_2)} \\\\\n0 & 0 & 1 & \\frac{1}{(x_3-x_2)} \\\\\n0 & 0 & 0 & 1\n\\end{pmatrix} "
},
{
"math_id": 45,
"text": " U(x) \\cdot \\operatorname{diag}(f(x_0),\\dots,f(x_n)) = T_f(x) \\cdot U(x) ."
},
{
"math_id": 46,
"text": "\nJ =\n\\begin{pmatrix}\nx_0 & 1 & 0 & 0 & \\cdots & 0 \\\\\n0 & x_1 & 1 & 0 & \\cdots & 0 \\\\\n0 & 0 & x_2 & 1 & & 0 \\\\\n\\vdots & \\vdots & & \\ddots & \\ddots & \\\\\n0 & 0 & 0 & 0 & \\; \\ddots & 1\\\\\n0 & 0 & 0 & 0 & & x_n\n\\end{pmatrix}\n"
},
{
"math_id": 47,
"text": "x_0,\\dots,x_n"
},
{
"math_id": 48,
"text": "J^m"
},
{
"math_id": 49,
"text": "m"
},
{
"math_id": 50,
"text": "J"
},
{
"math_id": 51,
"text": "p(\\xi) = a_0 + a_1 \\cdot \\xi + \\dots + a_m \\cdot \\xi^m"
},
{
"math_id": 52,
"text": "p(J) = a_0 + a_1\\cdot J + \\dots + a_m\\cdot J^m"
},
{
"math_id": 53,
"text": "T_p(x) = p(J)."
},
{
"math_id": 54,
"text": "f(\\xi) = \\sum_{k=0}^\\infty a_k \\xi^k"
},
{
"math_id": 55,
"text": "f(J)=\\sum_{k=0}^\\infty a_k J^k"
},
{
"math_id": 56,
"text": "T_f(x)=f(J)."
},
{
"math_id": 57,
"text": "\n\\begin{align}\nf[x_0] &= f(x_0) \\\\\nf[x_0,x_1] &= \\frac{f(x_0)}{(x_0-x_1)} + \\frac{f(x_1)}{(x_1-x_0)} \\\\\nf[x_0,x_1,x_2] &= \\frac{f(x_0)}{(x_0-x_1)\\cdot(x_0-x_2)} + \\frac{f(x_1)}{(x_1-x_0)\\cdot(x_1-x_2)} + \\frac{f(x_2)}{(x_2-x_0)\\cdot(x_2-x_1)} \\\\\nf[x_0,x_1,x_2,x_3] &= \\frac{f(x_0)}{(x_0-x_1)\\cdot(x_0-x_2)\\cdot(x_0-x_3)} + \\frac{f(x_1)}{(x_1-x_0)\\cdot(x_1-x_2)\\cdot(x_1-x_3)} +\\\\\n&\\quad\\quad \\frac{f(x_2)}{(x_2-x_0)\\cdot(x_2-x_1)\\cdot(x_2-x_3)} + \\frac{f(x_3)}{(x_3-x_0)\\cdot(x_3-x_1)\\cdot(x_3-x_2)} \\\\\nf[x_0,\\dots,x_n] &=\n\\sum_{j=0}^{n} \\frac{f(x_j)}{\\prod_{k\\in\\{0,\\dots,n\\}\\setminus\\{j\\}} (x_j-x_k)}\n\\end{align}\n"
},
{
"math_id": 58,
"text": "\\omega(\\xi) = (\\xi-x_0) \\cdots (\\xi-x_n)"
},
{
"math_id": 59,
"text": "\nf[x_0,\\dots,x_n] = \\sum_{j=0}^{n} \\frac{f(x_j)}{\\omega'(x_j)}.\n"
},
{
"math_id": 60,
"text": "x_0<x_1<\\cdots<x_n"
},
{
"math_id": 61,
"text": "n\\geq 1"
},
{
"math_id": 62,
"text": "f[x_0,\\ldots,x_n] = \\frac{1}{(n-1)!} \\int_{x_0}^{x_n} f^{(n)}(t)\\;B_{n-1}(t) \\, dt"
},
{
"math_id": 63,
"text": "f^{(n)}"
},
{
"math_id": 64,
"text": "n"
},
{
"math_id": 65,
"text": "B_{n-1}"
},
{
"math_id": 66,
"text": "n-1"
},
{
"math_id": 67,
"text": "B_{n-1}(t) = \\sum_{k=0}^n \\frac{(\\max(0,x_k-t))^{n-1}}{\\omega'(x_k)}"
},
{
"math_id": 68,
"text": "(x_0, y_0), \\ldots, (x_n, y_n)"
},
{
"math_id": 69,
"text": "x_{k} = x_0 + k h,\\ \\text{ for } \\ k=0,\\ldots,n \\text{ and fixed } h>0"
},
{
"math_id": 70,
"text": "\\begin{align}\n\\Delta^{(0)} y_k &:= y_k,\\qquad k=0,\\ldots,n \\\\\n\\Delta^{(j)}y_k &:= \\Delta^{(j-1)}y_{k+1} - \\Delta^{(j-1)}y_k,\\qquad k=0,\\ldots,n-j,\\ j=1,\\dots,n.\n\\end{align}"
},
{
"math_id": 71,
"text": "\\begin{align}\n\\nabla^{(0)} y_k &:= y_k,\\qquad k=0,\\ldots,n \\\\\n\\nabla^{(j)}y_k &:= \\nabla^{(j-1)}y_{k} - \\nabla^{(j-1)}y_{k-1},\\qquad k=0,\\ldots,n-j,\\ j=1,\\dots,n.\n\\end{align}"
},
{
"math_id": 72,
"text": "\n\\begin{matrix}\ny_0 & & & \\\\\n & \\Delta y_0 & & \\\\\ny_1 & & \\Delta^2 y_0 & \\\\\n & \\Delta y_1 & & \\Delta^3 y_0\\\\\ny_2 & & \\Delta^2 y_1 & \\\\\n & \\Delta y_2 & & \\\\\ny_3 & & & \\\\\n\\end{matrix}\n"
},
{
"math_id": 73,
"text": "\n\\begin{matrix}\ny_0 & & & \\\\\n & \\nabla y_1 & & \\\\\ny_1 & & \\nabla^2 y_2 & \\\\\n & \\nabla y_2 & & \\nabla^3 y_3\\\\\ny_2 & & \\nabla^2 y_3 & \\\\\n & \\nabla y_3 & & \\\\\ny_3 & & & \\\\\n\\end{matrix} \n"
},
{
"math_id": 74,
"text": "[y_j, y_{j+1}, \\ldots , y_{j+k}] = \\frac{1}{k!h^k}\\Delta^{(k)}y_j, "
},
{
"math_id": 75,
"text": "[{y}_{j}, y_{j-1},\\ldots,{y}_{j-k}] = \\frac{1}{k!h^k}\\nabla^{(k)}y_j. "
}
] | https://en.wikipedia.org/wiki?curid=976365 |
9764915 | Bicarbonate buffer system | Buffer system that maintains pH balance in humans
The bicarbonate buffer system is an acid-base homeostatic mechanism involving the balance of carbonic acid (H2CO3), bicarbonate ion (HCO3−), and carbon dioxide (CO2) in order to maintain pH in the blood and duodenum, among other tissues, to support proper metabolic function. Catalyzed by carbonic anhydrase, carbon dioxide (CO2) reacts with water (H2O) to form carbonic acid (H2CO3), which in turn rapidly dissociates to form a bicarbonate ion (HCO3−) and a hydrogen ion (H+) as shown in the following reaction:
formula_0
As with any buffer system, the pH is balanced by the presence of both a weak acid (for example, H2CO3) and its conjugate base (for example, HCO) so that any excess acid or base introduced to the system is neutralized.
Failure of this system to function properly results in acid-base imbalance, such as acidemia (pH < 7.35) and alkalemia (pH > 7.45) in the blood.
In systemic acid–base balance.
In tissue, cellular respiration produces carbon dioxide as a waste product; as one of the primary roles of the cardiovascular system, most of this CO2 is rapidly removed from the tissues by its hydration to bicarbonate ion. The bicarbonate ion present in the blood plasma is transported to the lungs, where it is dehydrated back into CO2 and released during exhalation. These hydration and dehydration conversions of CO2 and H2CO3, which are normally very slow, are facilitated by carbonic anhydrase in both the blood and duodenum. While in the blood, bicarbonate ion serves to neutralize acid introduced to the blood through other metabolic processes (e.g. lactic acid, ketone bodies); likewise, any bases are neutralized by carbonic acid (H2CO3).
Regulation.
As calculated by the Henderson–Hasselbalch equation, in order to maintain a normal pH of 7.4 in the blood (whereby the pKa of carbonic acid is 6.1 at physiological temperature), a 20:1 ratio of bicarbonate to carbonic acid must constantly be maintained; this homeostasis is mainly mediated by pH sensors in the medulla oblongata of the brain and probably in the kidneys, linked via negative feedback loops to effectors in the respiratory and renal systems. In the blood of most animals, the bicarbonate buffer system is coupled to the lungs via respiratory compensation, the process by which the rate and/or depth of breathing changes to compensate for changes in the blood concentration of CO2. By Le Chatelier's principle, the release of CO2 from the lungs pushes the reaction above to the left, causing carbonic anhydrase to form CO2 until all excess protons are removed. Bicarbonate concentration is also further regulated by renal compensation, the process by which the kidneys regulate the concentration of bicarbonate ions by secreting H+ ions into the urine while, at the same time, reabsorbing HCO3− ions into the blood plasma, or "vice versa", depending on whether the plasma pH is falling or rising, respectively.
Henderson–Hasselbalch equation.
A modified version of the Henderson–Hasselbalch equation can be used to relate the pH of blood to constituents of the bicarbonate buffer system:
formula_1
where pKa H2CO3 is the negative logarithm (base 10) of the acid dissociation constant of carbonic acid, equal to 6.1, and [HCO3−] and [H2CO3] are the concentrations of bicarbonate and carbonic acid in the blood, respectively.
When describing arterial blood gas, the Henderson–Hasselbalch equation is usually quoted in terms of pCO2, the partial pressure of carbon dioxide, rather than H2CO3 concentration. However, these quantities are related by the equation:
formula_2
where [H2CO3] is the concentration of carbonic acid in the blood, kH CO2 is the Henry's law constant for the solubility of carbon dioxide in blood (approximately 0.0307 (mmol/L)/mmHg at body temperature), and pCO2 is the partial pressure of carbon dioxide in the blood.
Combining these equations results in the following equation relating the pH of blood to the concentration of bicarbonate and the partial pressure of carbon dioxide:
formula_3
where pH is the acidity in the blood, [HCO3−] is the concentration of bicarbonate in the blood in mmol/L, and pCO2 is the partial pressure of carbon dioxide in the arterial blood in mmHg.
Derivation of the Kassirer–Bleich approximation.
The Henderson–Hasselbalch equation, which is derived from the law of mass action, can be modified with respect to the bicarbonate buffer system to yield a simpler equation that provides a quick approximation of the H+ or HCO3− concentration without the need to calculate logarithms:
formula_4
Since the partial pressure of carbon dioxide is much easier to obtain from measurement than carbonic acid, the Henry's law solubility constant – which relates the partial pressure of a gas to its solubility – for CO2 in plasma is used in lieu of the carbonic acid concentration. After solving for H+ and applying Henry's law, the equation becomes:
formula_5
where "K’" is the dissociation constant of carbonic acid, which is equal to 800 nmol/L (since "K’" = 10−p"Ka"H2CO3 = 10−(6.1) ≈ 8.00×10−7 mol/L = 800 nmol/L).
After multiplying the constants (800 × 0.03 = 24) and solving for HCO3−, the equation is simplified to:
formula_6
where [HCO3−] is in mmol/L, pCO2 is in mmHg, and [H+] is in nmol/L.
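As an illustration, evaluating the approximation at normal arterial values (pCO2 = 40 mmHg and [H+] = 40 nmol/L, i.e. pH 7.40) reproduces the normal bicarbonate concentration of about 24 mmol/L:
def bicarbonate_mmol_per_l(pco2_mmhg, h_nmol_per_l):
    # Kassirer-Bleich approximation: [HCO3-] = 24 * pCO2 / [H+]
    return 24 * pco2_mmhg / h_nmol_per_l
print(bicarbonate_mmol_per_l(40, 40))   # 24.0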
In other tissues.
The bicarbonate buffer system plays a vital role in other tissues as well. In the human stomach and duodenum, the bicarbonate buffer system serves to both neutralize gastric acid and stabilize the intracellular pH of epithelial cells via the secretion of bicarbonate ion into the gastric mucosa. In patients with duodenal ulcers, "Helicobacter pylori" eradication can restore mucosal bicarbonate secretion and reduce the risk of ulcer recurrence.
Tear buffering.
The tears are unique among body fluids in that they are exposed to the environment. Much like other body fluids, tear fluid is kept in a tight pH range using the bicarbonate buffer system. The pH of tears shift throughout a waking day, rising "about 0.013 pH units/hour" until a prolonged closed-eye period causes the pH to fall again. Most healthy individuals have tear pH in the range of 7.0 to 7.7, where bicarbonate buffering is the most significant, but proteins and other buffering components are also present that are active outside of this pH range.
| [
{
"math_id": 0,
"text": "\\rm CO_2 + H_2O \\rightleftarrows H_2CO_3 \\rightleftarrows HCO_3^- + H^+"
},
{
"math_id": 1,
"text": " \\ce{pH} = \\textrm{p}K_{a~\\ce{H_2CO_3}}+ \\log \\left ( \\frac{[\\ce{HCO_3^-}]}{[\\ce{H_2CO_3}]} \\right ),"
},
{
"math_id": 2,
"text": " [\\ce{H_2CO_3}] = k_{\\ce{H~CO_2}} \\times p_\\ce{CO_2}, "
},
{
"math_id": 3,
"text": " \\ce{pH} = 6.1 + \\log \\left ( \\frac{[\\ce{HCO_3^-}]}{0.0307 \\times p_\\ce{CO_2}} \\right ),"
},
{
"math_id": 4,
"text": "K_{a,\\ce{H_2CO_3}} = \\frac{[\\ce{HCO_3^-}] [\\ce{H^+}]}{[\\ce{H_2CO_3}]}"
},
{
"math_id": 5,
"text": "[\\ce{H^+}] = \\frac{K'\\cdot0.03p_{\\ce{CO_2}}}{[\\ce{HCO_3^-}]},"
},
{
"math_id": 6,
"text": "[\\ce{HCO_3^-}] = 24\\frac{p_{\\ce{CO_2}}}{[\\ce{H^+}]}"
}
] | https://en.wikipedia.org/wiki?curid=9764915 |
9765126 | Quinhydrone electrode | The quinhydrone electrode may be used to measure the hydrogen ion concentration (pH) of a solution containing an acidic substance.
Principles and operation.
Quinones form a quinhydrone species by formation of hydrogen bonding between "p"-quinone and "p"-hydroquinone. An equimolar mixture of "p"-quinone and "p"-hydroquinone in contact with an inert metallic electrode, such as antimony, forms what is known as a quinhydrone electrode. Such devices can be used to measure the pH of solutions. Quinhydrone electrodes provide fast response times and high accuracy. However, they can only measure pH in the range of 1 to 9, and the solution must not contain a strong oxidizing or reducing agent.
A platinum wire electrode is immersed in a saturated aqueous solution of quinhydrone, in which there is the following equilibrium
C6H6O2 ⇌ C6H4O2 + 2H+ + 2e−.
The potential difference between the platinum electrode and a reference electrode is dependent on the activity, formula_0, of hydrogen ions in the solution.
formula_1 (Nernst equation)
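At 298 K the measured potential therefore falls by about 59 mV per pH unit; a small illustrative Python sketch (the standard potential E0 of the quinhydrone couple, about 0.699 V, is an assumed literature value):
from math import log
R, F, T = 8.314, 96485.0, 298.15      # gas constant, Faraday constant, 25 degrees C
E0 = 0.699                            # assumed standard potential in volts
def electrode_potential(pH):
    # E = E0 + (RT/F) ln a_H+ = E0 - (RT ln 10 / F) * pH
    return E0 - (R * T * log(10) / F) * pH
print(round(electrode_potential(7.0), 3))   # about 0.285 V vs. SHE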
Limitations.
The quinhydrone electrode provides an alternative to the most commonly used glass electrode. However, it is not reliable above pH 8 (at 298 K) and cannot be used with solutions that contain a strong oxidizing or reducing agent.
| [
{
"math_id": 0,
"text": "a_{H^{+}}"
},
{
"math_id": 1,
"text": "E= E^0 + \\frac{RT}{2F} \\ln a_{H^{+}}"
}
] | https://en.wikipedia.org/wiki?curid=9765126 |
9765502 | Cloth modeling | Simulating cloth within a computer program
Cloth modeling is the term used for simulating cloth within a computer program, usually in the context of 3D computer graphics. The main approaches used for this may be classified into three basic types: geometric, physical, and particle/energy.
Background.
Most models of cloth are based on "particles" of mass connected in some manner of mesh. Newtonian physics is used to model each particle through the use of a "black box" called a physics engine. This involves using the basic law of motion (Newton's Second Law):
formula_0
In all of these models, the goal is to find the position and shape of a piece of fabric using this basic equation and several other methods.
Geometric methods.
Jerry Weil pioneered the first of these, the geometric technique, in 1986. His work focused on approximating the look of cloth by treating cloth like a collection of cables and using hyperbolic cosine (catenary) curves. Because of this, it is not suitable for dynamic models but works very well for stationary or single-frame renders. This technique creates an underlying shape out of single points; then, it parses through each set of three of these points and maps a catenary curve to the set. It then takes the lowest out of each overlapping set and uses it for the render.
Physical methods.
The second technique treats cloth like a grid work of particles connected to each other by springs. Whereas the geometric approach accounted for none of the inherent stretch of a woven material, this physical model accounts for stretch (tension), stiffness, and weight:
formula_1
We then apply the basic principle of mechanical equilibrium, in which all bodies seek their lowest-energy state, by differentiating this equation to find the minimum energy.
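As an illustration of the spring model, the following Python sketch (all constants are invented for the example) steps one damped spring with a pinned upper particle using semi-implicit Euler integration:
import numpy as np
k_s, c, rest, mass, dt = 50.0, 0.5, 1.0, 0.1, 0.01
gravity = np.array([0.0, -9.81])
pos = np.array([[0.0, 0.0], [0.0, -1.5]])   # particle 0 stays pinned
vel = np.zeros_like(pos)
for _ in range(1000):
    d = pos[1] - pos[0]
    length = np.linalg.norm(d)
    f = -k_s * (length - rest) * d / length - c * vel[1]   # spring + damping
    vel[1] += (f / mass + gravity) * dt                    # a = F/m
    pos[1] += vel[1] * dt
print(pos[1])   # roughly [0, -1.02]: rest length plus gravitational stretch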
Particle/energy methods.
The last method is more complex than the first two. The particle technique takes the physical methods a step further and supposes that we have a network of particles interacting directly. Rather than springs, the energy interactions of the particles are used to determine the cloth's shape. An energy equation that adds together the following terms is used:
formula_2
Terms for the energy added by any source can be added to this equation; we then differentiate and find minima, which generalizes the model. This allows for modeling cloth behavior under any circumstance, and since the cloth is treated as a collection of particles, its behavior can be described with the dynamics provided in our physics engine. | [
{
"math_id": 0,
"text": "\\vec{F} = m \\vec{a}"
},
{
"math_id": 1,
"text": "E(Particle_{i,j}) = k_{s}E_{s,i,j} + k_{b}E_{b,i,j} + k_{g}E_{g,i,j}"
},
{
"math_id": 2,
"text": "U_{Total} = U_{Repel} + U_{Stretch} + U_{Bend} + U_{Trellis} + U_{Gravity}"
}
] | https://en.wikipedia.org/wiki?curid=9765502 |
976640 | Orbital state vectors | Cartesian vectors of position and velocity of an orbiting body in space
In astrodynamics and celestial dynamics, the orbital state vectors (sometimes state vectors) of an orbit are
Cartesian vectors of position (formula_0) and velocity (formula_1) that together with their time (epoch) (formula_2) uniquely determine the trajectory of the orbiting body in space.
Orbital state vectors come in many forms including the traditional Position-Velocity vectors, Two-line element set (TLE), and Vector Covariance Matrix (VCM).
Frame of reference.
State vectors are defined with respect to some frame of reference, usually but not always an inertial reference frame. One of the more popular reference frames for the state vectors of bodies moving near Earth is the Earth-centered inertial (ECI) system defined as follows: the origin is Earth's center of mass; the fundamental plane is Earth's equatorial plane; the "x"-axis points in the direction of the vernal equinox; the "z"-axis points along Earth's rotation axis toward the north celestial pole; and the "y"-axis completes the right-handed set.
The ECI reference frame is not truly inertial because of the slow, 26,000 year precession of Earth's axis, so the reference frames defined by Earth's orientation at a standard astronomical epoch such as B1950 or J2000 are also commonly used.
Many other reference frames can be used to meet various application requirements, including those centered on the Sun or on other planets or moons, the one defined by the barycenter and total angular momentum of the solar system (in particular the ICRF), or even a spacecraft's own orbital plane and angular momentum.
Position and velocity vectors.
The "position vector" formula_0 describes the position of the body in the chosen frame of reference, while the "velocity vector" formula_1 describes its velocity in the same frame at the same time. Together, these two vectors and the time at which they are valid uniquely describe the body's trajectory as detailed in Orbit determination. The principal reasoning is that Newton's law of gravitation yields an acceleration formula_3; if the product formula_4 of gravitational constant and attractive mass at the center of the orbit are known, position and velocity are the initial values for that second order differential equation for formula_5 which has a unique solution.
The body does not actually have to be in orbit for its state vectors to determine its trajectory; it only has to move ballistically, i.e., solely under the effects of its own inertia and gravity. For example, it could be a spacecraft or missile in a suborbital trajectory. If other forces such as drag or thrust are significant, they must be added vectorially to those of gravity when performing the integration to determine future position and velocity.
For any object moving through space, the velocity vector is tangent to the trajectory. If formula_6 is the unit vector tangent to the trajectory, then
formula_7
Derivation.
The velocity vector formula_8 can be derived from position vector formula_0 by differentiation with respect to time:
formula_9
An object's state vector can be used to compute its classical or Keplerian orbital elements and vice versa. Each representation has its advantages. The elements are more descriptive of the size, shape and orientation of an orbit, and may be used to quickly and easily estimate the object's state at any arbitrary time provided its motion is accurately modeled by the two-body problem with only small perturbations.
On the other hand, the state vector is more directly useful in a numerical integration that accounts for significant, arbitrary, time-varying forces such as drag, thrust and gravitational perturbations from third bodies as well as the gravity of the primary body.
The state vectors (formula_0 and formula_1) can be easily used to compute the specific angular momentum vector as
formula_10.
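For example, with NumPy (the state vector values here are illustrative only):
import numpy as np
r = np.array([7000.0, 0.0, 0.0])    # position in km
v = np.array([0.0, 7.5, 1.0])       # velocity in km/s
h = np.cross(r, v)                  # specific angular momentum vector
print(h)                            # [0. -7000. 52500.]
print(np.linalg.norm(h))            # its magnitude in km^2/s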
Because even satellites in low Earth orbit experience significant perturbations from non-spherical Earth's figure, solar radiation pressure, lunar tide, and atmospheric drag, the Keplerian elements computed from the state vector at any moment are only valid for a short period of time and need to be recomputed often to determine a valid object state. Such element sets are known as "osculating elements" because they coincide with the actual orbit only at that moment. | [
{
"math_id": 0,
"text": "\\mathbf{r}"
},
{
"math_id": 1,
"text": "\\mathbf{v}"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "\\ddot \\mathbf{r}=-GM/r^2"
},
{
"math_id": 4,
"text": "GM"
},
{
"math_id": 5,
"text": "\\mathbf{r}(t)"
},
{
"math_id": 6,
"text": " \\hat{\\mathbf{u}}_t"
},
{
"math_id": 7,
"text": "\\mathbf{v} = v\\hat{\\mathbf{u}}_t"
},
{
"math_id": 8,
"text": "\\mathbf{v}\\,"
},
{
"math_id": 9,
"text": "\\mathbf{v} = \\frac{d\\mathbf{r}}{dt}"
},
{
"math_id": 10,
"text": "\\mathbf{h} = \\mathbf{r}\\times\\mathbf{v}"
}
] | https://en.wikipedia.org/wiki?curid=976640 |
976642 | Multiplication operator | In operator theory, a multiplication operator is an operator "T""f" defined on some vector space of functions and whose value at a function φ is given by multiplication by a fixed function f. That is,
formula_0
for all φ in the domain of "T""f", and all x in the domain of φ (which is the same as the domain of f).
Multiplication operators generalize the notion of operator given by a diagonal matrix. More precisely, one of the results of operator theory is a spectral theorem that states that every self-adjoint operator on a Hilbert space is unitarily equivalent to a multiplication operator on an "L""2" space.
These operators are often contrasted with composition operators, which are similarly induced by any fixed function f. They are also closely related to Toeplitz operators, which are compressions of multiplication operators on the circle to the Hardy space.
Properties.
A multiplication operator formula_1 on formula_2, where "X" is a formula_3-finite measure space, is bounded if and only if "f" belongs to formula_4, in which case its operator norm equals formula_5. Its adjoint is the multiplication operator formula_6, where formula_7 is the complex conjugate of "f"; consequently, formula_1 is self-adjoint if and only if "f" is real-valued almost everywhere. The spectrum of formula_1 is the essential range of "f"; for any λ outside this set, formula_8 is invertible, and its inverse is the multiplication operator formula_9 Any two multiplication operators formula_1 and formula_10 on the same formula_11 space commute.
Example.
Consider the Hilbert space "X" = "L"2[−1, 3] of complex-valued square integrable functions on the interval [−1, 3]. With "f"("x") = "x"2, define the operator
formula_12
for any function φ in X. This will be a self-adjoint bounded linear operator, with domain all of "X" = "L"2[−1, 3] and with norm 9. Its spectrum will be the interval [0, 9] (the range of the function "x"↦ "x"2 defined on [−1, 3]). Indeed, for any complex number λ, the operator "T""f" − "λ" is given by
formula_13
It is invertible if and only if λ is not in [0, 9], and then its inverse is
formula_14
which is another multiplication operator.
This example can be easily generalized to characterizing the norm and spectrum of a multiplication operator on any "L""p" space.
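Numerically, discretizing the interval turns the operator into a diagonal matrix, and the norm and spectrum claimed above can be checked directly (an illustrative NumPy sketch):
import numpy as np
x = np.linspace(-1.0, 3.0, 1001)          # grid on [-1, 3]
T = np.diag(x ** 2)                       # multiplication by f(x) = x^2
print(np.linalg.norm(T, 2))               # 9.0: the operator norm is sup |f|
print((x ** 2).min(), (x ** 2).max())     # ~0.0 and 9.0: the diagonal fills [0, 9]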
| [
{
"math_id": 0,
"text": "T_f\\varphi(x) = f(x) \\varphi (x) \\quad "
},
{
"math_id": 1,
"text": "T_f"
},
{
"math_id": 2,
"text": "L^2(X)"
},
{
"math_id": 3,
"text": "\\sigma"
},
{
"math_id": 4,
"text": "L^\\infty(X)"
},
{
"math_id": 5,
"text": "\\|f\\|_\\infty"
},
{
"math_id": 6,
"text": "T_\\overline{f}"
},
{
"math_id": 7,
"text": "\\overline{f}"
},
{
"math_id": 8,
"text": "(T_f - \\lambda)"
},
{
"math_id": 9,
"text": "T_{\\frac{1}{f - \\lambda}}."
},
{
"math_id": 10,
"text": "T_g"
},
{
"math_id": 11,
"text": "L^2"
},
{
"math_id": 12,
"text": "T_f\\varphi(x) = x^2 \\varphi (x) "
},
{
"math_id": 13,
"text": "(T_f - \\lambda)(\\varphi)(x) = (x^2-\\lambda) \\varphi(x). "
},
{
"math_id": 14,
"text": "(T_f - \\lambda)^{-1}(\\varphi)(x) = \\frac{1}{x^2-\\lambda} \\varphi(x),"
}
] | https://en.wikipedia.org/wiki?curid=976642 |
976666 | Monomial basis | Basis of polynomials consisting of monomials
In mathematics, the monomial basis of a polynomial ring is its basis (as a vector space or free module over the field or ring of coefficients) that consists of all monomials. The monomials form a basis because every polynomial may be uniquely written as a finite linear combination of monomials (this is an immediate consequence of the definition of a polynomial).
One indeterminate.
The polynomial ring "K"["x"] of univariate polynomials over a field "K" is a "K"-vector space, which has
formula_0
as an (infinite) basis. More generally, if "K" is a ring then "K"["x"] is a free module which has the same basis.
The polynomials of degree at most "d" also form a vector space (or a free module in the case of a ring of coefficients), which has formula_1 as a basis.
The canonical form of a polynomial is its expression on this basis:
formula_2
or, using the shorter sigma notation:
formula_3
The monomial basis is naturally totally ordered, either by increasing degrees
formula_4
or by decreasing degrees
formula_5
Several indeterminates.
In the case of several indeterminates formula_6 a monomial is a product
formula_7
where the formula_8 are non-negative integers. As formula_9 an exponent equal to zero means that the corresponding indeterminate does not appear in the monomial; in particular formula_10 is a monomial.
Similar to the case of univariate polynomials, the polynomials in formula_11 form a vector space (if the coefficients belong to a field) or a free module (if the coefficients belong to a ring), which has the set of all monomials as a basis, called the monomial basis.
The homogeneous polynomials of degree formula_12 form a subspace which has the monomials of degree formula_13 as a basis. The dimension of this subspace is the number of monomials of degree formula_12, which is
formula_14
where formula_15 is a binomial coefficient.
The polynomials of degree at most formula_12 also form a subspace, which has the monomials of degree at most formula_12 as a basis. The number of these monomials is the dimension of this subspace, equal to
formula_16
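The monomials of degree at most formula_12 are easy to enumerate programmatically; the following illustrative Python sketch generates the exponent vectors and checks the count against the binomial coefficient:
from itertools import combinations_with_replacement
from math import comb
def monomials_up_to_degree(n, d):
    # Exponent vectors (d_1, ..., d_n) with d_1 + ... + d_n <= d.
    basis = []
    for total in range(d + 1):
        for combo in combinations_with_replacement(range(n), total):
            basis.append(tuple(combo.count(i) for i in range(n)))
    return basis
n, d = 3, 2
print(len(monomials_up_to_degree(n, d)), comb(d + n, n))   # 10 10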
In contrast to the univariate case, there is no natural total order of the monomial basis in the multivariate case. For problems which require choosing a total order, such as Gröbner basis computations, one generally chooses an "admissible" monomial order – that is, a total order on the set of monomials such that
formula_17
and formula_18 for every monomial formula_19 | [
{
"math_id": 0,
"text": "1, x, x^2, x^3, \\ldots"
},
{
"math_id": 1,
"text": "\\{ 1, x, x^2, \\ldots, x^{d-1}, x^d \\}"
},
{
"math_id": 2,
"text": "a_0 + a_1 x + a_2 x^2 + \\dots + a_d x^d,"
},
{
"math_id": 3,
"text": "\\sum_{i=0}^d a_ix^i."
},
{
"math_id": 4,
"text": "1 < x < x^2 < \\cdots, "
},
{
"math_id": 5,
"text": "1 > x > x^2 > \\cdots. "
},
{
"math_id": 6,
"text": "x_1, \\ldots, x_n,"
},
{
"math_id": 7,
"text": "x_1^{d_1}x_2^{d_2}\\cdots x_n^{d_n},"
},
{
"math_id": 8,
"text": "d_i"
},
{
"math_id": 9,
"text": "x_i^0 = 1,"
},
{
"math_id": 10,
"text": " 1 = x_1^0 x_2^0\\cdots x_n^0"
},
{
"math_id": 11,
"text": "x_1, \\ldots, x_n"
},
{
"math_id": 12,
"text": "d"
},
{
"math_id": 13,
"text": "d = d_1+\\cdots+d_n"
},
{
"math_id": 14,
"text": "\\binom{d+n-1}{d} = \\frac{n(n+1)\\cdots (n+d-1)}{d!},"
},
{
"math_id": 15,
"text": "\\binom{d+n-1}{d}"
},
{
"math_id": 16,
"text": "\\binom{d + n}{d}= \\binom{d + n}{n}=\\frac{(d+1)\\cdots(d+n)}{n!}."
},
{
"math_id": 17,
"text": "m<n \\iff mq < nq"
},
{
"math_id": 18,
"text": "1 \\leq m"
},
{
"math_id": 19,
"text": "m, n, q."
}
] | https://en.wikipedia.org/wiki?curid=976666 |
976673 | Hilbert–Schmidt operator | Topic in mathematics
In mathematics, a Hilbert–Schmidt operator, named after David Hilbert and Erhard Schmidt, is a bounded operator formula_0 that acts on a Hilbert space formula_1 and has finite Hilbert–Schmidt norm
formula_2
where formula_3 is an orthonormal basis. The index set formula_4 need not be countable. However, the sum on the right must contain at most countably many non-zero terms, to have meaning. This definition is independent of the choice of the orthonormal basis.
In finite-dimensional Euclidean space, the Hilbert–Schmidt norm formula_5 is identical to the Frobenius norm.
||·||HS is well defined.
The Hilbert–Schmidt norm does not depend on the choice of orthonormal basis. Indeed, if formula_6 and formula_7 are such bases, then
formula_8
If formula_9 then formula_10 As for any bounded operator, formula_11 Replacing formula_12 with formula_13 in the first formula, we obtain formula_14 The independence follows.
Examples.
An important class of examples is provided by Hilbert–Schmidt integral operators.
Every bounded operator with a finite-dimensional range (these are called operators of finite rank) is a Hilbert–Schmidt operator.
The identity operator on a Hilbert space is a Hilbert–Schmidt operator if and only if the Hilbert space is finite-dimensional.
Given any formula_15 and formula_16 in formula_17, define formula_18 by formula_19, which is a continuous linear operator of rank 1 and thus a Hilbert–Schmidt operator;
moreover, for any bounded linear operator "formula_20" on formula_17 (and into formula_17), formula_21.
If formula_22 is a bounded compact operator with eigenvalues formula_23 of formula_24, where each eigenvalue is repeated as often as its multiplicity, then formula_25 is Hilbert–Schmidt if and only if formula_26, in which case the Hilbert–Schmidt norm of formula_25 is formula_27.
If formula_28, where formula_29 is a measure space, then the integral operator formula_30 with kernel formula_31 is a Hilbert–Schmidt operator and formula_32.
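In finite dimensions the definition can be checked directly against the Frobenius norm and the trace formula; an illustrative NumPy sketch:
import numpy as np
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
hs = np.sqrt(sum(np.linalg.norm(A @ e) ** 2 for e in np.eye(4)))   # sum of ||A e_i||^2
print(np.isclose(hs, np.linalg.norm(A, 'fro')))                    # True
print(np.isclose(hs ** 2, np.trace(A.conj().T @ A).real))          # True: Tr(A* A)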
Space of Hilbert–Schmidt operators.
The product of two Hilbert–Schmidt operators has finite trace-class norm; therefore, if "A" and "B" are two Hilbert–Schmidt operators, the Hilbert–Schmidt inner product can be defined as
formula_33
The Hilbert–Schmidt operators form a two-sided *-ideal in the Banach algebra of bounded operators on "H".
They also form a Hilbert space, denoted by "B"HS("H") or "B"2("H"), which can be shown to be naturally isometrically isomorphic to the tensor product of Hilbert spaces
formula_34
where "H"∗ is the dual space of "H".
The norm induced by this inner product is the Hilbert–Schmidt norm under which the space of Hilbert–Schmidt operators is complete (thus making it into a Hilbert space).
The space of all bounded linear operators of finite rank (i.e. that have a finite-dimensional range) is a dense subset of the space of Hilbert–Schmidt operators (with the Hilbert–Schmidt norm).
The set of Hilbert–Schmidt operators is closed in the norm topology if, and only if, "H" is finite-dimensional.
Properties.
Every Hilbert–Schmidt operator is compact. A bounded operator "T" is Hilbert–Schmidt if and only if the same is true of the operator formula_35, in which case the Hilbert–Schmidt norms of the two operators coincide. If formula_36 and formula_37 are Hilbert–Schmidt operators, then the composition formula_38 is a nuclear operator. For every Hilbert–Schmidt operator "T", the operator norm is dominated by the Hilbert–Schmidt norm: formula_39. A bounded operator "T" is Hilbert–Schmidt if and only if the trace formula_40 of the nonnegative self-adjoint operator formula_41 is finite, in which case formula_42. If "S" is a Hilbert–Schmidt operator and "T" is a bounded operator on the same space, then formula_43, formula_44 and formula_45; this is what makes the Hilbert–Schmidt operators a two-sided ideal in formula_46, as stated above. Finally, with respect to any orthonormal basis, formula_47, so the Hilbert–Schmidt norm coincides with the Schatten norm formula_48 of order 2 and, in finite dimensions, with the Frobenius norm.
| [
{
"math_id": 0,
"text": " A \\colon H \\to H "
},
{
"math_id": 1,
"text": " H "
},
{
"math_id": 2,
"text": "\\|A\\|^2_{\\operatorname{HS}} \\ \\stackrel{\\text{def}}{=}\\ \\sum_{i \\in I} \\|Ae_i\\|^2_H,"
},
{
"math_id": 3,
"text": "\\{e_i: i \\in I\\}"
},
{
"math_id": 4,
"text": " I "
},
{
"math_id": 5,
"text": "\\|\\cdot\\|_\\text{HS}"
},
{
"math_id": 6,
"text": "\\{e_i\\}_{i\\in I}"
},
{
"math_id": 7,
"text": "\\{f_j\\}_{j\\in I}"
},
{
"math_id": 8,
"text": "\n\\sum_i \\|Ae_i\\|^2 = \\sum_{i,j} \\left| \\langle Ae_i, f_j\\rangle \\right|^2 = \\sum_{i,j} \\left| \\langle e_i, A^*f_j\\rangle \\right|^2 = \\sum_j\\|A^* f_j\\|^2.\n"
},
{
"math_id": 9,
"text": "e_i = f_i, "
},
{
"math_id": 10,
"text": " \\sum_i \\|Ae_i\\|^2 = \\sum_i\\|A^* e_i\\|^2. "
},
{
"math_id": 11,
"text": " A = A^{**}. "
},
{
"math_id": 12,
"text": " A "
},
{
"math_id": 13,
"text": " A^* "
},
{
"math_id": 14,
"text": " \\sum_i \\|A^* e_i\\|^2 = \\sum_j\\|A f_j\\|^2. "
},
{
"math_id": 15,
"text": "x"
},
{
"math_id": 16,
"text": "y"
},
{
"math_id": 17,
"text": "H"
},
{
"math_id": 18,
"text": "x \\otimes y : H \\to H"
},
{
"math_id": 19,
"text": "(x \\otimes y)(z) = \\langle z, y \\rangle x"
},
{
"math_id": 20,
"text": "A"
},
{
"math_id": 21,
"text": "\\operatorname{Tr}\\left( A\\left( x \\otimes y \\right) \\right) = \\left\\langle A x, y \\right\\rangle"
},
{
"math_id": 22,
"text": "T: H \\to H"
},
{
"math_id": 23,
"text": "\\ell_1, \\ell_2, \\dots"
},
{
"math_id": 24,
"text": "|T| = \\sqrt{T^*T}"
},
{
"math_id": 25,
"text": "T"
},
{
"math_id": 26,
"text": "\\sum_{i=1}^{\\infty} \\ell_i^2 < \\infty"
},
{
"math_id": 27,
"text": "\\left\\| T \\right\\|_{\\operatorname{HS}} = \\sqrt{\\sum_{i=1}^{\\infty} \\ell_i^2}"
},
{
"math_id": 28,
"text": "k \\in L^2\\left( \\mu \\times \\mu \\right)"
},
{
"math_id": 29,
"text": "\\left( X, \\Omega, \\mu \\right)"
},
{
"math_id": 30,
"text": "K : L^2\\left( \\mu \\right) \\to L^2\\left( \\mu \\right)"
},
{
"math_id": 31,
"text": "k"
},
{
"math_id": 32,
"text": "\\left\\| K \\right\\|_{\\operatorname{HS}} = \\left\\| k \\right\\|_2"
},
{
"math_id": 33,
"text": "\\langle A, B \\rangle_\\text{HS} = \\operatorname{Tr}(A^* B) = \\sum_i \\langle Ae_i, Be_i \\rangle."
},
{
"math_id": 34,
"text": "H^* \\otimes H,"
},
{
"math_id": 35,
"text": "\\left| T \\right| := \\sqrt{T^* T}"
},
{
"math_id": 36,
"text": "S : H_1 \\to H_2"
},
{
"math_id": 37,
"text": "T : H_2 \\to H_3"
},
{
"math_id": 38,
"text": "T \\circ S : H_1 \\to H_3"
},
{
"math_id": 39,
"text": "\\left\\| T \\right\\| \\leq \\left\\| T \\right\\|_{\\operatorname{HS}}"
},
{
"math_id": 40,
"text": "\\operatorname{Tr}"
},
{
"math_id": 41,
"text": "T^{*} T"
},
{
"math_id": 42,
"text": "\\|T\\|^2_\\text{HS} = \\operatorname{Tr}(T^* T)"
},
{
"math_id": 43,
"text": "\\left\\| S^* \\right\\|_{\\operatorname{HS}} = \\left\\| S \\right\\|_{\\operatorname{HS}}"
},
{
"math_id": 44,
"text": "\\left\\| T S \\right\\|_{\\operatorname{HS}} \\leq \\left\\| T \\right\\| \\left\\| S \\right\\|_{\\operatorname{HS}}"
},
{
"math_id": 45,
"text": "\\left\\| S T \\right\\|_{\\operatorname{HS}} \\leq \\left\\| S \\right\\|_{\\operatorname{HS}} \\left\\| T \\right\\|"
},
{
"math_id": 46,
"text": "B\\left( H \\right)"
},
{
"math_id": 47,
"text": "\\|A\\|^2_\\text{HS} = \\sum_{i,j} |\\langle e_i, Ae_j \\rangle|^2 = \\|A\\|^2_2"
},
{
"math_id": 48,
"text": "\\|A\\|_2"
}
] | https://en.wikipedia.org/wiki?curid=976673 |
9767106 | Low (computability) | In computability theory, a Turing degree ["X"] is low if the Turing jump ["X"′] is 0′. A set is low if it has low degree. Since every set is computable from its jump, any low set is computable in 0′, but the jump of sets computable in 0′ can bound any degree recursively enumerable in 0′ (Schoenfield Jump Inversion). "X" being low says that its jump "X"′ has the least possible degree in terms of Turing reducibility for the jump of a set.
There are various lowness properties related to low degrees.
More generally, properties of sets which describe their being computationally weak (when used as a Turing oracle) are referred to under the umbrella term "lowness properties".
By the Low basis theorem of Jockusch and Soare, any nonempty formula_0 class in formula_1 contains a set of low degree. This implies that, although low sets are computationally weak, they can still accomplish such feats as computing a completion of Peano Arithmetic. In practice, this allows a restriction on the computational power of objects needed for recursion-theoretic constructions: for example, those used in analyzing the proof-theoretic strength of Ramsey's theorem.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Pi^0_1"
},
{
"math_id": 1,
"text": "2^\\omega"
}
] | https://en.wikipedia.org/wiki?curid=9767106 |
9767227 | Low basis theorem | The low basis theorem is one of several basis theorems in computability theory, each of which showing that, given an infinite subtree of the binary tree formula_0, it is possible to find an infinite path through the tree with particular computability properties. The low basis theorem, in particular, shows that there must be a path which is low; that is, the Turing jump of the path is Turing equivalent to the halting problem formula_1.
Statement and proof.
The low basis theorem states that every nonempty formula_2 class in formula_3 (see arithmetical hierarchy) contains a set of low degree (Soare 1987:109). This is equivalent, by definition, to the statement that each infinite computable subtree of the binary tree formula_0 has an infinite path of low degree.
The proof uses the method of forcing with formula_2 classes (Cooper 2004:330). Hájek and Kučera (1989) showed that the low basis theorem is provable in the formal system of arithmetic known as formula_4.
The forcing argument can also be formulated explicitly as follows. For a set "X"⊆ω, let "f"("X") = Σ 2−"i", where the sum ranges over all "i" such that {"i"}("X")↓, i.e., such that Turing machine "i" halts on "X". Then, for every nonempty (lightface) formula_2 "S"⊆2ω, the (unique) "X"∈"S" minimizing "f"("X") has a low Turing degree. This is because "X" satisfies {"i"}("X")↓ ⇔ ∀"Y"∈"S" ({"i"}("Y")↓ ∨ ∃"j"<"i" ({"j"}("Y")↓ ∧ ¬{"j"}("X")↓)), so the function "i" ↦ {"i"}("X")↓ can be computed from formula_1 by induction on "i"; note that ∀"Y"∈"S" φ("Y") is formula_5 for any formula_5 set φ. In other words, whether a machine halts on "X" is "forced" by a finite condition, which allows for "X"′ = formula_1.
Application.
One application of the low basis theorem is to construct completions of effective theories so that the completions have low Turing degree. For example, the low basis theorem implies the existence of PA degrees strictly below formula_1. | [
{
"math_id": 0,
"text": "2^{<\\omega}"
},
{
"math_id": 1,
"text": "\\emptyset'"
},
{
"math_id": 2,
"text": "\\Pi^0_1"
},
{
"math_id": 3,
"text": "2^\\omega"
},
{
"math_id": 4,
"text": "\\text{I-}\\Sigma_1"
},
{
"math_id": 5,
"text": "\\Sigma^0_1"
}
] | https://en.wikipedia.org/wiki?curid=9767227 |
976793 | Whitehead theorem | When a mapping that induces isomorphisms on all homotopy groups is a homotopy equivalence
In homotopy theory (a branch of mathematics), the Whitehead theorem states that if a continuous mapping "f" between CW complexes "X" and "Y" induces isomorphisms on all homotopy groups, then "f" is a homotopy equivalence. This result was proved by J. H. C. Whitehead in two landmark papers from 1949, and provides a justification for working with the concept of a CW complex that he introduced there. It is a model result of algebraic topology, in which the behavior of certain algebraic invariants (in this case, homotopy groups) determines a topological property of a mapping.
Statement.
In more detail, let "X" and "Y" be topological spaces. Given a continuous mapping
formula_0
and a point "x" in "X", consider for any "n" ≥ 1 the induced homomorphism
formula_1
where π"n"("X","x") denotes the "n"-th homotopy group of "X" with base point "x". (For "n" = 0, π0("X") just means the set of path components of "X".) A map "f" is a weak homotopy equivalence if the function
formula_2
is bijective, and the homomorphisms "f"* are bijective for all "x" in "X" and all "n" ≥ 1. (For "X" and "Y" path-connected, the first condition is automatic, and it suffices to state the second condition for a single point "x" in "X".) The Whitehead theorem states that a weak homotopy equivalence from one CW complex to another is a homotopy equivalence. (That is, the map "f": "X" → "Y" has a homotopy inverse "g": "Y" → "X", which is not at all clear from the assumptions.) This implies the same conclusion for spaces "X" and "Y" that are homotopy equivalent to CW complexes.
Combining this with the Hurewicz theorem yields a useful corollary: a continuous map formula_0 between simply connected CW complexes that induces an isomorphism on all integral homology groups is a homotopy equivalence.
Spaces with isomorphic homotopy groups may not be homotopy equivalent.
A word of caution: it is not enough to assume π"n"("X") is isomorphic to π"n"("Y") for each "n" in order to conclude that "X" and "Y" are homotopy equivalent. One really needs a map "f" : "X" → "Y" inducing an isomorphism on homotopy groups. For instance, take "X" = "S"2 × RP3 and "Y" = RP2 × "S"3. Then "X" and "Y" have the same fundamental group, namely the cyclic group Z/2, and the same universal cover, namely "S"2 × "S"3; thus, they have isomorphic homotopy groups. On the other hand, their homology groups are different (as can be seen from the Künneth formula); thus, "X" and "Y" are not homotopy equivalent.
The Whitehead theorem does not hold for general topological spaces or even for all subspaces of Rn. For example, the Warsaw circle, a compact subset of the plane, has all homotopy groups zero, but the map from the Warsaw circle to a single point is not a homotopy equivalence. The study of possible generalizations of Whitehead's theorem to more general spaces is part of the subject of shape theory.
Generalization to model categories.
In any model category, a weak equivalence between cofibrant-fibrant objects is a homotopy equivalence. | [
{
"math_id": 0,
"text": "f\\colon X \\to Y"
},
{
"math_id": 1,
"text": "f_*\\colon \\pi_n(X,x) \\to \\pi_n(Y,f(x)),"
},
{
"math_id": 2,
"text": "f_*\\colon \\pi_0(X) \\to \\pi_0(Y)"
}
] | https://en.wikipedia.org/wiki?curid=976793 |
976834 | Decomposition of spectrum (functional analysis) | Construction in functional analysis, useful to solve differential equations
The spectrum of a linear operator formula_0 that operates on a Banach space formula_1 is a fundamental concept of functional analysis. The spectrum consists of all scalars formula_2 such that the operator formula_3 does not have a bounded inverse on formula_1. The spectrum has a standard decomposition into three parts: a point spectrum, a continuous spectrum, and a residual spectrum.
This decomposition is relevant to the study of differential equations, and has applications to many branches of science and engineering. A well-known example from quantum mechanics is the explanation for the discrete spectral lines and the continuous band in the light emitted by excited atoms of hydrogen.
Decomposition into point spectrum, continuous spectrum, and residual spectrum.
For bounded Banach space operators.
Let "X" be a Banach space, "B"("X") the family of bounded operators on "X", and "T" ∈ "B"("X"). By definition, a complex number "λ" is in the spectrum of "T", denoted "σ"("T"), if "T" − "λ" does not have an inverse in "B"("X").
If "T" − "λ" is one-to-one and onto, i.e. bijective, then its inverse is bounded; this follows directly from the open mapping theorem of functional analysis. So, "λ" is in the spectrum of "T" if and only if "T" − "λ" is not one-to-one or not onto. One distinguishes three separate cases:
So "σ"("T") is the disjoint union of these three sets,
formula_4 The complement of the spectrum formula_5 is known as the resolvent set formula_6; that is, formula_7.
In addition, when "T" − "λ" does not have dense range, whether it is injective or not, "λ" is said to be in the compression spectrum of "T", "σcp"("T"). The compression spectrum consists of the whole residual spectrum and part of the point spectrum.
For unbounded operators.
The spectrum of an unbounded operator can be divided into three parts in the same way as in the bounded case, but because the operator is not defined everywhere, the definitions of domain, inverse, etc. are more involved.
Examples.
Multiplication operator.
Given a σ-finite measure space ("S", "Σ", "μ"), consider the Banach space "Lp"("μ"). A function "h": "S" → C is called essentially bounded if "h" is bounded "μ"-almost everywhere. An essentially bounded "h" induces a bounded multiplication operator "Th" on "Lp"("μ"):
formula_8
The operator norm of "T" is the essential supremum of "h". The essential range of "h" is defined in the following way: a complex number "λ" is in the essential range of "h" if for all "ε" > 0, the preimage of the open ball "Bε"("λ") under "h" has strictly positive measure. We will show first that "σ"("Th") coincides with the essential range of "h" and then examine its various parts.
If "λ" is not in the essential range of "h", take "ε" > 0 such that "h"−1("Bε"("λ")) has zero measure. The function "g"("s") = 1/("h"("s") − "λ") is bounded almost everywhere by 1/"ε". The multiplication operator "Tg" satisfies "T""g" · ("T""h" − "λ") = ("T""h" − "λ") · "T""g" = "I". So "λ" does not lie in spectrum of "Th". On the other hand, if "λ" lies in the essential range of "h", consider the sequence of sets {"Sn" =
"h"−1("B"1/"n"("λ"))}. Each "Sn" has positive measure. Let "fn" be the characteristic function of "Sn". We can compute directly
formula_9
This shows "Th" − "λ" is not bounded below, therefore not invertible.
If "λ" is such that "μ"( "h"−1({"λ"})) > 0, then "λ" lies in the point spectrum of "Th" as follows. Let "f" be the characteristic function of the measurable set "h"−1("λ"), then by considering two cases, we find
formula_10
so λ is an eigenvalue of "T""h".
Any "λ" in the essential range of "h" that does not have a positive measure preimage is in the continuous spectrum of "Th". To show this, we must show that "Th" − "λ" has dense range. Given "f" ∈ "Lp"("μ"), again we consider the sequence of sets {"Sn" = "h"−1("B"1/n("λ"))}. Let "gn" be the characteristic function of "S" − "Sn". Define
formula_11
Direct calculation shows that "fn" ∈ "Lp"("μ"), with formula_12. Then by the dominated convergence theorem,
formula_13
in the "Lp"("μ") norm.
Therefore, multiplication operators have no residual spectrum. In particular, by the spectral theorem, normal operators on a Hilbert space have no residual spectrum.
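The continuous-spectrum case can be probed numerically: on the sequence space "l"2 (counting measure), "Th" is a diagonal operator. Taking "h"("n") = 1/"n" as an illustrative choice, "λ" = 0 lies in the essential range of "h" but is never attained, and the characteristic functions "fn" show that "Th" − 0 is not bounded below:

```python
import numpy as np

# T_h on a truncation of l^2 with h(n) = 1/n: a diagonal operator.
# 0 is in the essential range of h, but h never attains 0.
N = 1000
h = 1.0 / np.arange(1, N + 1)

for n in (10, 100, 1000):
    f_n = np.zeros(N)
    f_n[n - 1] = 1.0                   # characteristic function of {n}
    print(n, np.linalg.norm(h * f_n))  # ||(T_h - 0) f_n|| = 1/n -> 0
```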
Shifts.
In the special case when "S" is the set of natural numbers and "μ" is the counting measure, the corresponding "Lp"("μ") is denoted by l"p". This space consists of complex valued sequences {"xn"} such that
formula_14
For 1 < "p" < ∞, "l p" is reflexive. Define the left shift "T" : "l p" → "l p" by
formula_15
"T" is a partial isometry with operator norm 1. So "σ"("T") lies in the closed unit disk of the complex plane.
"T*" is the right shift (or unilateral shift), which is an isometry on "l q", where 1/"p" + 1/"q" = 1:
formula_16
For "λ" ∈ C with |"λ"| < 1,
formula_17
and "T x" = "λ x". Consequently, the point spectrum of "T" contains the open unit disk. Now, "T*" has no eigenvalues, i.e. "σp"("T*") is empty. Thus, invoking reflexivity and the theorem in Spectrum_(functional_analysis)#Spectrum_of_the_adjoint_operator (that "σp"("T") ⊂ "σr"("T"*) ∪ "σp"("T"*)), we can deduce that the open unit disk lies in the residual spectrum of "T*".
The spectrum of a bounded operator is closed, which implies the unit circle, { |"λ"| = 1 } ⊂ C, is in "σ"("T"). Again by reflexivity of "l p" and the theorem given above (this time, that "σr"("T") ⊂ "σp"("T"*)), we have that "σr"("T") is also empty. Therefore, for a complex number "λ" with unit norm, one must have "λ" ∈ "σp"("T") or "λ" ∈ "σc"("T"). Now if |"λ"| = 1 and
formula_18
then
formula_19
which cannot be in "l p", a contradiction. This means the unit circle must lie in the continuous spectrum of "T".
So for the left shift "T", "σp"("T") is the open unit disk and "σc"("T") is the unit circle, whereas for the right shift "T*", "σr"("T*") is the open unit disk and "σc"("T*") is the unit circle.
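The eigenvector formula above is easy to check on a finite truncation of the left shift; for |"λ"| < 1 the truncated vector (1, "λ", "λ"2, ...) fails to be an exact eigenvector only in its last coordinate, so the residual is negligible:

```python
import numpy as np

N, lam = 200, 0.5 + 0.3j    # any |lam| < 1
x = lam ** np.arange(N)     # x = (1, lam, lam^2, ...)

Tx = np.roll(x, -1)         # left shift: (T x)_k = x_{k+1}
Tx[-1] = 0.0                # truncation boundary

print(np.linalg.norm(Tx - lam * x) / np.linalg.norm(x))  # ~0
```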
For "p" = 1, one can perform a similar analysis. The results will not be exactly the same, since reflexivity no longer holds.
Self-adjoint operators on Hilbert space.
Hilbert spaces are Banach spaces, so the above discussion applies to bounded operators on Hilbert spaces as well. A subtle point concerns the spectrum of "T"*. For a Banach space, "T"* denotes the transpose and "σ"("T*") = "σ"("T"). For a Hilbert space, "T"* normally denotes the adjoint of an operator "T" ∈ "B"("H"), not the transpose, and "σ"("T*") is not "σ"("T") but rather its image under complex conjugation.
For a self-adjoint "T" ∈ "B"("H"), the Borel functional calculus gives additional ways to break up the spectrum naturally.
Borel functional calculus.
This subsection briefly sketches the development of this calculus. The idea is to first establish the continuous functional calculus, and then pass to measurable functions via the Riesz–Markov–Kakutani representation theorem. For the continuous functional calculus, the key ingredients are the following:
1. If "T" is self-adjoint, then for any polynomial "P", the operator norm satisfies formula_20
2. The Stone–Weierstrass theorem, which implies that the family of polynomials (with complex coefficients) is dense in "C"("σ"("T")), the continuous functions on "σ"("T").
The family "C"("σ"("T")) is a Banach algebra when endowed with the uniform norm. So the mapping
formula_21
is an isometric homomorphism from a dense subset of "C"("σ"("T")) to "B"("H"). Extending the mapping by continuity gives "f"("T") for "f" ∈ C("σ"("T")): let "Pn" be polynomials such that "Pn" → "f" uniformly and define "f"("T") = lim "Pn"("T"). This is the continuous functional calculus.
For a fixed "h" ∈ "H", we notice that
formula_22
is a positive linear functional on "C"("σ"("T")). According to the Riesz–Markov–Kakutani representation theorem a unique measure "μh" on "σ"("T") exists such that
formula_23
This measure is sometimes called the spectral measure associated to " h". The spectral measures can be used to extend the continuous functional calculus to bounded Borel functions. For a bounded function "g" that is Borel measurable, define, for a proposed "g"("T")
formula_24
Via the polarization identity, one can recover (since "H" is assumed to be complex)
formula_25
and therefore "g"("T") "h" for arbitrary "h".
In the present context, the spectral measures, combined with a result from measure theory, give a decomposition of "σ"("T").
Decomposition into absolutely continuous, singular continuous, and pure point.
Let "h" ∈ "H" and "μh" be its corresponding spectral measure on "σ"("T") ⊂ R. According to a refinement of Lebesgue's decomposition theorem, "μh" can be decomposed into three mutually singular parts:
formula_26
where "μ"ac is absolutely continuous with respect to the Lebesgue measure, "μ"sc is singular with respect to the Lebesgue measure and atomless, and "μ"pp is a pure point measure.
All three types of measures are invariant under linear operations. Let "H"ac be the subspace consisting of vectors whose spectral measures are absolutely continuous with respect to the Lebesgue measure. Define "H"pp and "H"sc in analogous fashion. These subspaces are invariant under "T". For example, suppose "h" ∈ "H"ac and let "k" = "T h". Let "χ" be the characteristic function of some Borel set in "σ"("T"), then
formula_27
So
formula_28
and "k" ∈ "H"ac. Furthermore, applying the spectral theorem gives
formula_29
This leads to the following definitions:
1. The spectrum of "T" restricted to "H"ac is called the absolutely continuous spectrum of "T", "σ"ac("T").
2. The spectrum of "T" restricted to "H"sc is called its singular continuous spectrum, "σ"sc("T").
3. The set of eigenvalues of "T" is called the pure point spectrum of "T", "σ"pp("T"); the closure of the eigenvalues is the spectrum of "T" restricted to "H"pp.
So
formula_30
Comparison.
A bounded self-adjoint operator on Hilbert space is, a fortiori, a bounded operator on a Banach space. Therefore, one can also apply to "T" the decomposition of the spectrum that was achieved above for bounded operators on a Banach space. Unlike the Banach space formulation, the union
formula_31
need not be disjoint. It is disjoint when the operator "T" is of uniform multiplicity, say "m", i.e. if "T" is unitarily equivalent to multiplication by "λ" on the direct sum
formula_32
for some Borel measures formula_33. When more than one measure appears in the above expression, we see that it is possible for the union of the three types of spectra to not be disjoint. If "λ" ∈ "σac"("T") ∩ "σpp"("T"), "λ" is sometimes called an eigenvalue "embedded" in the absolutely continuous spectrum.
When "T" is unitarily equivalent to multiplication by "λ" on
formula_34
the decomposition of "σ"("T") from Borel functional calculus is a refinement of the Banach space case.
Quantum mechanics.
The preceding comments can be extended to the unbounded self-adjoint operators since Riesz-Markov holds for locally compact Hausdorff spaces.
In quantum mechanics, observables are (often unbounded) self-adjoint operators and their spectra are the possible outcomes of measurements.
The pure point spectrum corresponds to bound states in the following way:
A particle is said to be in a bound state if it remains "localized" in a bounded region of space for all times formula_35. Intuitively one might therefore think that the "discreteness" of the spectrum is intimately related to the corresponding states being "localized". However, a careful mathematical analysis shows that this is not true in general. For example, consider the function
formula_37
This function is normalizable (i.e. formula_38) as
formula_39
Known as the Basel problem, this series converges to formula_40. Yet, formula_41 increases as formula_42, i.e., the state "escapes to infinity". The phenomena of Anderson localization and dynamical localization describe when the eigenfunctions are localized in a physical sense. Anderson localization means that eigenfunctions decay exponentially as formula_43. Dynamical localization is more subtle to define.
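The normalization claim is a quick numerical check:

```python
import math

# Partial sums of 1/n^2 approach pi^2/6 (the Basel problem), so the
# piecewise function f above is indeed square-integrable.
partial = sum(1.0 / n ** 2 for n in range(1, 100_000))
print(partial, math.pi ** 2 / 6)  # both about 1.6449
```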
Sometimes, when performing quantum mechanical measurements, one encounters "eigenstates" that are not localized, e.g., quantum states that do not lie in "L"2(R). These are free states belonging to the absolutely continuous spectrum. In the spectral theorem for unbounded self-adjoint operators, these states are referred to as "generalized eigenvectors" of an observable with "generalized eigenvalues" that do not necessarily belong to its spectrum. Alternatively, if it is insisted that the notion of eigenvectors and eigenvalues survive the passage to rigor, one can consider operators on rigged Hilbert spaces.
An example of an observable whose spectrum is purely absolutely continuous is the position operator of a free particle moving on the entire real line. Also, since the momentum operator is unitarily equivalent to the position operator, via the Fourier transform, it has a purely absolutely continuous spectrum as well.
The singular spectrum corresponds to physically impossible outcomes. It was believed for some time that the singular spectrum was something artificial. However, examples such as the almost Mathieu operator and random Schrödinger operators have shown that all types of spectra arise naturally in physics.
Decomposition into essential spectrum and discrete spectrum.
Let formula_44 be a closed operator defined on the domain formula_45 which is dense in "X". Then there is a decomposition of the spectrum of "A" into a disjoint union,
formula_46
where formula_47 denotes the fifth type of the essential spectrum of formula_44 (if formula_44 is a self-adjoint operator, then formula_48 for all formula_49), and formula_50 is the discrete spectrum of formula_44, which consists of the isolated points of formula_51 that are eigenvalues of finite algebraic multiplicity; it is a subset of the point spectrum: formula_52.
Notes.
<templatestyles src="Reflist/styles.css" />
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": "T-\\lambda"
},
{
"math_id": 4,
"text": "\\sigma(T) = \\sigma_p (T) \\cup \\sigma_c (T) \\cup \\sigma_r (T)."
},
{
"math_id": 5,
"text": "\\sigma(T)"
},
{
"math_id": 6,
"text": "\\rho(T)"
},
{
"math_id": 7,
"text": "\\rho(T)=\\mathbb{C}\\setminus\\sigma(T)"
},
{
"math_id": 8,
"text": "(T_h f)(s) = h(s) \\cdot f(s)."
},
{
"math_id": 9,
"text": "\n\\| (T_h - \\lambda) f_n \\|_p ^p = \\| (h - \\lambda) f_n \\|_p ^p = \\int_{S_n} | h - \\lambda \\; |^p d \\mu \n\\leq \\frac{1}{n^p} \\; \\mu(S_n) = \\frac{1}{n^p} \\| f_n \\|_p ^p.\n"
},
{
"math_id": 10,
"text": "\\forall s \\in S, \\; (T_h f)(s) = \\lambda f(s),"
},
{
"math_id": 11,
"text": "f_n(s) = \\frac{1}{ h(s) - \\lambda} \\cdot g_n(s) \\cdot f(s)."
},
{
"math_id": 12,
"text": "\\| f_n\\|_p\\leq n \\|f\\|_p"
},
{
"math_id": 13,
"text": "(T_h - \\lambda) f_n \\rightarrow f"
},
{
"math_id": 14,
"text": "\\sum_{n \\geq 0} | x_n |^p < \\infty."
},
{
"math_id": 15,
"text": "T(x_1, x_2, x_3, \\dots) = (x_2, x_3, x_4, \\dots)."
},
{
"math_id": 16,
"text": "T^*(x_1, x_2, x_3, \\dots) = (0, x_1, x_2, \\dots)."
},
{
"math_id": 17,
"text": "x = (1, \\lambda, \\lambda ^2, \\dots) \\in l^p"
},
{
"math_id": 18,
"text": "T x = \\lambda x, \\qquad i.e. \\; (x_2, x_3, x_4, \\dots) = \\lambda (x_1, x_2, x_3, \\dots),"
},
{
"math_id": 19,
"text": "x = x_1 (1, \\lambda, \\lambda^2, \\dots),"
},
{
"math_id": 20,
"text": "\\| P(T) \\| = \\sup_{\\lambda \\in \\sigma(T)} |P(\\lambda)|."
},
{
"math_id": 21,
"text": "P \\rightarrow P(T)"
},
{
"math_id": 22,
"text": "f \\rightarrow \\langle h, f(T) h \\rangle"
},
{
"math_id": 23,
"text": "\\int_{\\sigma(T)} f \\, d \\mu_h = \\langle h, f(T) h \\rangle."
},
{
"math_id": 24,
"text": "\\int_{\\sigma(T)} g \\, d \\mu_h = \\langle h, g(T) h \\rangle."
},
{
"math_id": 25,
"text": "\\langle k, g(T) h \\rangle."
},
{
"math_id": 26,
"text": " \\mu_h = \\mu_{\\mathrm{ac}} + \\mu_{\\mathrm{sc}} + \\mu_{\\mathrm{pp}}"
},
{
"math_id": 27,
"text": "\\langle k, \\chi(T) k \\rangle = \\int_{\\sigma(T)} \\chi(\\lambda) \\cdot \\lambda^2 d \\mu_{h}(\\lambda) = \\int_{\\sigma(T)} \\chi(\\lambda) \\; d \\mu_k(\\lambda)."
},
{
"math_id": 28,
"text": "\\lambda^2 d \\mu_{h} = d \\mu_{k}"
},
{
"math_id": 29,
"text": "H = H_{\\mathrm{ac}} \\oplus H_{\\mathrm{sc}} \\oplus H_{\\mathrm{pp}}."
},
{
"math_id": 30,
"text": "\\sigma(T) = \\sigma_{\\mathrm{ac}}(T) \\cup \\sigma_{\\mathrm{sc}}(T) \\cup {\\bar \\sigma_{\\mathrm{pp}}(T)}."
},
{
"math_id": 31,
"text": "\\sigma(T) = {\\bar \\sigma_{\\mathrm{pp}}(T)} \\cup \\sigma_{\\mathrm{ac}}(T) \\cup \\sigma_{\\mathrm{sc}}(T)"
},
{
"math_id": 32,
"text": "\\bigoplus _{i = 1} ^m L^2(\\mathbb{R}, \\mu_i)"
},
{
"math_id": 33,
"text": "\\mu_i"
},
{
"math_id": 34,
"text": "L^2(\\mathbb{R}, \\mu),"
},
{
"math_id": 35,
"text": "t\\in\\mathbb{R}"
},
{
"math_id": 36,
"text": "H"
},
{
"math_id": 37,
"text": " f(x) = \\begin{cases}\nn & \\text{if }x \\in \\left[n, n+\\frac{1}{n^4}\\right] \\\\\n0 & \\text{else}\n\\end{cases}, \\quad \\forall n \\in \\mathbb{N}. "
},
{
"math_id": 38,
"text": "f\\in L^2(\\mathbb{R})"
},
{
"math_id": 39,
"text": "\\int_{n}^{n+\\frac{1}{n^4}}n^2\\,dx = \\frac{1}{n^2} \\Rightarrow \\int_{-\\infty}^{\\infty} |f(x)|^2\\,dx = \\sum_{n=1}^\\infty \\frac{1}{n^2}."
},
{
"math_id": 40,
"text": "\\frac{\\pi^2}{6}"
},
{
"math_id": 41,
"text": "f"
},
{
"math_id": 42,
"text": "x \\to \\infty"
},
{
"math_id": 43,
"text": " x \\to \\infty "
},
{
"math_id": 44,
"text": "A:\\,X\\to X"
},
{
"math_id": 45,
"text": "D(A)\\subset X"
},
{
"math_id": 46,
"text": "\\sigma(A)=\\sigma_{\\mathrm{ess},5}(A)\\sqcup\\sigma_{\\mathrm{d}}(A),"
},
{
"math_id": 47,
"text": "\\sigma_{\\mathrm{ess},5}(A)"
},
{
"math_id": 48,
"text": "\\sigma_{\\mathrm{ess},k}(A)=\\sigma_{\\mathrm{ess}}(A)"
},
{
"math_id": 49,
"text": "1\\le k\\le 5"
},
{
"math_id": 50,
"text": "\\sigma_{\\mathrm{d}}(A)"
},
{
"math_id": 51,
"text": "\\sigma(A)"
},
{
"math_id": 52,
"text": "\\sigma_d(A)\\subset\\sigma_p(A)"
}
] | https://en.wikipedia.org/wiki?curid=976834 |
976887 | List of Somalis | This is a list of notable Somalis from Somalia, Somaliland, Djibouti, Kenya, Ethiopia as well as the Somali diaspora.
Politicians.
Other politicians.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\thicksim"
}
] | https://en.wikipedia.org/wiki?curid=976887 |
9769502 | Localized molecular orbitals | Localized molecular orbitals are molecular orbitals which are concentrated in a limited spatial region of a molecule, such as a specific bond or lone pair on a specific atom. They can be used to relate molecular orbital calculations to simple bonding theories, and also to speed up post-Hartree–Fock electronic structure calculations by taking advantage of the local nature of electron correlation. Localized orbitals in systems with periodic boundary conditions are known as Wannier functions.
Standard ab initio quantum chemistry methods lead to delocalized orbitals that, in general, extend over an entire molecule and have the symmetry of the molecule. Localized orbitals may then be found as linear combinations of the delocalized orbitals, given by an appropriate unitary transformation.
In the water molecule for example, ab initio calculations show bonding character primarily in two molecular orbitals, each with electron density equally distributed among the two O-H bonds. The localized orbital corresponding to one O-H bond is the sum of these two delocalized orbitals, and the localized orbital for the other O-H bond is their difference; as per Valence bond theory.
For multiple bonds and lone pairs, different localization procedures give different orbitals. The Boys and Edmiston-Ruedenberg localization methods mix these orbitals to give equivalent bent bonds in ethylene and "rabbit ear" lone pairs in water, while the Pipek-Mezey method preserves their respective σ and π symmetry.
Equivalence of localized and delocalized orbital descriptions.
For molecules with a closed electron shell, in which each molecular orbital is doubly occupied, the localized and delocalized orbital descriptions are in fact equivalent and represent the same physical state. It might seem, again using the example of water, that placing two electrons in the first bond and two "other" electrons in the second bond is not the same as having four electrons free to move over both bonds. However, in quantum mechanics all electrons are identical and cannot be distinguished as "same" or "other". The total wavefunction must have a form which satisfies the Pauli exclusion principle, such as a Slater determinant (or linear combination of Slater determinants), which changes sign when two electrons are exchanged, and it can be shown that such a function is unchanged by any unitary transformation of the doubly occupied orbitals.
For molecules with an open electron shell, in which some molecular orbitals are singly occupied, the electrons of alpha and beta spin must be localized separately. This applies to radical species such as nitric oxide and dioxygen. Again, in this case the localized and delocalized orbital descriptions are equivalent and represent the same physical state.
Computation methods.
Localized molecular orbitals (LMO) are obtained by unitary transformation upon a set of canonical molecular orbitals (CMO). The transformation usually involves the optimization (either minimization or maximization) of the expectation value of a specific operator. The generic form of the localization potential is:
formula_0,
where formula_1 is the localization operator and formula_2 is a molecular spatial orbital. Many methodologies have been developed during the past decades, differing in the form of formula_1.
The optimization of the objective function is usually performed using pairwise Jacobi rotations. However, this approach is prone to saddle point convergence (if it even converges), and thus other approaches have also been developed, from simple conjugate gradient methods with exact line searches, to Newton-Raphson and trust-region methods.
Foster-Boys.
The Foster-Boys (also known as Boys) localization method minimizes the spatial extent of the orbitals by minimizing formula_3, where formula_4. This turns out to be equivalent to the easier task of maximizing formula_5. In one dimension, the Foster-Boys (FB) objective function can also be written as
formula_6.
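For two orbitals in one dimension, the whole optimization reduces to a single rotation angle, which makes the scheme easy to sketch. Assuming a given matrix "X" of position matrix elements ⟨"φi"|"x"|"φj"⟩ (the values below are purely illustrative, not from a real system), Foster-Boys localization maximizes the sum of squared diagonal elements of the rotated matrix:

```python
import numpy as np

# Illustrative <phi_i|x|phi_j> matrix for two orthonormal orbitals (1D).
X = np.array([[0.2, 0.8],
              [0.8, 1.5]])

def objective(theta):
    # Rotate the orbital pair and evaluate sum_i <i|x|i>^2, the quantity
    # that Foster-Boys localization maximizes.
    c, s = np.cos(theta), np.sin(theta)
    U = np.array([[c, -s], [s, c]])
    Xr = U @ X @ U.T
    return Xr[0, 0] ** 2 + Xr[1, 1] ** 2

thetas = np.linspace(0.0, np.pi / 2, 1001)
best = thetas[np.argmax([objective(t) for t in thetas])]
print(best, objective(0.0), objective(best))  # the objective improves
```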
Fourth moment.
The fourth moment (FM) procedure is analogous to Foster-Boys scheme, however the orbital fourth moment is used instead of the orbital second moment. The objective function to be minimized is
formula_7.
The fourth moment method produces more localized virtual orbitals than Foster-Boys method, since it implies a larger penalty on the delocalized tails. For graphene (a delocalized system), the fourth moment method produces more localized occupied orbitals than Foster-Boys and Pipek-Mezey schemes.
Edmiston-Ruedenberg.
Edmiston-Ruedenberg localization maximizes the electronic self-repulsion energy by maximizing formula_8, where formula_9.
Pipek-Mezey.
Pipek-Mezey localization takes a slightly different approach, maximizing the sum of orbital-dependent partial charges on the nuclei:
formula_10.
Pipek and Mezey originally used Mulliken charges, which are mathematically ill defined. Recently, Pipek-Mezey style schemes based on a variety of mathematically well-defined partial charge estimates have been discussed. Some notable choices are Voronoi charges, Becke charges, Hirshfeld or Stockholder charges, intrinsic atomic orbital charges (see intrinsic bond orbitals), Bader charges, or "fuzzy atom" charges. Rather surprisingly, despite the wide variation in the (total) partial charges reproduced by the different estimates, analysis of the resulting Pipek-Mezey orbitals has shown that the localized orbitals are quite insensitive to the partial charge estimation scheme used in the localization process. However, due to the ill-defined mathematical nature of Mulliken charges (and Löwdin charges, which have also been used in some works), and since better alternatives are nowadays available, it is advisable to use them in place of the original version.
The most important quality of the Pipek-Mezey scheme is that it preserves σ-π separation in planar systems, which sets it apart from the Foster-Boys and Edmiston-Ruedenberg schemes that mix σ and π bonds. This property holds independent of the partial charge estimate used.
While the usual formulation of the Pipek-Mezey method invokes an iterative procedure to localize the orbitals, a non-iterative method has also been recently suggested.
In organic chemistry.
Organic chemistry is often discussed in terms of localized molecular orbitals in a qualitative and informal sense. Historically, much of classical organic chemistry was built on the older valence bond / orbital hybridization models of bonding. To account for phenomena like aromaticity, this simple model of bonding is supplemented by semi-quantitative results from Hückel molecular orbital theory. However, the understanding of stereoelectronic effects requires the analysis of interactions between donor and acceptor orbitals between two molecules or different regions within the same molecule, and molecular orbitals must be considered. Because proper (symmetry-adapted) molecular orbitals are fully delocalized and do not admit a ready correspondence with the "bonds" of the molecule, as visualized by the practicing chemist, the most common approach is to instead consider the interaction between filled and unfilled localized molecular orbitals that correspond to σ bonds, π bonds, lone pairs, and their unoccupied counterparts. These orbitals are typically given the notation σ (sigma bonding), π (pi bonding), "n" (occupied nonbonding orbital, "lone pair"), "p" (unoccupied nonbonding orbital, "empty p orbital"; the symbol "n"* for unoccupied nonbonding orbital is seldom used), π* (pi antibonding), and σ* (sigma antibonding). (Woodward and Hoffmann use ω for nonbonding orbitals in general, occupied or unoccupied.) When comparing localized molecular orbitals derived from the same atomic orbitals, these classes generally follow the order σ < π < "n" < "p" ("n"*) < π* < σ* when ranked by increasing energy.
The localized molecular orbitals that organic chemists often depict can be thought of as qualitative renderings of orbitals generated by the computational methods described above. However, they do not map onto any single approach, nor are they used consistently. For instance, the lone pairs of water are usually treated as two equivalent sp"x" hybrid orbitals, while the corresponding "nonbonding" orbitals of carbenes are generally treated as a filled σ(out) orbital and an unfilled pure p orbital, even though the lone pairs of water could be described analogously by filled σ(out) and p orbitals (for further discussion, see the article on lone pairs and the discussion above on sigma-pi and equivalent-orbital models). In other words, the type of localized orbital invoked depends on context and considerations of convenience and utility.
{
"math_id": 0,
"text": " \\langle \\hat{L} \\rangle = \\sum_{i=1}^{n} \\langle \\phi_i \\phi_i | \\hat{L} | \\phi_i \\phi_i \\rangle "
},
{
"math_id": 1,
"text": "\\hat{L}"
},
{
"math_id": 2,
"text": "\\phi_i"
},
{
"math_id": 3,
"text": " \\langle \\hat{L} \\rangle "
},
{
"math_id": 4,
"text": " \\hat{L} = |\\vec{r}_1 - \\vec{r}_2|^2 "
},
{
"math_id": 5,
"text": " \\sum_{i}^{n}[ \\langle \\phi_i | \\vec{r} | \\phi_i \\rangle ] ^2 "
},
{
"math_id": 6,
"text": " \\langle \\hat{L}_\\text{FB} \\rangle = \\sum_i \\langle \\phi_i | (\\hat{x} - \\langle i | \\hat{x} | i \\rangle ) ^2 | \\phi_i \\rangle "
},
{
"math_id": 7,
"text": " \\langle \\hat{L}_\\text{FM} \\rangle = \\sum_i \\langle \\phi_i | (\\hat{x} - \\langle i | \\hat{x} | \\phi_i \\rangle ) ^4 | i \\rangle "
},
{
"math_id": 8,
"text": " \\langle \\hat{L}_\\text{ER} \\rangle "
},
{
"math_id": 9,
"text": " \\hat{L} = |\\vec{r}_1 - \\vec{r}_2|^{-1} "
},
{
"math_id": 10,
"text": " \\langle \\hat{L} \\rangle_\\textrm{PM} = \\sum_{A}^{\\textrm{atoms}} \\sum_{i}^{\\textrm{orbitals}} |q_i^A|^2 "
}
] | https://en.wikipedia.org/wiki?curid=9769502 |
9770 | Eclipse | Astronomical event where one body is hidden by another
An eclipse is an astronomical event which occurs when an astronomical object or spacecraft is temporarily obscured, by passing into the shadow of another body or by having another body pass between it and the viewer. This alignment of three celestial objects is known as a "syzygy". An eclipse is the result of either an "occultation" (completely hidden) or a "transit" (partially hidden). A "deep eclipse" (or "deep occultation") occurs when a small astronomical object passes behind a bigger one.
The term "eclipse" is most often used to describe either a solar eclipse, when the Moon's shadow crosses the Earth's surface, or a lunar eclipse, when the Moon moves into the Earth's shadow. However, it can also refer to such events beyond the Earth–Moon system: for example, a planet moving into the shadow cast by one of its moons, a moon passing into the shadow cast by its host planet, or a moon passing into the shadow of another moon. A binary star system can also produce eclipses if the plane of the orbit of its constituent stars intersects the observer's position.
For the special cases of solar and lunar eclipses, these only happen during an "eclipse season", the two times of each year when the plane of the Earth's orbit around the Sun crosses with the plane of the Moon's orbit around the Earth and the line defined by the intersecting planes points near the Sun. The type of solar eclipse that happens during each season (whether total, annular, hybrid, or partial) depends on the apparent sizes of the Sun and Moon. If the orbit of the Earth around the Sun and the Moon's orbit around the Earth were both in the same plane with each other, then eclipses would happen every month. There would be a lunar eclipse at every full moon, and a solar eclipse at every new moon. It is because of the non-planar differences that eclipses are not a common event. If both orbits were perfectly circular, then each eclipse would be the same type every month.
Lunar eclipses can be viewed from the entire nightside half of the Earth. But solar eclipses, particularly total eclipses occurring at any one particular point on the Earth's surface, are very rare events that can be many decades apart.
Etymology.
The term is derived from the ancient Greek noun ἔκλειψις ("ékleipsis"), which means 'the abandonment', 'the downfall', or 'the darkening of a heavenly body', which is derived from the verb ἐκλείπω ("ekleípō"), meaning 'to abandon', 'to darken', or 'to cease to exist', a combination of the prefix ἐκ- ("ek-"), from the preposition ἐκ ("ek") 'out', and of the verb λείπω ("leípō") 'to be absent'.
Umbra, penumbra and antumbra.
For any two objects in space, a line can be extended from the first through the second. The latter object will block some amount of light being emitted by the former, creating a region of shadow around the axis of the line. Typically these objects are moving with respect to each other and their surroundings, so the resulting shadow will sweep through a region of space, only passing through any particular location in the region for a fixed interval of time. As viewed from such a location, this shadowing event is known as an eclipse.
Typically the cross-section of the objects involved in an astronomical eclipse is roughly disk-shaped. The region of an object's shadow during an eclipse is divided into three parts:
1. The umbra, within which the object completely covers the light source.
2. The antumbra, extending beyond the tip of the umbra, within which the object is completely in front of the light source but too small to cover it entirely.
3. The penumbra, within which the object is only partially in front of the light source.
A total eclipse occurs when the observer is within the umbra, an annular eclipse when the observer is within the antumbra, and a partial eclipse when the observer is within the penumbra. During a lunar eclipse only the umbra and penumbra are applicable, because the antumbra of the Sun-Earth system lies far beyond the Moon. Analogously, Earth's apparent diameter from the viewpoint of the Moon is nearly four times that of the Sun and thus cannot produce an annular eclipse. The same terms may be used analogously in describing other eclipses, e.g., the antumbra of Deimos crossing Mars, or Phobos entering Mars's penumbra.
The "first contact" occurs when the eclipsing object's disc first starts to impinge on the light source; "second contact" is when the disc moves completely within the light source; "third contact" when it starts to move out of the light; and "fourth" or "last contact" when it finally leaves the light source's disc entirely.
For spherical bodies, when the occulting object is smaller than the star, the length ("L") of the umbra's cone-shaped shadow is given by:
formula_0
where "Rs" is the radius of the star, "Ro" is the occulting object's radius, and "r" is the distance from the star to the occulting object. For Earth, on average "L" is equal to 1.384×106 km, which is much larger than the Moon's semimajor axis of 3.844×105 km. Hence the umbral cone of the Earth can completely envelop the Moon during a lunar eclipse. If the occulting object has an atmosphere, however, some of the luminosity of the star can be refracted into the volume of the umbra. This occurs, for example, during an eclipse of the Moon by the Earth—producing a faint, ruddy illumination of the Moon even at totality.
On Earth, the shadow cast during an eclipse moves at roughly 1 km per second; the exact speed depends on the location of the shadow on the Earth and the angle at which it is moving.
Eclipse cycles.
An eclipse cycle takes place when eclipses in a series are separated by a certain interval of time. This happens when the orbital motions of the bodies form repeating harmonic patterns. A particular instance is the saros, which results in a repetition of a solar or lunar eclipse every 6,585.3 days, or a little over 18 years. Because this is not a whole number of days, successive eclipses will be visible from different parts of the world. In one saros period there are 239.0 anomalistic periods, 241.0 sidereal periods, 242.0 nodical periods, and 223.0 synodic periods. Although the orbit of the Moon does not give exact integers, the numbers of orbit cycles are close enough to integers to give strong similarity for eclipses spaced at 18.03 yr intervals.
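The near-commensurability behind the saros can be checked with the standard mean month lengths (values in days, quoted here for illustration):

```python
synodic, draconic, anomalistic = 29.530589, 27.212221, 27.554550

print(223 * synodic)      # ~6585.32 days
print(242 * draconic)     # ~6585.36 days (draconic = nodical month)
print(239 * anomalistic)  # ~6585.54 days -> all close to 18.03 years
```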
Earth–Moon system.
An eclipse involving the Sun, Earth, and Moon can occur only when they are nearly in a straight line, allowing one to be hidden behind another, viewed from the third. Because the orbital plane of the Moon is tilted with respect to the orbital plane of the Earth (the ecliptic), eclipses can occur only when the Moon is close to the intersection of these two planes (the nodes). The Sun, Earth and nodes are aligned twice a year (during an eclipse season), and eclipses can occur during a period of about two months around these times. There can be from four to seven eclipses in a calendar year, which repeat according to various eclipse cycles, such as a saros.
Between 1901 and 2100 there is a maximum of seven eclipses in:
Excluding penumbral lunar eclipses, there is a maximum of seven eclipses in:
Solar eclipse.
As observed from the Earth, a solar eclipse occurs when the Moon passes in front of the Sun. The type of solar eclipse event depends on the distance of the Moon from the Earth during the event. A total solar eclipse occurs when the Earth intersects the umbra portion of the Moon's shadow. When the umbra does not reach the surface of the Earth, the Sun is only partially occulted, resulting in an annular eclipse. Partial solar eclipses occur when the viewer is inside the penumbra.
The eclipse magnitude is the fraction of the Sun's diameter that is covered by the Moon. For a total eclipse, this value is always greater than or equal to one. In both annular and total eclipses, the eclipse magnitude is the ratio of the angular sizes of the Moon to the Sun.
Solar eclipses are relatively brief events that can only be viewed in totality along a relatively narrow track. Under the most favorable circumstances, a total solar eclipse can last for 7 minutes, 31 seconds, and can be viewed along a track that is up to 250 km wide. However, the region where a partial eclipse can be observed is much larger. The Moon's umbra will advance eastward at a rate of 1,700 km/h, until it no longer intersects the Earth's surface.
During a solar eclipse, the Moon can sometimes perfectly cover the Sun because its apparent size is nearly the same as the Sun's when viewed from the Earth. A total solar eclipse is in fact an occultation while an annular solar eclipse is a transit.
When observed at points in space other than from the Earth's surface, the Sun can be eclipsed by bodies other than the Moon. Two examples include when the crew of Apollo 12 observed the Earth eclipse the Sun in 1969 and when the "Cassini" probe observed Saturn eclipsing the Sun in 2006.
Lunar eclipse.
Lunar eclipses occur when the Moon passes through the Earth's shadow. This happens only during a full moon, when the Moon is on the far side of the Earth from the Sun. Unlike a solar eclipse, an eclipse of the Moon can be observed from nearly an entire hemisphere. For this reason it is much more common to observe a lunar eclipse from a given location. A lunar eclipse lasts longer, taking several hours to complete, with totality itself usually averaging anywhere from about 30 minutes to over an hour.
There are three types of lunar eclipses: penumbral, when the Moon crosses only the Earth's penumbra; partial, when the Moon crosses partially into the Earth's umbra; and total, when the Moon crosses entirely into the Earth's umbra. Total lunar eclipses pass through all three phases. Even during a total lunar eclipse, however, the Moon is not completely dark. Sunlight refracted through the Earth's atmosphere enters the umbra and provides a faint illumination. Much as in a sunset, the atmosphere tends to more strongly scatter light with shorter wavelengths, so the illumination of the Moon by refracted light has a red hue, thus the phrase 'Blood Moon' is often found in descriptions of such lunar events as far back as eclipses are recorded.
Historical record.
Records of solar eclipses have been kept since ancient times. Eclipse dates can be used for chronological dating of historical records. A Syrian clay tablet, in the Ugaritic language, records a solar eclipse which occurred on March 5, 1223, B.C., while Paul Griffin argues that a stone in Ireland records an eclipse on November 30, 3340 B.C. Positing classical-era astronomers' use of Babylonian eclipse records, mostly from the 13th century BC, provides a feasible and mathematically consistent explanation for how the Greeks found all three lunar mean motions (synodic, anomalistic, draconitic) to a precision of about one part in a million or better. Chinese historical records of solar eclipses date back over 3,000 years and have been used to measure changes in the Earth's rate of spin.
The first person to give a scientific explanation of eclipses was Anaxagoras (c. 500–428 BC). Anaxagoras stated that the Moon shines by reflected light from the Sun.
In the 5th century AD, solar and lunar eclipses were scientifically explained by Aryabhata in his treatise "Aryabhatiya". Aryabhata states that the Moon and planets shine by reflected sunlight and explains eclipses in terms of shadows cast by and falling on Earth. Aryabhata provides the computation and the size of the eclipsed part during an eclipse. Indian computations were so accurate that the 18th-century French scientist Guillaume Le Gentil, during a visit to Pondicherry, India, found the Indian computations of the duration of the lunar eclipse of 30 August 1765 to be short by only 41 seconds, whereas Le Gentil's charts were long by 68 seconds.
By the 1600s, European astronomers were publishing books with diagrams explaining how lunar and solar eclipses occurred. In order to disseminate this information to a broader audience and decrease fear of the consequences of eclipses, booksellers printed broadsides explaining the event either using the science or via astrology.
Eclipses in mythology and religion.
The American author Gene Weingarten described the tension between belief and eclipses thus: "I am a devout atheist but can't explain why the moon is exactly the right size, and gets positioned so precisely between the Earth and the sun, that total solar eclipses are perfect. It bothers me."
The Graeco-Roman historian Cassius Dio, writing between AD 211 and 229, relates the anecdote that Emperor Claudius considered it necessary to prevent disturbance among the Roman population by publishing a prediction for a solar eclipse which would fall on his birthday anniversary (1 August in the year AD 45). In this context, Cassius Dio provides a detailed explanation of solar and lunar eclipses.
Typically in mythology, eclipses were understood to be one variation or another of a spiritual battle between the sun and evil forces or spirits of darkness. More specifically, in Norse mythology, it is believed that there is a wolf by the name of Fenrir that is in constant pursuit of the Sun, and eclipses are thought to occur when the wolf successfully devours the divine Sun. Other Norse tribes believe that there are two wolves by the names of Sköll and Hati that are in pursuit of the Sun and the Moon, known by the names of Sol and Mani, and these tribes believe that an eclipse occurs when one of the wolves successfully eats either the Sun or the Moon.
In most types of mythologies and certain religions, eclipses were seen as a sign that the gods were angry and that danger was soon to come, so people often altered their actions in an effort to dissuade the gods from unleashing their wrath. In the Hindu religion, for example, people often sing religious hymns for protection from the evil spirits of the eclipse, and many people of the Hindu religion refuse to eat during an eclipse to avoid the effects of the evil spirits. Hindu people living in India will also wash off in the Ganges River, which is believed to be spiritually cleansing, directly following an eclipse to clean themselves of the evil spirits. In early Judaism and Christianity, eclipses were viewed as signs from God, and some eclipses were seen as a display of God's greatness or even signs of cycles of life and death. However, more ominous eclipses such as a blood moon were believed to be a divine sign that God would soon destroy their enemies.
Other planets and dwarf planets.
Gas giants.
The gas giant planets have many moons and thus frequently display eclipses. The most striking involve Jupiter, which has four large moons and a low axial tilt, making eclipses more frequent as these bodies pass through the shadow of the larger planet. Transits occur with equal frequency. It is common to see the larger moons casting circular shadows upon Jupiter's cloudtops.
The eclipses of the Galilean moons by Jupiter became accurately predictable once their orbital elements were known. During the 1670s, it was discovered that these events were occurring about 17 minutes later than expected when Jupiter was on the far side of the Sun. Ole Rømer deduced that the delay was caused by the time needed for light to travel from Jupiter to the Earth. This was used to produce the first estimate of the speed of light.
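An order-of-magnitude reconstruction of Rømer's reasoning: the extra delay corresponds to light crossing the diameter of the Earth's orbit, about two astronomical units (modern values used here for illustration):

```python
AU_km = 1.496e8    # one astronomical unit in km
delay_s = 17 * 60  # the observed ~17 minute delay

c_estimate = 2 * AU_km / delay_s
print(c_estimate)  # ~2.9e5 km/s, close to the modern 3.0e5 km/s
```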
The timing of the Jovian satellite eclipses was also used to calculate an observer's longitude upon the Earth. By knowing the expected time when an eclipse would be observed at a standard longitude (such as Greenwich), the time difference could be computed by accurately observing the local time of the eclipse. The time difference gives the longitude of the observer because every hour of difference corresponded to 15° around the Earth's equator. This technique was used, for example, by Giovanni D. Cassini in 1679 to re-map France.
On the other three gas giants (Saturn, Uranus and Neptune) eclipses only occur at certain periods during the planet's orbit, due to their higher inclination between the orbits of the moon and the orbital plane of the planet. The moon Titan, for example, has an orbital plane tilted about 1.6° to Saturn's equatorial plane. But Saturn has an axial tilt of nearly 27°. The orbital plane of Titan only crosses the line of sight to the Sun at two points along Saturn's orbit. As the orbital period of Saturn is 29.7 years, an eclipse is only possible about every 15 years.
Mars.
On Mars, only partial solar eclipses (transits) are possible, because neither of its moons is large enough, at their respective orbital radii, to cover the Sun's disc as seen from the surface of the planet. Eclipses of the moons by Mars are not only possible, but commonplace, with hundreds occurring each Earth year. There are also rare occasions when Deimos is eclipsed by Phobos. Martian eclipses have been photographed from both the surface of Mars and from orbit.
Pluto.
Pluto, with its proportionately largest moon Charon, is also the site of many eclipses. A series of such mutual eclipses occurred between 1985 and 1990. These daily events led to the first accurate measurements of the physical parameters of both objects.
Mercury and Venus.
Eclipses are impossible on Mercury and Venus, which have no moons. However, as seen from the Earth, both have been observed to transit across the face of the Sun. Transits of Venus occur in pairs separated by an interval of eight years, but each pair of events happens less than once a century. According to NASA, the next pair of Venus transits will occur on December 10, 2117, and December 8, 2125. Transits of Mercury are much more common, occurring 13 times each century, on average.
Eclipsing binaries.
A binary star system consists of two stars that orbit around their common centre of mass. The movements of both stars lie on a common orbital plane in space. When this plane is very closely aligned with the location of an observer, the stars can be seen to pass in front of each other. The result is a type of extrinsic variable star system called an eclipsing binary.
The maximum luminosity of an eclipsing binary system is equal to the sum of the luminosity contributions from the individual stars. When one star passes in front of the other, the luminosity of the system is seen to decrease. The luminosity returns to normal once the two stars are no longer in alignment.
The first eclipsing binary star system to be discovered was Algol, a star system in the constellation Perseus. Normally this star system has a visual magnitude of 2.1. However, every 2.867 days the magnitude decreases to 3.4 for more than nine hours. This is caused by the passage of the dimmer member of the pair in front of the brighter star. The concept that an eclipsing body caused these luminosity variations was introduced by John Goodricke in 1783.
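Algol's brightness dip can be translated into a flux ratio with the standard magnitude relation (a drop of Δm magnitudes dims the flux by a factor of 10^(0.4·Δm)):

```python
# Algol: visual magnitude 2.1 outside eclipse, 3.4 during eclipse.
delta_m = 3.4 - 2.1
flux_ratio = 10 ** (0.4 * delta_m)
print(flux_ratio)  # ~3.3: the system appears about 3.3 times fainter
```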
Types.
Sun – Moon – Earth: Solar eclipse | annular eclipse | hybrid eclipse | partial eclipse
Sun – Earth – Moon: Lunar eclipse | penumbral eclipse | partial lunar eclipse | central lunar eclipse
Sun – Phobos – Mars: Transit of Phobos from Mars | Solar eclipses on Mars
Sun – Deimos – Mars: Transit of Deimos from Mars | Solar eclipses on Mars
Other types: Solar eclipses on Jupiter | Solar eclipses on Saturn | Solar eclipses on Uranus | Solar eclipses on Neptune | Solar eclipses on Pluto
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L\\ =\\ \\frac{r \\cdot R_o}{R_s - R_o}"
}
] | https://en.wikipedia.org/wiki?curid=9770 |
9770927 | Depth (ring theory) | In commutative and homological algebra, depth is an important invariant of rings and modules. Although depth can be defined more generally, the most common case considered is the case of modules over a commutative Noetherian local ring. In this case, the depth of a module is related with its projective dimension by the Auslander–Buchsbaum formula. A more elementary property of depth is the inequality
formula_0
where formula_1 denotes the Krull dimension of the module formula_2. Depth is used to define classes of rings and modules with good properties, for example, Cohen-Macaulay rings and modules, for which equality holds.
Definition.
Let formula_3 be a commutative ring, formula_4 an ideal of formula_3 and formula_2 a finitely generated formula_3-module with the property that formula_5 is properly contained in formula_2. (That is, some elements of formula_2 are not in formula_5.) Then the formula_4-depth of formula_2, also commonly called the grade of formula_2, is defined as
formula_6
By definition, the depth of a local ring formula_3 with maximal ideal formula_7 is its formula_7-depth as a module over itself. If formula_3 is a Cohen-Macaulay local ring, then the depth of formula_3 is equal to the dimension of formula_3.
By a theorem of David Rees, the depth can also be characterized using the notion of a regular sequence.
Theorem (Rees).
Suppose that formula_3 is a commutative Noetherian local ring with the maximal ideal formula_7 and formula_2 is a finitely generated formula_3-module. Then all maximal regular sequences formula_8 for formula_2, where each formula_9 belongs to formula_7, have the same length formula_10 equal to the formula_7-depth of formula_2.
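As a short worked illustration of this characterization (a standard textbook example, added here for exposition), take the formal power series ring $R = k[[x, y]]$ over a field $k$, regarded as a module over itself. The element $x$ is a nonzerodivisor on $R$, and $y$ is a nonzerodivisor on $R/(x) \cong k[[y]]$, so $x, y$ is a regular sequence contained in the maximal ideal $(x, y)$. The sequence cannot be extended, because every element of the maximal ideal annihilates $R/(x, y) \cong k$. By the theorem, the depth of $R$ is therefore $2$, which equals its Krull dimension.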
Depth and projective dimension.
The projective dimension and the depth of a module over a commutative Noetherian local ring are complementary to each other. This is the content of the Auslander–Buchsbaum formula, which is not only of fundamental theoretical importance, but also provides an effective way to compute the depth of a module.
Suppose that formula_3 is a commutative Noetherian local ring with the maximal ideal formula_7 and formula_2 is a finitely generated formula_3-module. If the projective dimension of formula_2 is finite, then the Auslander–Buchsbaum formula states
formula_11
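A standard worked instance of the formula, added here for illustration: over $R = k[[x, y]]$, take $M = R/(x)$. A minimal free resolution is
$$0 \longrightarrow R \xrightarrow{\,\cdot x\,} R \longrightarrow M \longrightarrow 0,$$
so $\mathrm{pd}_R(M) = 1$. Since $\mathrm{depth}(R) = 2$, the formula gives $\mathrm{depth}(M) = 2 - 1 = 1$, which is witnessed by the maximal regular sequence consisting of the single element $y$ acting on $M \cong k[[y]]$.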
Depth zero rings.
A commutative Noetherian local ring formula_3 has depth zero if and only if its maximal ideal formula_7 is an associated prime, or, equivalently, when there is a nonzero element formula_12 of formula_3 such that formula_13 (that is, formula_12 annihilates formula_7). This means, essentially, that the closed point is an embedded component.
For example, the ring formula_14 (where formula_15 is a field), which represents a line (formula_16) with an embedded double point at the origin, has depth zero at the origin but Krull dimension one: this gives an example of a ring which is not Cohen–Macaulay.
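A direct verification for this example (a short computation added for illustration): in $R = k[x, y]/(x^2, xy)$, the class of $x$ is nonzero and annihilates the maximal ideal $(x, y)$, since
$$x \cdot x = x^2 = 0 \quad\text{and}\quad x \cdot y = xy = 0,$$
so the maximal ideal is an associated prime and the depth is zero. On the other hand, the nilradical of $R$ is $(x)$ and $R/(x) \cong k[y]$, so the Krull dimension of $R$ is one. | [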
{
"math_id": 0,
"text": " \\mathrm{depth}(M) \\leq \\dim(M), "
},
{
"math_id": 1,
"text": "\\dim M"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "R"
},
{
"math_id": 4,
"text": "I"
},
{
"math_id": 5,
"text": "I M"
},
{
"math_id": 6,
"text": " \\mathrm{depth}_I(M) = \\min \\{i: \\operatorname{Ext}^i(R/I,M)\\ne 0\\}. "
},
{
"math_id": 7,
"text": "\\mathfrak{m}"
},
{
"math_id": 8,
"text": "x_1, \\ldots, x_n"
},
{
"math_id": 9,
"text": "x_i"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": " \\mathrm{pd}_R(M) + \\mathrm{depth}(M) = \\mathrm{depth}(R)."
},
{
"math_id": 12,
"text": "x"
},
{
"math_id": 13,
"text": "x\\mathfrak{m}=0"
},
{
"math_id": 14,
"text": "k[x,y]/(x^2,xy)"
},
{
"math_id": 15,
"text": "k"
},
{
"math_id": 16,
"text": "x=0"
}
] | https://en.wikipedia.org/wiki?curid=9770927 |
977099 | Template matching | Technique in digital image processing
Template matching is a technique in digital image processing for finding small parts of an image which match a template image. It can be used for quality control in manufacturing, navigation of mobile robots, or edge detection in images.
The main challenges in a template matching task are detection of occlusion, when a sought-after object is partly hidden in an image; detection of non-rigid transformations, when an object is distorted or imaged from different angles; sensitivity to illumination and background changes; background clutter; and scale changes.
Feature-based approach.
The feature-based approach to template matching relies on the extraction of image features, such as shapes, textures, and colors, that match the target image or frame. This approach is usually achieved using neural networks and deep-learning classifiers such as VGG, AlexNet, and ResNet. Convolutional neural networks (CNNs), which many modern classifiers are based on, process an image by passing it through different hidden layers, producing a vector at each layer with classification information about the image. These vectors are extracted from the network and used as the features of the image. Feature extraction using deep neural networks, like CNNs, has proven extremely effective and has become the standard in state-of-the-art template matching algorithms.
This feature-based approach is often more robust than the template-based approach described below. As such, it has become the state-of-the-art method for template matching, as it can match templates with non-rigid and out-of-plane transformations, as well as high background clutter and illumination changes.
Template-based approach.
For templates without strong features, or for when the bulk of a template image constitutes the matching image as a whole, a template-based approach may be effective. Since template-based matching may require sampling of a large number of data points, it is often desirable to reduce the number of sampling points by reducing the resolution of search and template images by the same factor before performing the operation on the resultant downsized images. This pre-processing method creates a multi-scale, or pyramid, representation of images, providing a reduced search window of data points within a search image so that the template does not have to be compared with every viable data point. Pyramid representations are a method of dimensionality reduction, a common aim of machine learning on data sets that suffer the curse of dimensionality.
Common challenges.
In instances where the template may not provide a direct match, it may be useful to implement eigenspaces to create templates that detail the matching object under a number of different conditions, such as varying perspectives, illuminations, color contrasts, or object poses. For example, if an algorithm is looking for a face, its template eigenspaces may consist of images (i.e., templates) of faces in different positions to the camera, in different lighting conditions, or with different expressions (i.e., poses).
It is also possible for a matching image to be obscured or occluded by an object. In these cases, it is unreasonable to provide a multitude of templates to cover each possible occlusion. For example, the search object may be a playing card, and in some of the search images, the card is obscured by the fingers of someone holding the card, or by another card on top of it, or by some other object in front of the camera. In cases where the object is malleable or poseable, motion becomes an additional problem, and problems involving both motion and occlusion become ambiguous. In these cases, one possible solution is to divide the template image into multiple sub-images and perform matching on each subdivision.
Deformable templates in computational anatomy.
Template matching is a central tool in computational anatomy (CA). In this field, a deformable template model is used to model the space of human anatomies and their orbits under the group of diffeomorphisms, functions which smoothly deform an object. Template matching arises as an approach to finding the unknown diffeomorphism that acts on a template image to match the target image.
Template matching algorithms in CA have come to be called large deformation diffeomorphic metric mappings (LDDMMs). Currently, there are LDDMM template matching algorithms for matching anatomical landmark points, curves, surfaces, and volumes.
Template-based matching explained using cross correlation or sum of absolute differences.
A basic method of template matching, sometimes called "Linear Spatial Filtering", uses an image patch (i.e., the "template image" or "filter mask") tailored to a specific feature of the search image that it is meant to detect. This technique can be easily performed on grey images or edge images, where the additional variable of color is either not present or not relevant. Cross correlation techniques compare the similarities of the search and template images. Their outputs should be highest at places where the image structure matches the template structure, i.e., where large search image values get multiplied by large template image values.
This method is normally implemented by first picking out a part of a search image to use as a template. Let formula_0 represent the value of a search image pixel, where formula_1 represents the coordinates of the pixel in the search image. For simplicity, assume pixel values are scalar, as in a greyscale image. Similarly, let formula_2 represent the value of a template pixel, where formula_3 represents the coordinates of the pixel in the template image. To apply the filter, simply move the center (or origin) of the template image over each point in the search image and calculate the sum of products, similar to a dot product, between the pixel values in the search and template images over the whole area spanned by the template. More formally, if formula_4 is the center (or origin) of the template image, then the cross correlation formula_5 at each point formula_1 in the search image can be computed as:formula_6For convenience, formula_7 denotes both the pixel values of the template image as well as its domain, the bounds of the template. Note that all possible positions of the template with respect to the search image are considered. Since cross correlation values are greatest when the values of the search and template pixels align, the best matching position formula_8 corresponds to the maximum value of formula_5 over formula_9.
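As a concrete sketch of the search just described, the following C++ function scans every valid offset and keeps the one with the largest sum of products. It is an illustrative implementation in the same style as the SAD example below, not code from a particular library; images are assumed to be row-major greyscale arrays.
#include &lt;cstddef&gt;
#include &lt;limits&gt;
#include &lt;vector&gt;

struct Match { std::size_t bestRow; std::size_t bestCol; float bestScore; };

// Brute-force cross correlation of template T over search image S.
Match crossCorrelate(const std::vector&lt;std::vector&lt;float&gt;&gt;&amp; S,
                     const std::vector&lt;std::vector&lt;float&gt;&gt;&amp; T) {
    const std::size_t S_rows = S.size(), S_cols = S[0].size();
    const std::size_t T_rows = T.size(), T_cols = T[0].size();
    Match best{0, 0, std::numeric_limits&lt;float&gt;::lowest()};
    for (std::size_t y = 0; y + T_rows &lt;= S_rows; y++)
        for (std::size_t x = 0; x + T_cols &lt;= S_cols; x++) {
            float score = 0.0f;
            // sum of products over the area spanned by the template
            for (std::size_t i = 0; i &lt; T_rows; i++)
                for (std::size_t j = 0; j &lt; T_cols; j++)
                    score += T[i][j] * S[y + i][x + j];
            if (score &gt; best.bestScore)
                best = Match{y, x, score};
        }
    return best;
}
Note that raw cross correlation is biased toward bright image regions, which is one reason normalized variants or the sum of absolute differences described below are often preferred in practice.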
Another way to handle translation problems on images using template matching is to compare the intensities of the pixels, using the sum of absolute differences (SAD) measure. To formulate this, let formula_10 and formula_11 denote the light intensity of pixels in the search and template images with coordinates formula_12 and formula_13, respectively. Then by moving the center (or origin) of the template to a point formula_1 in the search image, as before, the sum of absolute differences between the template and search pixel intensities at that point is:formula_14With this measure, the "lowest" SAD gives the best position for the template, rather than the greatest as with cross correlation. SAD tends to be relatively simple to implement and understand, but it also tends to be relatively slow to execute. A simple C++ implementation of SAD template matching is given below.
Implementation.
In this simple implementation, it is assumed that the method described above is applied to grey images; this is why "Grey" is used as the pixel intensity. The final position in this implementation gives the top-left location where the template image best matches the search image.
minSAD = VALUE_MAX;

// loop through the search image
for ( size_t x = 0; x &lt;= S_cols - T_cols; x++ ) {
    for ( size_t y = 0; y &lt;= S_rows - T_rows; y++ ) {
        SAD = 0.0;

        // loop through the template image, accumulating absolute differences
        for ( size_t j = 0; j &lt; T_cols; j++ )
            for ( size_t i = 0; i &lt; T_rows; i++ ) {
                pixel p_SearchIMG = S[y+i][x+j];
                pixel p_TemplateIMG = T[i][j];
                SAD += abs( p_SearchIMG.Grey - p_TemplateIMG.Grey );
            }

        // save the best found position (the lowest SAD is the best match)
        if ( minSAD &gt; SAD ) {
            minSAD = SAD;
            position.bestRow = y;
            position.bestCol = x;
            position.bestSAD = SAD;
        }
    }
}
One way to perform template matching on color images is to decompose the pixels into their color components and measure the quality of match between the color template and search image using the sum of the SAD computed for each color separately.
Speeding up the process.
In the past, this type of spatial filtering was normally only used in dedicated hardware solutions because of the computational complexity of the operation. However, this complexity can be lessened by performing the filtering in the frequency domain of the image, referred to as "frequency domain filtering"; this is done through the use of the convolution theorem.
Another way of speeding up the matching process is through the use of an image pyramid. This is a series of images, at different scales, which are formed by repeatedly filtering and subsampling the original image in order to generate a sequence of reduced resolution images. These lower resolution images can then be searched for the template (with a similarly reduced resolution), in order to yield possible start positions for searching at the larger scales. The larger images can then be searched in a small window around the start position to find the best template location.
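A minimal sketch of building one pyramid level, in the same illustrative style as the examples above (a plain 2x2 box filter is used for brevity; production code would normally apply a better low-pass filter before subsampling):
#include &lt;cstddef&gt;
#include &lt;vector&gt;

// Halve the resolution of a greyscale image by averaging 2x2 blocks.
std::vector&lt;std::vector&lt;float&gt;&gt; downsample2x(
        const std::vector&lt;std::vector&lt;float&gt;&gt;&amp; img) {
    const std::size_t rows = img.size() / 2;
    const std::size_t cols = img[0].size() / 2;
    std::vector&lt;std::vector&lt;float&gt;&gt; out(rows, std::vector&lt;float&gt;(cols));
    for (std::size_t y = 0; y &lt; rows; y++)
        for (std::size_t x = 0; x &lt; cols; x++)
            out[y][x] = 0.25f * (img[2*y][2*x] + img[2*y][2*x + 1] +
                                 img[2*y + 1][2*x] + img[2*y + 1][2*x + 1]);
    return out;
}
Matching is first run on the coarsest level of both pyramids, and each candidate position found there is refined within a small window at the next finer level.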
Other methods can handle problems such as translation, scale, image rotation and even all affine transformations.
Improving the accuracy of the matching.
Improvements can be made to the matching method by using more than one template (eigenspaces); these additional templates can have different scales and rotations.
It is also possible to improve the accuracy of the matching method by hybridizing the feature-based and template-based approaches. Naturally, this requires that the search and template images have features that are apparent enough to support feature matching.
Similar methods.
Other methods which are similar include 'Stereo matching', 'Image registration' and 'Scale-invariant feature transform'.
Examples of use.
Template matching has various applications and is used in such fields as face recognition (see facial recognition system) and medical image processing. Systems have been developed and used in the past to count the number of faces that walk across part of a bridge within a certain amount of time. Other systems include automated calcified nodule detection within digital chest X-rays.
Recently, this method was implemented in geostatistical simulation, where it can provide a fast algorithm.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S(x,y)"
},
{
"math_id": 1,
"text": "(x,y)"
},
{
"math_id": 2,
"text": "T(x_t,y_t)"
},
{
"math_id": 3,
"text": "(x_t,y_t)"
},
{
"math_id": 4,
"text": "(0,0)"
},
{
"math_id": 5,
"text": "T\\star S"
},
{
"math_id": 6,
"text": "(T\\star S)(x,y) = \\sum_{(x_t,y_t)\\in T} T(x_t,y_t) \\cdot S(x_t+x, y_t+y)"
},
{
"math_id": 7,
"text": "T"
},
{
"math_id": 8,
"text": "(x_m,y_m)"
},
{
"math_id": 9,
"text": "S"
},
{
"math_id": 10,
"text": "I_S(x_s,y_s)"
},
{
"math_id": 11,
"text": "I_T(x_t,y_t)"
},
{
"math_id": 12,
"text": "(x_s,y_s)"
},
{
"math_id": 13,
"text": "(x_t,y_t)"
},
{
"math_id": 14,
"text": " SAD(x, y) = \\sum_{(x_t,y_t)\\in T} \\left\\vert I_T(x_t,y_t) - I_S(x_t+x,y_t+y) \\right\\vert "
}
] | https://en.wikipedia.org/wiki?curid=977099 |
977221 | Trabecula | Tissue element that supports or anchors a framework of parts within a body or organ
A trabecula (pl.: trabeculae, from Latin for 'small beam') is a small, often microscopic, tissue element in the form of a small beam, strut or rod that supports or anchors a framework of parts within a body or organ. A trabecula generally has a mechanical function, and is usually composed of dense collagenous tissue (such as the trabecula of the spleen). It can be composed of other material such as muscle and bone. In the heart, muscles form trabeculae carneae and septomarginal trabeculae, and the left atrial appendage has a tubular trabeculated structure.
Cancellous bone is formed from groupings of trabeculated bone tissue. In cross section, trabeculae of a cancellous bone can look like septa, but in three dimensions they are topologically distinct, with trabeculae being roughly rod or pillar-shaped and septa being sheet-like.
When crossing fluid-filled spaces, trabeculae may offer the function of resisting tension (as in the penis, see for example trabeculae of corpora cavernosa and trabeculae of corpus spongiosum) or providing a cell filter (as in the trabecular meshwork of the eye).
Bone trabecula.
Structure.
Trabecular bone, also called cancellous bone, is porous bone composed of trabeculated bone tissue. It can be found at the ends of long bones like the femur, where the bone is actually not solid but is full of holes connected by thin rods and plates of bone tissue. The holes (the volume not directly occupied by bone trabeculae) are the "intertrabecular space", and are occupied by red bone marrow, where all the blood cells are made, as well as fibrous tissue. Even though trabecular bone contains a lot of intertrabecular space, its spatial complexity provides maximal strength with minimal mass. It is noted that the form and structure of trabecular bone are organized to optimally resist loads imposed by functional activities, like jumping, running and squatting. According to Wolff's law, proposed in 1892, the external shape and internal architecture of bone are determined by the external stresses acting on it. The internal structure of the trabecular bone first undergoes adaptive changes along the stress direction, and then the external shape of the cortical bone undergoes secondary changes. Finally, the bone structure becomes thicker and denser to resist external loading.
Because of the increased occurrence of total joint replacement and its impact on bone remodeling, understanding the stress-related and adaptive processes of trabecular bone has become a central concern for bone physiologists. To understand the role of trabecular bone in age-related bone structure and in the design of bone-implant systems, it is important to study the mechanical properties of trabecular bone as a function of variables such as anatomic site, bone density, and age. Mechanical factors including modulus, uniaxial strength, and fatigue properties must be taken into account.
Typically, the porosity of trabecular bone is in the range 75–95% and the density ranges from 0.2 to 0.8 g/cm3. Porosity can reduce the strength of the bone, but it also reduces its weight. The porosity, and the manner in which that porosity is structured, affect the strength of the material. The microstructure of trabecular bone is typically oriented, with the "grain" of porosity aligned in the direction in which mechanical stiffness and strength are greatest. Because of this microstructural directionality, the mechanical properties of trabecular bone are highly anisotropic. The range of Young's modulus for trabecular bone is 800 to 14,000 MPa and the failure strength is 1 to 100 MPa.
As mentioned above, the mechanical properties of trabecular bone are very sensitive to apparent density. The relationship between modulus of trabecular bone and its apparent density was demonstrated by Carter and Hayes in 1976. The resulting equation states:
formula_0
where formula_1 represents the modulus of trabecular bone in any loading direction, formula_2 represents the apparent density, and formula_3, formula_4 and formula_5 are constants depending on the architecture of the tissue.
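As a small numerical sketch of this power law (the constant values below are illustrative placeholders, not fitted tissue parameters):
#include &lt;cmath&gt;
#include &lt;cstdio&gt;

// Carter-Hayes relation: modulus E as a function of apparent density rho.
// The constants a, b and c depend on the architecture of the tissue.
double trabecularModulus(double rho, double a, double b, double c) {
    return a + b * std::pow(rho, c);
}

int main() {
    const double a = 0.0, b = 14000.0, c = 3.0;  // placeholder constants
    for (double rho = 0.2; rho &lt; 0.81; rho += 0.2)
        std::printf("rho = %.1f g/cm^3 -&gt; E = %.0f MPa\n",
                    rho, trabecularModulus(rho, a, b, c));
}
With these placeholder constants the computed modulus rises steeply with density, reflecting the strong sensitivity to apparent density discussed above.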
Using scanning electron microscopy, it was found that the variation in trabecular architecture with different anatomic sites lead to different modulus. To understand structure-anisotropy and material property relations, one must correlate the measured mechanical properties of anisotropic trabecular specimens with the stereological descriptions of their architecture.
The compressive strength of trabecular bone is also very important because it is believed that internal failure of trabecular bone arises from compressive stress. The stress-strain curves of both trabecular bone and cortical bone, across different apparent densities, exhibit three stages. The first is the linear region, where individual trabeculae bend and compress as the bulk tissue is compressed. The second stage, after yielding, is where trabecular bonds start to fracture, and the third stage is the stiffening stage. Typically, lower-density trabecular areas show a longer deformation plateau before stiffening than higher-density specimens.
In summary, trabecular bone is very compliant and heterogeneous. The heterogeneous character makes it difficult to summarize the general mechanical properties for trabecular bone. High porosity makes trabecular bone compliant and large variations in architecture leads to high heterogeneity. The modulus and strength vary inversely with porosity and are highly dependent on the porosity structure. The effects of aging and small cracking of trabecular bone on its mechanical properties are a source of further study.
Clinical significance.
Studies have shown that once a human reaches adulthood, bone density steadily decreases with age, to which loss of trabecular bone mass is a partial contributor. Loss of bone mass is defined by the World Health Organization as osteopenia if bone mineral density (BMD) is one standard deviation below mean BMD in young adults, and is defined as osteoporosis if it is more than 2.5 standard deviations below the mean. A low bone density greatly increases risk for stress fracture, which can occur without warning. The resulting low-impact fractures from osteoporosis most commonly occur in the upper femur, which consists of 25-50% trabecular bone depending on the region, in the vertebrae, which are about 90% trabecular, or in the wrist.
When trabecular bone volume decreases, its original plate-and-rod structure is disturbed; plate-like structures are converted to rod-like structures and pre-existing rod-like structures thin until they disconnect and resorb into the body. Changes in trabecular bone are typically gender-specific, with the most notable differences in bone mass and trabecular microstructure occurring within the age range for menopause. Trabeculae degradation over time causes a decrease in bone strength that is disproportionately large in comparison to volume of trabecular bone loss, leaving the remaining bone vulnerable to fracture.
With osteoporosis there are often also symptoms of osteoarthritis, which occurs when cartilage in joints is under excessive stress and degrades over time, causing stiffness, pain, and loss of movement. With osteoarthritis, the underlying bone plays a significant role in cartilage degradation. Thus any trabecular degradation can significantly affect stress distribution and adversely affect the cartilage in question.
Due to its strong effect on overall bone strength, there is currently strong speculation that analysis in patterns of trabeculae degradation may be useful in the near future in tracking the progression of osteoporosis.
Birds.
The hollow design of bird bones is multifunctional. It establishes high specific strength and supplements open airways to accommodate the skeletal pneumaticity common to many birds. The specific strength and resistance to buckling are optimized through a bone design that combines a thin, hard shell encasing a spongy core of trabeculae. The allometry of the trabeculae allows the skeleton to tolerate loads without significantly increasing the bone mass. The red-tailed hawk optimizes its weight with a repeating pattern of V-shaped struts that give the bones the necessary light weight and stiffness. The inner network of trabeculae shifts mass away from the neutral axis, which ultimately increases the resistance to buckling.
As in humans, the distribution of trabeculae in bird species is uneven and is dependent on load conditions. The bird with the highest density of trabeculae is the kiwi, a flightless bird. There is also uneven distribution of trabeculae within similar species such as the great spotted woodpecker or grey-headed woodpecker. After examining a microCT scan of the woodpecker's forehead, temporomandibulum, and occiput, it was determined that there are significantly more trabeculae in the forehead and occiput. Besides the difference in distribution, the aspect ratio of the individual struts was higher in woodpeckers than in other birds of similar size, such as the Eurasian hoopoe or the lark. The woodpeckers' trabeculae are more plate-like, while the hawk and lark have rod-like structures networked through their bones. The decrease in strain on the woodpecker's brain has been attributed to the higher quantity of thicker plate-like struts packed more closely together than in the hawk, hoopoe, or lark. Conversely, the thinner rod-like structures would lead to greater deformation. A destructive mechanical test with 12 samples showed that the woodpecker's trabeculae design has an average ultimate strength of 6.38 MPa, compared to the lark's 0.55 MPa.
Woodpeckers' beaks have tiny struts supporting the shell of the beak, but to a lesser extent compared to the skull. As a result of fewer trabeculae in the beak, the beak has a higher stiffness (1.0 GPa) compared to the skull (0.31 GPa). While the beak absorbs some of the impact from pecking, most of the impact is transferred to the skull, where more trabeculae are available to absorb the shock. The ultimate strengths of woodpeckers' and larks' beaks are similar, suggesting that the beak plays a lesser role in impact absorption. One measured advantage of the woodpecker's beak is the slight overbite (the upper beak is 1.6 mm longer than the lower beak), which offers a bimodal distribution of force due to the asymmetric surface contact. The staggered timing of impact induces a lower strain on the trabeculae in the forehead, occiput, and beak.
Trabecula in other organisms.
The larger the animal, the higher the load forces on its bones. Trabecular bone increases stiffness by increasing the amount of bone per unit volume or by altering the geometry and arrangement of individual trabeculae as body size and bone loading increase. Trabecular bone scales allometrically, reorganizing the bones' internal structure to increase the ability of the skeleton to sustain the loads experienced by the trabeculae. Furthermore, scaling of trabecular geometry can moderate trabecular strain. Load acts as a stimulus to the trabeculae, changing their geometry so as to sustain or mitigate strain loads. Using finite element modelling, a study tested four different species under an equal apparent stress (σapp) and showed that trabecular scaling in animals alters the strain within the trabeculae. The strain within trabeculae from each species varied with the geometry of the trabeculae: at a scale of tens of micrometers, approximately the size of osteocytes, thicker trabeculae exhibited less strain. The relative frequency distributions of element strain experienced by each species show a higher elastic modulus of the trabeculae as species size increases.
Additionally, trabeculae in larger animals are thicker, further apart, and less densely connected than those in smaller animals. Intra-trabecular osteon can commonly be found in thick trabeculae of larger animals, as well as thinner trabeculae of smaller animals such as cheetah and lemurs. The osteons play a role in the diffusion of nutrients and waste products of osteocytes by regulating the distance between osteocytes and bone surface to approximately 230 μm.
Due to an increased reduction of blood oxygen saturation, animals with high metabolic demands tend to have a lower trabecular thickness (Tb.Th) because they require increased vascular perfusion of trabeculae. The vascularization by tunneling osteons changes the trabecular geometry from solid to tube-like, increasing bending stiffness for individual trabeculae and sustaining blood supply to deep tissue osteocytes.
Bone volume fraction (BV/TV) was found to be relatively constant for the variety of animal sizes tested. Larger animals did not show a significantly larger mass per unit volume of trabecular bone. This may be due to an adaptation which reduces the physiological cost of producing, maintaining, and moving tissue. However, BV/TV showed significant positive scaling in avian femoral condyles. Larger birds present decreased flight habits due to avian BV/TV allometry. The flightless kiwi, weighing only 1–2 kg, had the greatest BV/TV of the birds tested in the study. This shows that trabecular bone geometry is related to ‘prevailing mechanical conditions’, so the differences in trabecular geometry in the femoral head and condyle could be attributed to different loading environments of coxofemoral and femorotibial joints.
The woodpecker's ability to resist repetitive head impact is correlated with its unique micro/nano-hierarchical composite structures. The microstructure and nanostructure of the woodpecker's skull consist of an uneven distribution of spongy bone and the organizational shape of individual trabeculae. This affects the woodpecker's mechanical properties, allowing the cranial bone to withstand a high ultimate strength (σu). Compared to the cranial bone of the lark, the woodpecker's cranial bone is denser and less spongy, having a more plate-like structure rather than the more rod-like structure observed in larks. Furthermore, the woodpecker's cranial bone is thicker and contains more individual trabeculae. Relative to the trabeculae in the lark, the woodpecker's trabeculae are more closely spaced and more plate-like. These properties result in a higher ultimate strength of the cranial bone of the woodpecker.
History.
"Trabecula" is the diminutive form of Latin "trabs", meaning a beam or bar. In the 19th century, the neologism "trabeculum" (with an assumed plural of "trabecula") became popular, but is less etymologically correct. "Trabeculum" persists in some countries as a synonym for the trabecular meshwork of the eye, but this can be considered poor usage on the grounds of both etymology and descriptive accuracy.
Other uses.
For the skull development component, see trabecular cartilage.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " E = a + b\\cdot\\rho^c "
},
{
"math_id": 1,
"text": "E"
},
{
"math_id": 2,
"text": "\\rho"
},
{
"math_id": 3,
"text": "a,"
},
{
"math_id": 4,
"text": "b,"
},
{
"math_id": 5,
"text": "c"
}
] | https://en.wikipedia.org/wiki?curid=977221 |
977527 | Truncated power function | In mathematics, the truncated power function with exponent formula_0 is defined as
formula_1
In particular,
formula_2
where the exponent is interpreted as a conventional power.
Relations.
Truncated power functions can be used for the construction of B-splines. The function formula_3 is the Heaviside step function, and the indicator function of a half-open interval is a difference of truncated powers:
formula_4
where formula_5 denotes the indicator function.
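A minimal C++ sketch of the definition, added for illustration (the function names are arbitrary):
#include &lt;cmath&gt;

// Truncated power function: x^n for x &gt; 0, and 0 for x &lt;= 0.
double truncatedPower(double x, int n) {
    return x &gt; 0.0 ? std::pow(x, n) : 0.0;
}

// The special case n = 0 gives the step function that is 1 for x &gt; 0
// and 0 for x &lt;= 0, matching the Heaviside-type relation above.
double heavisideStep(double x) {
    return truncatedPower(x, 0);
}
For example, heavisideStep(2.0) returns 1 and truncatedPower(-1.0, 3) returns 0. | [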
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "x_+^n = \n\\begin{cases} \nx^n &:\\ x > 0 \\\\\n0 &:\\ x \\le 0.\n\\end{cases}\n"
},
{
"math_id": 2,
"text": "x_+ = \n\\begin{cases} \nx &:\\ x > 0 \\\\\n0 &:\\ x \\le 0.\n\\end{cases}\n"
},
{
"math_id": 3,
"text": "x \\mapsto x_+^0"
},
{
"math_id": 4,
"text": "\\chi_{[a,b)}(x) = (b-x)_+^0 - (a-x)_+^0"
},
{
"math_id": 5,
"text": "\\chi"
}
] | https://en.wikipedia.org/wiki?curid=977527 |
9775327 | Earthen plaster | Earthen plaster is made of clay, sand and often mixed with plant fibers. The material is often used as an aesthetically pleasing finish coat and also has several functional benefits. This natural plaster layer is known for its breathability, moisture-regulating ability and ability to promote a healthy indoor environment. In the context of stricter indoor air quality regulations, earthen plaster shows great potential because of its properties as a building material.
Physical composition.
All plasters and stuccos have several common features: they all contain a structural component, a binding element, and some form of fiber. Usually the term plaster refers to a wall covering made from earth, lime or gypsum, while stucco uses a cement or synthetic binding element.
Clay: the binding agent.
Clay, a crucial soil component with particles smaller than 2 micrometers, exhibits glue-like properties in the presence of water due to its extremely small particle size and high surface-to-volume ratio. This allows it to bind effectively with sand and fibers, playing a key role in holding the mixture together and securing it to the wall. Additionally, when clay is wet, its plasticity enhances the workability of plaster mixtures.
Within the domain of earthen building materials, clay particles act as primary binders. These particles not only provide workability during the plastic phase but also ensure cohesion after drying, contributing to the structural integrity of the construction. Notable clay minerals involved in this process include montmorillonite, chlorite and illite, each adding distinct properties to the composition. Despite the chemical variation among clays, their prevailing crystalline phases primarily consist of phyllosilicates, such as the mentioned clay minerals. The colloidal component further includes poorly crystalline hydrous aluminum silicates, along with iron and aluminum oxides.
The clay proportion significantly influences mixture characteristics, impacting strength, shrinkage, and mixing water requirements. However, it's essential to note that the recommended maximum clay content in the earth mixture is 25%.
Sand: structural strength.
Sand, the granular skeletal component, provides structure, durability, and volume to earthen plasters. Consisting of tiny mineral particles derived from its original rock material, sand is predominantly made up of silicon dioxide (quartz) and is recognized as a non-reactive substance.
Sand is incorporated into the plaster mixture not just for structural purposes but also plays a vital role in minimizing the likelihood of cracks during the drying process. Moreover, the presence of sand not only helps in preventing cracks but also results in a reduction in the sorptive capacity of the mixture. This dual impact indicates the careful balancing act required in soil composition to achieve both structural integrity and controlled moisture absorption.
Given that sand naturally occurs in various subsoils, there's a possibility that all the necessary sand is already inherent in the soil.
Fiber: tensile strength and reinforcement.
In the context of improving adhesion and compatibility with different substrates, fibers may be introduced to earthen plasters without compromising their environmental profile. Various natural fibers, such as dry straw, hemp, cattails, coconut fiber, shells, and animal hair, prove to be suitable choices for reinforcing earthen plasters.
Research indicates that the inclusion of natural fibers moderately increases open porosity, facilitating improved pore interconnection. A meshwork is formed within the plaster, enhancing cohesion and providing flexibility to the dried mixture.
Clay tends to shrink and crack during drying; the added fibers effectively counteract these issues. The presence of fibers in the mixture significantly reduces drying shrinkage, with larger fibers exhibiting a more pronounced effect than finer ones. This reduction is attributed to the increased water content required for workability when adding more and finer fibers.
Exploring the physical performance changes resulting from the addition of natural fibers reveals a reduction in material density. The bulk density decreases with higher fiber content, while adhesion strength experiences a positive trend with the addition of fibers, particularly when more and finer fibers are incorporated.
The addition of fibers to plasters is observed to have various benefits, including reduced density, minimized shrinkage cracks, and improved adhesion strength. While the influence on compressive strength and tensile strength may vary depending on the base materials and fibers, the overall conclusion of the research affirms the positive impact of adding fibers to earthen plasters. This enhancement encompasses reduced heat conduction, decreased drying shrinkage, and an improved hygric buffering capacity.
Water: viscosity.
Water plays a crucial role in the formulation and application of clay plaster, impacting both its workability and structural integrity. As mentioned earlier, clay exhibits adhesive properties in the presence of water, emphasizing water's vital role in providing structural support. The amount of water added is determined by the specific characteristics of the clay and the overall mixture proportions.
However, the balance between water content and plaster performance becomes apparent in the compressive strength of the material: an increase in initial water content can negatively affect compressive strength. For optimal plasticity, the water requirement for plasters should fall within the liquid and plastic limits of the soil. Opting for a water-clay content close to the liquid limit can enhance ease of application and mitigate surface cracking. The recommended approach is to maintain an initial water content between 30% and 40% of the clay's weight.
It is noteworthy that as the clay content in the mixture increases, so does the demand for water. However, a delicate equilibrium must be maintained to prevent the shrinkage cracks associated with higher water content. Achieving an optimal water-clay ratio is crucial to utilizing the benefits of clay plaster while preserving its structural integrity.
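As a rough numerical illustration of these rules of thumb (the batch size and clay fraction below are assumptions chosen for demonstration, not a recipe):
#include &lt;cstdio&gt;

int main() {
    // Assumed trial batch: 100 kg of dry mixture with 20% clay by weight,
    // staying under the recommended maximum clay content of 25%.
    const double dryMass = 100.0;      // kg
    const double clayFraction = 0.20;  // should not exceed 0.25
    const double clayMass = dryMass * clayFraction;
    const double sandAndFiberMass = dryMass - clayMass;

    // Recommended initial water content: 30% to 40% of the clay's weight.
    const double waterMin = 0.30 * clayMass;
    const double waterMax = 0.40 * clayMass;

    std::printf("clay: %.1f kg, sand and fiber: %.1f kg\n",
                clayMass, sandAndFiberMass);
    std::printf("water: %.1f to %.1f kg\n", waterMin, waterMax);
}
For a 100 kg batch at 20% clay this gives 20 kg of clay and 6 to 8 kg of mixing water.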
Additives.
Additives can be incorporated into the composition of clay, sand, water, and fiber to enhance various properties of the plaster. Depending on the application, these additives may be selectively applied to the final coat or included in all layers. Many commonly used additives originate either from natural sources or result from industrial and agricultural processes, providing a cost-effective means to refine the characteristics of clay plaster. The diversity of additives allows for their blending in various proportions, each inducing distinct alterations in the plaster. Due to the absence of a comprehensive theoretical model explaining these effects, predicting the impact of a specific additive in a particular plaster mixture relies on empirical testing for each combination.
The primary utilization of additives revolves around addressing inherent weaknesses in clay plaster, such as dry shrinkage, mechanical strength, or adhesion. Furthermore, certain additives aim to enhance properties crucial for indoor applications, including thermal resistance and moisture buffering capacity.
Biopolymers.
Biopolymers are a broad group of additives that are produced from plants or animals. They can serve many purposes: some biopolymers can act as a glue holding the matrix together, while others help fill cavities and supplement the particle distribution, both will increase the cohesion. This can cause multiple benefits: increased density often leads to an increase in overall strength, while less porous plasters prove more water resistant and durable. Some biopolymers also influence the viscosity and processability of the plaster, requiring less water and therefore reducing the dry shrinkage.
Some of the most common biopolymer additives are wheat flour paste, manure, cactus juice, casein (milk protein) and various natural oils such as linseed oil. Other additives include stearate, tallow, tannin, leaves and bark of certain trees, natural gums and glues, kelp, powdered milk, or the blood of livestock.
Flour paste.
Cooked flour paste is a cheap natural glue that is easy to make from common ingredients. The water and flour slurry is cooked until the gluten binds the elements of the mixture, creating a durable glue. In plaster, the flour paste serves as a binding agent and a hardener.
Manure.
Manure serves as a binding agent and gives plaster more body. Manure also contains small natural fibers that provide additional tensile strength as well as reduce cracking and water erosion. Different types of manure have different effects. Horse manure has a high microfiber content, but cow manure has more hardening enzymes. People have reported success with llama and alpaca dung. Manure should be fresh or fermented when mixed with plaster, as composted manure loses its enzymes and adhesive qualities. Manure should be sifted before use.
Prickly pear cactus juice.
The liquid from prickly pear cactus used to be one of the most common additives in the Americas.
The juice from the prickly pear cactus leaf pads will serve many functions. According to some sources, it helps the plaster set and increases its stickiness or adhesion. Cactus juice also serves as a stabilizer in that it helps make earthen plasters more water-resistant and more durable. It also prevents dusting.
Cactus juice can increase plaster's workability and its ability to be formed into the desired shape. Workability depends on the water content, the shape and size distribution of its aggregate (such as rock, sand, natural fiber, etc.), the age of the plaster, and the amount of other natural binder(s) (such as lime, wheatpaste, cactus juice, hardening vegetable oil, casein and other proteins, etc.) Altering the water content, changing the aggregate mix, soaking the clay, or changing the binders will increase or decrease the plaster's workability. Excessive water will lead to increased bleeding (surface water) and/or segregation of aggregates (when the natural binder and aggregates start to separate), with the resulting plaster having reduced quality. The use of an aggregate with an undesirable gradation can result in a very harsh mix design with a very low workability, which cannot be readily made more workable by addition of reasonable amounts of water or binder.
Cactus juice works well because it contains pectin, a water-soluble long-chain carbohydrate that acts as the binding agent to increase the adhesion of an earthen plaster. Pectin is also responsible for increasing the water resistance of an earthen plaster and has been used to augment lime plasters in both Mexico and the southwestern United States for hundreds of years.
Cactus juice is extracted by immersing cut leaves in water for as long as two weeks.
Other additives.
Industrial waste.
Certain industrial byproducts can be added to attain better mechanical properties, namely strength and shrinkage. Researchers have tested fly ash, limestone sludge, hydraulic lime and dextrin and their effects on the plaster. The addition of limestone sludge and hydraulic lime resulted in reduced shrinkage during drying, which helps prevent cracks and improves adhesion to the application surface. Fly ash and dextrin both improved the mechanical strength of the plaster. It should be noted, however, that the dosage proved to be very important for the final properties, with each additive showing different results depending on the amount that was mixed in.
Paper waste.
Paper waste can also be included in the plaster to improve its hygrothermal properties. Because it is a waste product, it is often very cheap and broadly available. Research shows that the addition of paper waste improved the moisture buffering capacity of clay plaster while also lowering its density. This lowering in density also means that the plaster becomes a better thermal insulator.
Interior earthen plaster.
Earthen plasters are becoming more popular in interior design due to their sustainable and eco-friendly characteristics. The plaster has a positive influence on thermal comfort, indoor air quality and energy efficiency. During the drying process, however, there is shrinkage, which affects the plaster's ability to adhere properly to the surface. This can be solved by using different types of wire meshes, by using composite plasters, or with other additives. Another possibility is to paint a mixture of sand and wheat paste onto the surface on which the plaster will be applied.
There are different ways to apply the earthen plaster. The plaster can be applied in three coats; this is the Spanish process known as 'alisando'. The first layer is the scratch coat, which provides adherence for the second layer, the brown coat or levelling coat. The final layer is the color coat or finishing coat. This layer is usually clay with sand but without fiber. Other manufacturers only apply the color/finishing coat. This single layer provides fewer of the advantages discussed later, but it still has advantages compared to gypsum plaster.
Effect on indoor climate.
In principle, all wall coverings have an effect on the room climate: vapour-permeable coatings designed to be capillary-conductive allow the wall layers behind them to absorb moisture and release it again. Due to the ability of clay plasters to absorb moisture, a buffer is created on the wall, which absorbs moisture and releases it again when the air humidity is low. The area and the thickness of the plastered wall have the greatest influence on the ability of the clay plaster to act as a climatic buffer. The majority of the moisture is kept in the top layer of the clay plaster, so this layer is the most important for the climate-buffer effect. Clay also has a high specific heat capacity, which allows the clay plaster to compensate for temperature fluctuations in the room.
Moisture buffering.
Moisture has a significant impact on the indoor environment of a building. Excessive moisture can lead to mold growth, poor air quality, and structural damage. Conversely, a too dry environment can cause discomfort, affecting both health and material preservation. Effective moisture regulation is therefore crucial for a healthy, sustainable, and comfortable living environment. Clay is renowned for its remarkable ability to regulate moisture, a property known as moisture buffering.
Clay possesses the unique capability to both adsorb and absorb water. Adsorption refers to the retention of moisture on the surface of clay particles, while absorption pertains to the uptake of moisture into the material's pores. As the humidity in a space rises, clay can absorb excess moisture. At lower humidity levels, clay gradually releases the absorbed moisture through evaporation.
The porous nature of clay and its high specific surface area contribute to its moisture-buffering properties. The pores act as reservoirs where moisture can be retained and released. Additionally, the clay content plays a role in moisture buffering. Clay naturally attracts and holds water molecules. Consequently, a higher clay content results in enhanced buffering, although it does not necessarily translate to improved clay plaster. However, clay has the drawback of shrinking as it dries. A higher clay content in the clay plaster may lead to increased shrinkage, potentially causing crack formation.
Not only the clay content but also the mineralogical composition plays a crucial role. Clay is considered hygroscopic, indicating its ability to absorb water from the surrounding environment. This contributes to the regulation of relative humidity in a space. However, different clay minerals exhibit varying hygroscopicity. For instance, the montmorillonite clay mineral demonstrates high hygroscopicity, whereas kaolinite exhibits low hygroscopicity. Clay plasters with different compositions and ratios will consequently have distinct moisture-buffering capacities.
Ozone.
General information.
Ozone reacts with many indoor materials, as well as with compounds in the indoor air. Reactions between ozone and building surfaces can generate and release aerosols and irritating, carcinogenic gases, which may be harmful to building occupants. Indoor air quality is very important, because most people in developed countries spend almost 90% of their lives indoors. In a human body, ozone reacts with tissue cells, promoting inflammation and increased permeability of the epithelial lining fluid, which allows greater penetration of pollutants from lung air into the bloodstream. Several studies show that there are some PRMs, passive removal materials, that passively, without using energy, remove ozone from the indoor air without generating harmful byproducts. Clay wall plaster appears to be a promising passive removal material for ozone, due to its relatively high ozone reaction probability.
Production of ozone.
Ozone is produced outdoors, but there are also sources of ozone in indoor environments, such as laser printers and photocopy machines. Various measurements show that the indoor ozone concentration closely tracks the outdoor concentration and is dependent on the air exchange rate. The indoor ozone concentration divided by the outdoor ozone concentration (I/O) remains relatively constant.
Indoor air pollution.
Many sources contribute to indoor air pollution. Some pollutants originate outdoors, while others originate from indoor materials. Outdoor air pollutants include ozone, sulfur oxides, nitrogen oxides, benzene and lead compounds. The pollutants originating from the interior of the building are compounds and chemicals released from indoor materials, and pollutants resulting from human and machine activities; these are commonly examined in three categories: biological pollutants, gases and chemicals, and particles and fibers. There are two types of indoor air pollutants. Primary pollutants, or primary VOCs, are emitted directly from a surface. Secondary pollutants, or secondary VOCs, are caused by gas-phase transformations or surface oxidation. An important difference between primary and secondary VOCs is their temporal evolution: the emission of primary VOCs declines at a predictable rate and reduces to low levels within a year, whereas the emission of secondary VOCs is more prolonged and can continue for several years. Examples of secondary pollutants, which are more damaging to human health, are aldehydes, ketones and secondary organic aerosols (SOAs).
Passive removal materials are an alternative method for removing ozone from indoor environments. The characteristics of a passive removal material are: removing ozone from indoor environments without consuming energy, removing ozone over a long period, minimal reaction product formation, and large surface area coverage while maintaining aesthetic appeal. PRMs for ozone are inorganic materials, including clay-based bricks and plasters.
Ozone reactions.
There are two types of reactions that take place. Gas-phase, or homogeneous, reactions take place between ozone and some chemicals that are emitted into indoor air, for example alkenes emitted from building materials, furniture, and numerous cleaning and consumer products. These homogeneous reactions can produce secondary organic aerosols (SOAs) as well as a range of gaseous oxidized products. There are also surface, or heterogeneous, reactions that can occur on furniture, dust, and human skin.
These reactions can produce C1-C10 carbonyls, dicarbonyls and hydroxycarbonyls, which may be irritating or harmful to building occupants.
Effect of indoor air pollution on human health.
Important things to consider when discussing the impact of indoor air pollution on human health are the route of exposure to the pollutants, the interaction of the pollutants with their surrounding environment, and the identification of the source. The nose and lungs are the parts of the human body most exposed to indoor air pollution, which is logical, as the respiratory system is most affected by indoor air pollution. The size of the pollutants is also an important factor: particles with a diameter greater than 10 microns are trapped in the mouth and nose, while smaller particles can pass through the mouth and nose into the respiratory system. The smallest particles, of 2-3 microns, can pass deep into the lungs and stick to the alveoli.
Quantified parameters ozone.
Deposition velocity.
Deposition velocity is a mass-transfer coefficient that relates the bulk-air concentration to the flux of ozone to a surface.
Clay wall plaster and clay wall paint have a very high deposition velocity. In general, fleecy and porous materials exhibit higher deposition velocities than smooth sealed surfaces. The high deposition velocities exhibited by clay wall plaster or paint may be due to iron or aluminum catalyzed decomposition of ozone.
formula_0
Reaction probability.
Reaction probability is the probability of reaction when an ozone molecule collides with a surface. In the expression below, formula_1 is the Boltzmann velocity (formula_2 for ozone at 296 K).
formula_3
Reaction probabilities for clay paint in comparison to clay plaster are higher. The clay paint is statistically more reactive than the clay plaster because it contains cellulose and alcohol esters, two components who reacts with ozone. Reaction probabilities of clay plaster are due to its major component, kaolinite. Kaolinite is a hydrous aluminosilicate mineral that comprises 50% of the clay plaster. Consistent with the trend for deposition velocity, fleecy and porous materials exhibit higher reaction probabilities than smooth, non-porous materials.
Yield.
Yield, or molar yield, is defined as the molar emission rate of carbonyl compounds formed by reactions between the material and ozone, divided by the molar flux of ozone to the material surface.
formula_4
Of the highly reactive materials, only clay-based wall plaster combines very low yields with high ozone removal rates.
Clay wall plaster exhibited very high deposition velocities and negligible yields. Clay and materials containing clay (e.g. bricks) consume ozone readily, perhaps because of a reaction catalyzed by metals present in the clay. Clay plaster, with its very high ozone uptake rates, has a certain surface roughness and porosity. Several studies propose that the high aluminum or iron content and high surface area combine to make clay plaster a particularly good ozone-scavenging building material. Field tests show that materials such as clay paint and carpet become less reactive over intervals of years, probably due to slow oxidation of organic coatings. This process is named "ozone aging". Clay does not appear to become substantially less reactive. Clay plaster has the ability to "regenerate" after periods without much ozone exposure. Materials composed of clay are not necessarily good at removing ozone: even though they are composed of clay, ceramic tiles exhibit low deposition velocities.
Surface removal.
The ozone surface removal rate of a material (formula_5) depends on its ozone deposition velocity (formula_6) and on the surface area (A) and volume (V) of the enclosed space in which the material is placed.
formula_7
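To make the scale of this effect concrete, the following sketch evaluates the surface removal rate and a simple steady-state indoor/outdoor ratio for assumed room dimensions and an assumed deposition velocity (all numbers are illustrative placeholders, not measured values):
#include &lt;cstdio&gt;

int main() {
    // Assumed values for illustration only.
    const double v_d = 1.5;   // ozone deposition velocity, m/h
    const double A   = 30.0;  // plastered surface area, m^2
    const double V   = 50.0;  // room volume, m^3
    const double aer = 0.5;   // air exchange rate, 1/h

    // First-order surface removal rate: lambda = v_d * A / V.
    const double lambda = v_d * A / V;

    // Simple steady-state mass balance: surface removal competes with
    // air exchange, lowering the indoor/outdoor ozone ratio (I/O).
    const double io = aer / (aer + lambda);

    std::printf("lambda = %.2f per hour, I/O = %.2f\n", lambda, io);
}
Under these assumed values the plastered surface alone would lower the indoor/outdoor ozone ratio to roughly one third.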
There is emerging evidence suggesting alternative indoor materials that can be used to reduce indoor ozone concentration (gas phase) with minimum oxidation product consequences. These materials are termed PRM's. It is suggested that PRM ozone removal effectiveness could be as high as 80% depending on the panel surface area and air speed across the panel.
Emission rates.
Clay-based paint has higher emission rates of <chem>C5-C10</chem> n-aldehydes compared to clay plaster. The presence of <chem>C5-C10</chem> n-aldehyde, benzaldehyde and tolualdehyde reaction products led to a lower assessment of perceived air quality.
Relative humidity and temperature.
Evidence has shown that the indoor air parameters relative humidity, temperature and ozone concentration influence test results. Higher ozone concentrations can result in lower reaction probabilities and lower yields. Reaction probabilities can also fluctuate if there are modifications of the material surface (e.g. deposition of skin oils or cooking oils), which increase reactivity. At higher temperatures the deposition velocity of ozone is slightly higher. The higher the relative humidity, the larger the deposition velocities of ozone to different surfaces and the larger the surface removal rate. The more hydrophilic the surface, the larger the effect.
“Air purifying”.
Room ozonization has been used to freshen indoor air for more than 100 years. Several companies offer ozone generators that claim to remove chemical pollutants from indoor air. They claim that ozone can oxidize airborne gases and particulates to simple carbon dioxide and water vapor, and that it can also remove unpleasant odors. Several studies have shown that using an ozone generator to improve indoor air quality is not the best option. Ozone concentrations of less than 100 ppb have negligible effects on the majority of gaseous pollutants. There are some indoor air pollutants that react with ozone at a meaningful rate, but these compounds typically represent less than 10% of the total gas-phase pollutants. It can also be dangerous to use ozone generators: it is difficult to control the ozone concentration, and too-high concentrations can lead to health complaints.
Conclusion.
Clay is a very promising passive removal material for ozone in many studies. Given the very high deposition velocities, it would substantially reduce indoor ozone concentrations without generating byproducts. Reaction probabilities of clay plaster are due to its major component, kaolinite. Kaolinite is a hydrous aluminosilicate mineral that comprises 50% of the clay plaster. Clay wall plaster can help to improve the indoor air quality, which is very important nowadays, because most of the people spend 90% of their lives indoors.
Perception of comfort.
Manufacturers of clay plasters often seek to draw attention to the positive impact of their products on air quality, emphasizing the improvements that clay plasters could bring about. In scientific literature, references to the favorable influence of clay plasters on indoor air quality are often superficial, with many studies primarily focusing on the hygroscopic behavior or <chem>CO2</chem> absorption of clay plasters.
A study by Darling et al. (2012) concluded that clay plaster has a positive impact on indoor air quality, especially in the presence of ozone, with or without the presence of carpet. The highest levels of air quality acceptance were observed when only clay plaster was present or when both clay plaster and carpet were present without ozone. Introducing clay plaster into a less favorable situation (carpet + ozone) resulted in significantly lower concentrations of both ozone and aldehyde, thereby significantly improving indoor air quality.
It is crucial to maintain a critical stance regarding the claim that clay plasters should improve indoor air quality. While the research by Darling et al. (2012) suggests positive results, additional research is necessary to confirm these findings.
Minerals.
Components.
Earthen plasters consist of various clay minerals that can influence the properties and performance of the plaster in numerous ways. Those minerals include kaolinite, halloysite, montmorillonite, bentonite, saponite, vermiculite, illite, sepiolite and palygorskite, as well as zeolites (more often used as an additive), chlorite and smectite.
The global distribution of clay minerals in modern oceans reveals clear patterns, with kaolinite and smectite concentrated in tropical zones, while chlorite and illite predominate in temperate and high latitudes. Smectite, which has a high absorption capacity for organic substances, significantly influences the adsorption of organic materials in sedimentary environments, potentially impacting geological phenomena such as hydrocarbon formation and oil and gas exploration. Some of these minerals provide certain advantages with regard to indoor air quality. The most important ones are listed below.
Bentonites: Acid-activated bentonites exhibit increased gas adsorption, especially of <chem>SO2</chem>, due to crucial surface properties. However, the process may lower pH and release cations.
Kaolinites: Although smectites generally have superior gas adsorption properties, kaolinites can be enhanced through modifications, such as exchanging amorphous kaolinite with alkali metals.
Zeolites: Zeolites, serving as molecular sieves, are utilized for selective adsorption based on size, with applications in <chem>CO2</chem> capture and water purification.
Pillared Clays: With thermal stability and a large surface area, pillared clays are deployed for gas adsorption, including hydrogen (<chem>H2</chem>) and nitrogen oxides (<chem>NOx</chem>).
When adsorbing pollutants, the embedded clay forms bonds with other substances in three ways: through Si-O bonds, OH groups, and van der Waals forces. Clay minerals, including smectite, illite, and kaolinite, exhibit different clay layers with a specific configuration of Si-O groups and OH groups, determining both physical and chemical properties.
These natural adsorption qualities can be enhanced by improving certain parameters, in particular the textural qualities: increasing the porosity and specific surface area. This can be achieved by an acid treatment or by adding additives such as other minerals or organic substances.
Additives.
Zeolites.
Zeolites have a proven effect on the removal efficiency of VOCs in indoor air. Especially when a photocatalyst is used, the adsorption reaction can be very effective in improving indoor air quality and olfactory comfort (up to 90% removal efficiency according to some studies). As a possible additive to plaster, zeolites have also given very promising results in multiple studies, particularly when natural zeolites are used. Nevertheless, part of the VOCs adsorbed by the plaster was released again later; the higher the temperature, the larger this effect appears to be.
Activated carbon.
Activated carbon can also be embedded into a plaster matrix and has shown positive effects on indoor air quality. The plaster matrix can be used to form a modular sink that can be installed and removed very easily. A big advantage of this is that the existing structure needs only minimal adaptations and renovations to gain this passive effect. The removal rate of a plaster containing activated carbon increases when the external surface and the thickness of the sink increase. Increasing the activated carbon concentration of the plaster will also increase the removal efficiency to some extent; above a certain concentration of carbon this rate will not increase any further (above 20% for low concentrations and above 50% for high concentrations). Unfortunately, in the case of activated carbon, the adsorption of VOCs is not easily reversed, meaning the adsorption potential of an activated carbon-based plaster is finite. For now, the active lifetime of such a plaster remains unknown; some studies suggest a possible lifetime of 20 years, but this could be a large exaggeration. More research on this is necessary.
Other possibilities.
Various modifications of clay minerals, including organomontmorillonites, demonstrate reversible <chem>CO2</chem> capture at room temperature. Inorganic-organic composite sorbents are also suitable for <chem>CO2</chem> capture. Other studies have also shown that clay as a component can have a positive influence on the removal rate of organic acids. More research in this area is necessary.
Advantages and disadvantages of earthen plaster.
Advantages.
Earthen plasters have many advantages. Consisting mainly of clay, sand and possibly straw, they are a 100% renewable product and contain no harmful substances. Compared to other wall coverings they are less toxic and less energy-intensive, as little energy is required in extraction, production and processing, making them attractive to environmentally conscious people. Moreover, they are easy to repair and cheap. In addition, earthen plasters can improve perceived air quality (PAQ). They have a low impact on the environment and the ability to regulate the hygrothermal conditions of the indoor environment, which can lead to better public health. When decomposed, clay plaster leaves no ecological footprint, and since it contains no synthetic additives it can be recycled and reused indefinitely: it is a circular product. Loam can also play its role of humidity regulator, and earthen plasters can be applied to most types of supports, in renovation or in new construction. Earthen plaster is a water-vapor-permeable material with a high heat storage/release capacity, contributing to thermal comfort, improved air quality and energy efficiency. It is a bio-based material with high breathability due to its hygroscopic porous structure, which also contributes to moisture buffering.
Disadvantages.
The downsides, however, are their limited mechanical strength and resistance to the action of climatic factors, and their reduced degree of compatibility with the classic finishing materials currently available on the market. In addition, earthen plasters have a high risk of fissuring during the drying process due to their significant shrinkage and high sensitivity to water. If the mixture does not have the right proportions of components, many other problems can occur, such as dust formation and cracks. They are often more labor-intensive (and hence costly) than other forms of wall covering and have a granular texture which stains on contact. The odor left by the material is also often found disturbing. Finally, many issues surrounding earthen plasters are still undetermined, and published claims often read as assumptions rather than facts. The material thus has potential, but a lot of testing remains to be done around it.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "v_d=\\frac{\\lambda V}{A}\\Bigl(\\frac{C_{in}}{C_e}-1\\Bigr)-v_{d,w}\\Bigl(\\frac{A_w-A}{A}-1\\Bigr)"
},
{
"math_id": 1,
"text": "v_d"
},
{
"math_id": 2,
"text": "1,30\\cdot10^{-6}mh^{-1}"
},
{
"math_id": 3,
"text": "\\gamma=\\frac{4}{<v_b>}\\Bigl(\\frac{1}{v_d}-\\frac{1}{v_t}\\Bigr)^{-1}"
},
{
"math_id": 4,
"text": "Y=\\frac{C_{iO_3}-C_{iO}}{C_{O_3in}-C_{O_3e}}"
},
{
"math_id": 5,
"text": "k_{sr,O_3}"
},
{
"math_id": 6,
"text": "K_{d,O_3} "
},
{
"math_id": 7,
"text": "k_{sr,O_3}=k_{d,0_3}\\left[ \\frac{A}{V} \\right]"
}
] | https://en.wikipedia.org/wiki?curid=9775327 |
9775880 | Fraňková–Helly selection theorem | On convergent subsequences of regulated functions
In mathematics, the Fraňková–Helly selection theorem is a generalisation of Helly's selection theorem for functions of bounded variation to the case of regulated functions. It was proved in 1991 by the Czech mathematician Dana Fraňková.
Background.
Let "X" be a separable Hilbert space, and let BV([0, "T"]; "X") denote the normed vector space of all functions "f" : [0, "T"] → "X" with finite total variation over the interval [0, "T"], equipped with the total variation norm. It is well known that BV([0, "T"]; "X") satisfies the compactness theorem known as Helly's selection theorem: given any sequence of functions ("f""n")"n"∈N in BV([0, "T"]; "X") that is uniformly bounded in the total variation norm, there exists a subsequence
formula_0
and a limit function "f" ∈ BV([0, "T"]; "X") such that "f""n"("k")("t") converges weakly in "X" to "f"("t") for every "t" ∈ [0, "T"]. That is, for every continuous linear functional "λ" ∈ "X"*,
formula_1
Consider now the Banach space Reg([0, "T"]; "X") of all regulated functions "f" : [0, "T"] → "X", equipped with the supremum norm. Helly's theorem does not hold for the space Reg([0, "T"]; "X"): a counterexample is given by the sequence
formula_2
This sequence is uniformly bounded in the supremum norm, yet it admits no subsequence that converges pointwise on [0, "T"]. One may ask, however, whether a weaker selection theorem is true, and the Fraňková–Helly selection theorem is such a result.
Statement of the Fraňková–Helly selection theorem.
As before, let "X" be a separable Hilbert space and let Reg([0, "T"]; "X") denote the space of regulated functions "f" : [0, "T"] → "X", equipped with the supremum norm. Let ("f""n")"n"∈N be a sequence in Reg([0, "T"]; "X") satisfying the following condition: for every "ε" > 0, there exists some "L"ε > 0 so that each "f""n" may be approximated by a "u""n" ∈ BV([0, "T"]; "X") satisfying
formula_3
and
formula_4
where | · | denotes the norm in "X" and Var("u") denotes the variation of "u", which is defined to be the supremum
formula_5
over all partitions
formula_6
of [0, "T"]. Then there exists a subsequence
formula_7
and a limit function "f" ∈ Reg([0, "T"]; "X") such that "f""n"("k")("t") converges weakly in "X" to "f"("t") for every "t" ∈ [0, "T"]. That is, for every continuous linear functional "λ" ∈ "X"*,
formula_1 | [
{
"math_id": 0,
"text": "\\left( f_{n(k)} \\right) \\subseteq (f_{n}) \\subset \\mathrm{BV}([0, T]; X)"
},
{
"math_id": 1,
"text": "\\lambda \\left( f_{n(k)}(t) \\right) \\to \\lambda(f(t)) \\mbox{ in } \\mathbb{R} \\mbox{ as } k \\to \\infty."
},
{
"math_id": 2,
"text": "f_{n} (t) = \\sin (n t)."
},
{
"math_id": 3,
"text": "\\| f_{n} - u_{n} \\|_{\\infty} < \\varepsilon"
},
{
"math_id": 4,
"text": "| u_{n}(0) | + \\mathrm{Var}(u_{n}) \\leq L_{\\varepsilon},"
},
{
"math_id": 5,
"text": "\\sup_{\\Pi} \\sum_{j=1}^{m} | u(t_{j}) - u(t_{j-1}) |"
},
{
"math_id": 6,
"text": "\\Pi = \\{ 0 = t_{0} < t_{1} < \\dots < t_{m} = T , m \\in \\mathbf{N} \\}"
},
{
"math_id": 7,
"text": "\\left( f_{n(k)} \\right) \\subseteq (f_{n}) \\subset \\mathrm{Reg}([0, T]; X)"
}
] | https://en.wikipedia.org/wiki?curid=9775880 |
977649 | Stem-and-leaf display | Format for presentation of quantitative data
A stem-and-leaf display or stem-and-leaf plot is a device for presenting quantitative data in a graphical format, similar to a histogram, to assist in visualizing the shape of a distribution. They evolved from Arthur Bowley's work in the early 1900s, and are useful tools in exploratory data analysis. Stemplots became more commonly used in the 1980s after the publication of John Tukey's book on "exploratory data analysis" in 1977. The popularity during those years is attributable to their use of monospaced (typewriter) typestyles that allowed computer technology of the time to easily produce the graphics. Modern computers' superior graphic capabilities have meant these techniques are less often used.
This plot has been implemented in Octave and R.
A stem-and-leaf plot is also called a stemplot, but the latter term often refers to another chart type. A simple stem plot may refer to plotting a matrix of "y" values onto a common "x" axis, identifying the common "x" value with a vertical line, and the individual "y" values with symbols on the line.
Unlike histograms, stem-and-leaf displays retain the original data to at least two significant digits, and put the data in order, thereby easing the move to order-based inference and non-parametric statistics.
Construction.
To construct a stem-and-leaf display, the observations must first be sorted in ascending order: this can be done most easily if working by hand by constructing a draft of the stem-and-leaf display with the leaves unsorted, then sorting the leaves to produce the final stem-and-leaf display. Here is the sorted set of data values that will be used in the following example:
44, 46, 47, 49, 63, 64, 66, 68, 68, 72, 72, 75, 76, 81, 84, 88, 106
Next, it must be determined what the stems will represent and what the leaves will represent. Typically, the leaf contains the last digit of the number and the stem contains all of the other digits. In the case of very large numbers, the data values may be rounded to a particular place value (such as the hundreds place) that will be used for the leaves. The remaining digits to the left of the rounded place value are used as the stem.
In this example, the leaf represents the ones place and the stem will represent the rest of the number (tens place and higher).
The stem-and-leaf display is drawn with two columns separated by a vertical line. The stems are listed to the left of the vertical line. It is important that each stem is listed only once and that no numbers are skipped, even if it means that some stems have no leaves. The leaves are listed in increasing order in a row to the right of each stem.
When there is a repeated number in the data (such as two 72s), the plot must reflect such (so the plot would look like 7 | 2 2 5 6 7 when it has the numbers 72 72 75 76 77).
formula_0
Key: formula_1
Leaf unit: 1.0
Stem unit: 10.0
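The construction described above is straightforward to automate. The following minimal Python sketch (an illustration, not a standard library routine) reproduces the display for non-negative data, using the ones digit as the leaf; negative stems, as in the next example, would need extra care.
```python
from collections import defaultdict

def stem_and_leaf(data, leaf_unit=1):
    """Print a stem-and-leaf display for non-negative data: the leaf is
    the ones digit (in units of leaf_unit), the stem is the rest."""
    scaled = sorted(round(x / leaf_unit) for x in data)
    stems = defaultdict(list)
    for v in scaled:
        stems[v // 10].append(v % 10)
    for stem in range(min(stems), max(stems) + 1):  # list every stem, none skipped
        leaves = " ".join(str(leaf) for leaf in sorted(stems.get(stem, [])))
        print(f"{stem:3d} | {leaves}")

stem_and_leaf([44, 46, 47, 49, 63, 64, 66, 68, 68,
               72, 72, 75, 76, 81, 84, 88, 106])
```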
Rounding may be needed to create a stem-and-leaf display. Based on the following set of data, the stem plot below would be created:
−23.678758, −12.45, −3.4, 4.43, 5.5, 5.678, 16.87, 24.7, 56.8
For negative numbers, a negative sign is placed in front of the stem unit, which is still the value "X" / 10. Non-integers are rounded. This allows the stem-and-leaf plot to retain its shape, even for more complicated data sets, as in the example below:
formula_2
Key: formula_3
Usage.
Stem-and-leaf displays are useful for displaying the relative density and shape of the data, giving the reader a quick overview of the distribution. They retain (most of) the raw numerical data, often with perfect integrity. They are also useful for highlighting outliers and finding the mode. However, stem-and-leaf displays are only useful for moderately sized data sets (around 15–150 data points). With very small data sets a stem-and-leaf display can be of little use, as a reasonable number of data points is required to establish definitive distribution properties. A dot plot may be better suited for such data. With very large data sets, a stem-and-leaf display will become very cluttered, since each data point must be represented numerically. A box plot or histogram may become more appropriate as the data size increases.
Non-numerical use.
Stem-and-leaf displays can also be used to convey non-numerical information. In this example of valid two-letter words in Collins Scrabble Words (the word list used in Scrabble tournaments outside the US) with their initials as stems, it can be easily seen that the three most common initials are o, a and e.
Some railway timetables use stem-and-leaf displays with hours as stems and minutes as leaves.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{array}{r|l}\n\\text{Stem} & \\text{Leaf} \\\\\n\\hline\n 4 & 4~6~7~9 \\\\\n 5 & \\\\\n 6 & 3~4~6~8~8 \\\\\n 7 & 2~2~5~6 \\\\\n 8 & 1~4~8 \\\\\n 9 & \\\\\n10 & 6\n\\end{array}\n"
},
{
"math_id": 1,
"text": "6 \\mid 3 = 63"
},
{
"math_id": 2,
"text": "\n\\begin{array}{r|l}\n\\text{Stem} & \\text{Leaf} \\\\\n\\hline\n-2 & 4 \\\\\n-1 & 2 \\\\\n-0 & 3 \\\\\n 0 & 4~6~6 \\\\\n 1 & 7 \\\\\n 2 & 5 \\\\\n 3 & \\\\\n 4 & \\\\\n 5 & 7\n\\end{array}\n"
},
{
"math_id": 3,
"text": "-2 \\mid 4 = -24"
}
] | https://en.wikipedia.org/wiki?curid=977649 |
9776617 | Regulated function | In mathematics, a regulated function, or ruled function, is a certain kind of well-behaved function of a single real variable. Regulated functions arise as a class of integrable functions, and have several equivalent characterisations. Regulated functions were introduced by Nicolas Bourbaki in 1949, in their book "Livre IV: Fonctions d'une variable réelle".
Definition.
Let "X" be a Banach space with norm || - ||"X". A function "f" : [0, "T"] → "X" is said to be a regulated function if one (and hence both) of the following two equivalent conditions holds true:
It requires a little work to show that these two conditions are equivalent. However, it is relatively easy to see that the second condition may be re-stated in the following equivalent way: for every "δ" > 0, there exists a step function "φ""δ" : [0, "T"] → "X" such that
formula_0
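As a minimal numerical sketch of this second characterisation, assuming for simplicity that "f" is continuous (every continuous function on [0, "T"] is regulated): halving the partition width repeatedly yields a step function within any prescribed "δ", here verified on a dense probe grid.
```python
import numpy as np

def step_approximation(f, T, delta, probe=10_000):
    """Return per-interval values of a step function that lies uniformly
    within delta of a continuous f on [0, T]; by uniform continuity a
    fine enough partition always works. The error is checked on a dense
    probe grid rather than exactly, so this is only a sketch."""
    n = 2
    while True:
        width = T / n
        mids = (np.arange(n) + 0.5) * width
        values = f(mids)                        # one constant value per piece
        t = np.linspace(0.0, T, probe)
        idx = np.minimum((t / width).astype(int), n - 1)
        if np.max(np.abs(f(t) - values[idx])) < delta:
            return values
        n *= 2                                  # refine the partition

values = step_approximation(np.sin, T=2 * np.pi, delta=0.01)
print(f"{len(values)} pieces suffice for uniform error < 0.01")
```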
Properties of regulated functions.
Let Reg([0, "T"]; "X") denote the set of all regulated functions "f" : [0, "T"] → "X".
The set of regulated functions is precisely the closure, with respect to the supremum norm, of the set of functions of bounded variation:
formula_1
Regulated functions can also be characterized as the functions of generalized bounded variation:
formula_2
Moreover, for every formula_3, a regulated function can have at most finitely many jump discontinuities of size greater than formula_4; in particular, the set of points at which a regulated function is discontinuous is countable (and is an formula_5 set).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\| f - \\varphi_\\delta \\|_\\infty = \\sup_{t \\in [0, T]} \\| f(t) - \\varphi_\\delta (t) \\|_X < \\delta;"
},
{
"math_id": 1,
"text": "\\mathrm{Reg}([0, T]; X) = \\overline{\\mathrm{BV} ([0, T]; X)} \\mbox{ w.r.t. } \\| \\cdot \\|_{\\infty}."
},
{
"math_id": 2,
"text": "\\mathrm{Reg}([0, T]; X) = \\bigcup_{\\varphi} \\mathrm{BV}_{\\varphi} ([0, T]; X)."
},
{
"math_id": 3,
"text": " \\epsilon > 0 "
},
{
"math_id": 4,
"text": " \\epsilon"
},
{
"math_id": 5,
"text": "F_\\sigma"
}
] | https://en.wikipedia.org/wiki?curid=9776617 |
9777020 | Divergent geometric series | In mathematics, an infinite geometric series of the form
formula_0
is divergent if and only if | "r" | ≥ 1 (assuming "a" ≠ 0). Methods for summation of divergent series are sometimes useful, and usually evaluate divergent geometric series to a sum that agrees with the formula for the convergent case
formula_1
This is true of any summation method that possesses the properties of regularity, linearity, and stability.
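For instance, Abel summation multiplies the "n"th term by "x""n" and lets "x" → 1−; for |"rx"| < 1 the modified series converges, and the limit reproduces "a"/(1 − "r"). The Python sketch below shows this numerically for the divergent case "a" = 1, "r" = −1:
```python
# Numerical sketch of Abel summation of the geometric series with a = 1.
# The Abel sum is the limit as x -> 1^- of sum_n (r*x)^n; for |r*x| < 1
# the series converges, and the limit agrees with a / (1 - r).

def abel_sum_geometric(r, x, terms=100_000):
    total, power = 0.0, 1.0
    for _ in range(terms):
        total += power
        power *= r * x
    return total

r = -1.0                      # the series 1 - 1 + 1 - 1 + ...
for x in (0.9, 0.99, 0.999):
    print(x, abel_sum_geometric(r, x))   # approaches 0.5
print("closed form a/(1-r):", 1.0 / (1.0 - r))
```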
Examples.
In increasing order of difficulty to sum:
1 − 1 + 1 − 1 + ⋯, whose common ratio is −1 (Grandi's series),
1 − 2 + 4 − 8 + ⋯, whose common ratio is −2,
1 + 2 + 4 + 8 + ⋯, whose common ratio is 2,
1 + 1 + 1 + 1 + ⋯, whose common ratio is 1.
Motivation for study.
It is useful to figure out which summation methods produce the geometric series formula for which common ratios. One application for this information is the so-called Borel-Okada principle: If a regular summation method sums Σ"z""n" to 1/(1 - "z") for all "z" in a subset "S" of the complex plane, given certain restrictions on "S", then the method also gives the analytic continuation of any other function "f"("z") = Σ"a""n""z""n" on the intersection of "S" with the Mittag-Leffler star for "f".
Summability by region.
Open unit disk.
Ordinary summation succeeds only for common ratios |"z"| < 1.
Half-plane.
The series is Borel summable for every "z" with real part < 1. Any such series is also summable by the generalized Euler method (E, "a") for appropriate "a".
Shadowed plane.
Certain moment constant methods besides Borel summation can sum the geometric series on the entire Mittag-Leffler star of the function 1/(1 − "z"), that is, for all "z" except the ray "z" ≥ 1.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{n=1}^\\infty ar^{n-1} = a + ar + ar^2 + ar^3 +\\cdots"
},
{
"math_id": 1,
"text": "\\sum_{n=1}^\\infty ar^{n-1} = \\frac{a}{1-r}."
}
] | https://en.wikipedia.org/wiki?curid=9777020 |
9777067 | Holonomic constraints | Type of constraints for mechanical systems
In classical mechanics, holonomic constraints are relations between the position variables (and possibly time) that can be expressed in the following form:
formula_0
where formula_1 are n generalized coordinates that describe the system (in unconstrained configuration space). For example, the motion of a particle constrained to lie on the surface of a sphere is subject to a holonomic constraint, but if the particle is able to fall off the sphere under the influence of gravity, the constraint becomes non-holonomic. For the first case, the holonomic constraint may be given by the equation
formula_2
where formula_3 is the distance from the centre of a sphere of radius formula_4, whereas the second non-holonomic case may be given by
formula_5
Velocity-dependent constraints (also called semi-holonomic constraints) such as
formula_6
are not usually holonomic.
Holonomic system.
In classical mechanics a system may be defined as holonomic if all constraints of the system are holonomic. For a constraint to be holonomic it must be expressible as a function:
formula_7
i.e. a holonomic constraint depends only on the coordinates formula_8 and maybe time formula_9. It does not depend on the velocities or any higher-order derivative with respect to t. A constraint that cannot be expressed in the form shown above is a nonholonomic constraint.
Introduction.
As described above, a holonomic system is (simply speaking) a system in which one can deduce the state of the system by knowing only the change of "positions" of its components over time, without needing to know their velocities or the order in which they moved relative to each other. In contrast, a nonholonomic system is often a system where the velocities of the components over time must be known to determine the change of state of the system, or a system where a moving part cannot be bound to a constraint surface, real or imaginary. Examples of holonomic systems are gantry cranes, pendulums, and robotic arms. Examples of nonholonomic systems are Segways, unicycles, and automobiles.
Terminology.
The configuration space formula_10 lists the displacement of the components of the system, one for each degree of freedom. A system that can be described using a configuration space is called scleronomic.
formula_11
The event space is identical to the configuration space except for the addition of a variable formula_9 to represent the change in the system over time (if needed to describe the system). A system that must be described using an event space, instead of only a configuration space, is called rheonomic. Many systems can be described either scleronomically or rheonomically. For example, the total allowable motion of a pendulum can be described with a scleronomic constraint, but the motion over time of a pendulum must be described with a rheonomic constraint.
formula_12
The state space formula_13 is the configuration space, plus terms describing the velocity of each term in the configuration space.
formula_14
The state-time space adds time formula_9.
formula_15
Examples.
Gantry crane.
As shown on the right, a gantry crane is an overhead crane that is able to move its hook in 3 axes as indicated by the arrows. Intuitively, we can deduce that the crane should be a holonomic system as, for a given movement of its components, it doesn't matter in what order or at what velocity the components move: as long as the total displacement of each component from a given starting condition is the same, all parts and the system as a whole will end up in the same state. Mathematically we can prove this as such:
We can define the configuration space of the system as:
formula_16
We can say that the deflections of the crane's components from their "zero" positions are formula_17, formula_18, and formula_19, for the blue, green, and orange components, respectively. The orientation and placement of the coordinate system do not matter in determining whether a system is holonomic, but in this example the components happen to move parallel to its axes. If the origin of the coordinate system is at the back-bottom-left of the crane, then we can write the position constraint equation as:
formula_20
where formula_21 is the height of the crane. Optionally, we may simplify to the standard form where all constants are placed after the variables:
formula_22
Because we have derived a constraint equation in holonomic form (specifically, our constraint equation has the form formula_23 where formula_24), we can see that this system must be holonomic.
Pendulum.
As shown on the right, a simple pendulum is a system composed of a weight and a string. The string is attached at the top end to a pivot and at the bottom end to a weight. Because the string is inextensible, its length is a constant. This system is holonomic because it obeys the holonomic constraint
formula_25
where formula_26 is the position of the weight and formula_27 is length of the string.
Rigid body.
The particles of a rigid body obey the holonomic constraint
formula_28
where formula_29, formula_30 are respectively the positions of particles formula_31 and formula_32, and formula_33 is the distance between them. If a given system is holonomic, rigidly attaching additional parts to components of the system in question cannot make it non-holonomic, assuming that the degrees of freedom are not reduced (in other words, assuming the configuration space is unchanged).
Pfaffian form.
Consider the following differential form of a constraint:
formula_34
where formula_35 are the coefficients of the differentials formula_36 for the "i"th constraint equation. This form is called the Pfaffian form or the differential form.
If the differential form is integrable, i.e., if there is a function formula_37 satisfying the equality
formula_38
then this constraint is a holonomic constraint; otherwise, it is nonholonomic. Therefore, all holonomic and some nonholonomic constraints can be expressed using the differential form. Examples of nonholonomic constraints that cannot be expressed this way are those that are dependent on generalized velocities. With a constraint equation in Pfaffian form, whether the constraint is holonomic or nonholonomic depends on whether the Pfaffian form is integrable. See Universal test for holonomic constraints below for a description of a test to verify the integrability (or lack of) of a Pfaffian form constraint.
Universal test for holonomic constraints.
When the constraint equation of a system is written in Pfaffian constraint form, there exists a mathematical test to determine whether the system is holonomic.
For a constraint equation, or formula_39 sets of constraint equations (note that variables representing time can be included, since from above formula_40 and formula_41), in the following form:
formula_42
we can use the test equation:
formula_43
where formula_44, yielding formula_45 combinations of test equations per constraint equation, for all formula_46 sets of constraint equations.
In other words, a system of three variables would have to be tested once with one test equation with the terms formula_47 being terms formula_48 in the constraint equation (in any order), but to test a system of four variables the test would have to be performed up to "four" times with four different test equations, with the terms formula_47 being terms formula_48, formula_49, formula_50, and formula_51 in the constraint equation (each in any order) in four different tests. For a system of five variables, "ten" tests would have to be performed on a holonomic system to verify that fact, and for a system of five variables with three sets of constraint equations, "thirty" tests (assuming a simplification like a change-of-variable could not be performed to reduce that number). For this reason, it is advisable when using this method on systems of more than three variables to use common sense as to whether the system in question is holonomic, and only pursue testing if the system likely is not. Additionally, it is likewise best to use mathematical intuition to try to predict which test would fail first and begin with that one, skipping tests at first that seem likely to succeed.
If every test equation is true for the entire set of combinations for all constraint equations, the system is holonomic. If it is untrue for even one test combination, the system is nonholonomic.
Example.
Consider this dynamical system described by a constraint equation in Pfaffian form.
formula_52
The configuration space, by inspection, is formula_53. Because there are only three terms in the configuration space, there will be only one test equation needed.
We can organize the terms of the constraint equation as such, in preparation for substitution:
formula_54
formula_55
formula_56
formula_57
formula_58
formula_59
Substituting the terms, our test equation becomes:
formula_60
After calculating all partial derivatives, we get:
formula_61
Simplifying, we find that:
formula_62
We see that our test equation is true, and thus, the system must be holonomic.
We have finished our test, but now knowing that the system is holonomic, we may wish to find the holonomic constraint equation. We can attempt to find it by integrating each term of the Pfaffian form and attempting to unify them into one equation, as such:
formula_63
formula_64
formula_65
It's easy to see that we can combine the results of our integrations to find the holonomic constraint equation:
formula_66
where C is the constant of integration.
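The calculation above can be automated with a computer algebra system. The following SymPy sketch runs the universal test on this constraint and checks that the integrated constraint reproduces the Pfaffian coefficients:
```python
import itertools
import sympy as sp

x, y, theta = sp.symbols('x y theta')
u = (x, y, theta)
A = (sp.cos(theta),                         # coefficient of dx
     sp.sin(theta),                         # coefficient of dy
     y*sp.cos(theta) - x*sp.sin(theta))     # coefficient of d(theta)

def is_holonomic(A, u):
    """Universal test: every 3-variable combination must vanish identically."""
    for a, b, c in itertools.combinations(range(len(u)), 3):
        expr = (A[c]*(sp.diff(A[b], u[a]) - sp.diff(A[a], u[b]))
              + A[b]*(sp.diff(A[a], u[c]) - sp.diff(A[c], u[a]))
              + A[a]*(sp.diff(A[c], u[b]) - sp.diff(A[b], u[c])))
        if sp.simplify(expr) != 0:
            return False
    return True

print(is_holonomic(A, u))                   # True

# The integrated constraint y*sin(theta) + x*cos(theta) + C = 0 recovers
# the Pfaffian coefficients above on differentiation:
f = y*sp.sin(theta) + x*sp.cos(theta)
print([sp.simplify(sp.diff(f, v)) for v in u])
```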
Constraints of constant coefficients.
For a given Pfaffian constraint where every coefficient of every differential is a constant, in other words, a constraint in the form:
formula_67
the constraint must be holonomic.
We may prove this as follows: consider a system of constraints in Pfaffian form where every coefficient of every differential is a constant, as described directly above. To test whether this system of constraints is holonomic, we use the universal test. We can see that in the test equation there are three terms that must sum to zero. Therefore, if each of those three terms in every possible test equation is zero, then all test equations are true and the system is holonomic. Each term of each test equation is in the form:
formula_68
where:
formula_69, formula_70, and formula_71 stand for the three coefficients formula_73 and formula_74 from the same constraint equation that are selected by a given test combination (one of the formula_72 combinations per constraint equation);
formula_75, formula_76, and formula_77 stand for the corresponding three of the variables formula_78 (with time formula_9 included where the constraint involves its differential formula_79).
Additionally, there are formula_39 sets of test equations.
We can see that, by definition, all formula_80 are constants. It is well-known in calculus that any derivative (full or partial) of any constant is formula_81. Hence, we can reduce each partial derivative to:
formula_82
and hence each term is zero, the left side of each test equation is zero, each test equation is true, and the system is holonomic.
Configuration spaces of two or one variable.
Any system that can be described by a Pfaffian constraint and has a configuration space or state space of only two variables or one variable is holonomic.
We may prove this as such: consider a dynamical system with a configuration space or state space described as:
formula_83
if the system is described by a state space, we simply say that formula_76 equals our time variable formula_9. This system will be described in Pfaffian form:
formula_84
with formula_39 sets of constraints. The system will be tested by using the universal test. However, the universal test requires three variables in the configuration or state space. To accommodate this, we simply add a dummy variable formula_85 to the configuration or state space to form:
formula_86
Because the dummy variable formula_85 is by definition not a measure of anything in the system, its coefficient in the Pfaffian form must be formula_81. Thus we revise our Pfaffian form:
formula_87
Now we may use the test as such, for a given constraint formula_39 if there is a set of constraints:
formula_88
Upon realizing that formula_89, because the dummy variable formula_85 cannot appear in the coefficients used to describe the system, we see that the test equation must be true for all sets of constraint equations and thus the system must be holonomic. A similar proof can be conducted with one actual variable in the configuration or state space and two dummy variables to confirm that one-degree-of-freedom systems describable in Pfaffian form are also always holonomic.
In conclusion, we realize that even though it is possible to model nonholonomic systems in Pfaffian form, any system modellable in Pfaffian form with two or fewer degrees of freedom (the number of degrees of freedom is equal to the number of terms in the configuration space) must be holonomic.
Important note: realize that the test equation was satisfied because the dummy variable, and hence the dummy differential included in the test, will differentiate anything that is a function of the actual configuration or state space variables to formula_81. Having a system with a configuration or state space of:
formula_90
and a set of constraints where one or more constraints are in the Pfaffian form:
formula_91
does "not" guarantee the system is holonomic, as even though one differential has a coefficient of formula_81, there are still three degrees of freedom described in the configuration or state space.
Transformation to independent generalized coordinates.
The holonomic constraint equations can help us easily remove some of the dependent variables in our system. For example, if we want to remove formula_92, which is a parameter in the constraint equation formula_93, we can rearrange the equation into the following form, assuming it can be done,
formula_94
and replace the formula_92 in every equation of the system using the above function. This can always be done for general physical systems, provided that the derivative of formula_93 is continuous; then, by the implicit function theorem, the solution formula_95 is guaranteed in some open set. Thus, it is possible to remove all occurrences of the dependent variable formula_92.
Suppose that a physical system has formula_96 degrees of freedom. Now, formula_21 holonomic constraints are imposed on the system. Then, the number of degrees of freedom is reduced to formula_97. We can use formula_98 independent generalized coordinates (formula_99) to completely describe the motion of the system. The transformation equation can be expressed as follows:
formula_100
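As a concrete illustration, the pendulum constraint from above can be used either to eliminate a dependent Cartesian coordinate or to re-express the system with the single generalized coordinate "θ". The short SymPy sketch below assumes the conventional parametrization "x" = "L" sin "θ", "y" = "L" cos "θ":
```python
import sympy as sp

x, y, L, theta = sp.symbols('x y L theta')

f = x**2 + y**2 - L**2           # the pendulum constraint from above

# Eliminate the dependent variable y (two branches, one per half-plane):
print(sp.solve(sp.Eq(f, 0), y))  # [-sqrt(L**2 - x**2), sqrt(L**2 - x**2)]

# Equivalently, the single generalized coordinate theta describes the
# system completely, and the constraint then holds identically:
print(sp.simplify(f.subs({x: L*sp.sin(theta), y: L*sp.cos(theta)})))  # 0
```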
Classification of physical systems.
In order to study classical physics rigorously and methodically, we need to classify systems. Based on previous discussion, we can classify physical systems into holonomic systems and non-holonomic systems. One of the conditions for the applicability of many theorems and equations is that the system must be a holonomic system. For example, if a physical system is a holonomic system and a monogenic system, then Hamilton's principle is the necessary and sufficient condition for the correctness of Lagrange's equation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(u_1, u_2, u_3,\\ldots, u_n, t) = 0"
},
{
"math_id": 1,
"text": "\\{ u_1, u_2, u_3, \\ldots, u_n \\}"
},
{
"math_id": 2,
"text": "r^2 - a^2 = 0"
},
{
"math_id": 3,
"text": "r"
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "r^2 - a^2 \\geq 0"
},
{
"math_id": 6,
"text": "f(u_1,u_2,\\ldots,u_n,\\dot{u}_1,\\dot{u}_2,\\ldots,\\dot{u}_n,t)=0"
},
{
"math_id": 7,
"text": " f(u_1,\\ u_2,\\ u_3,\\ \\ldots,\\ u_n,\\ t)=0, \\, "
},
{
"math_id": 8,
"text": "u_j"
},
{
"math_id": 9,
"text": "t"
},
{
"math_id": 10,
"text": "\\mathbf{u}"
},
{
"math_id": 11,
"text": " \\mathbf{u}=\\begin{bmatrix}u_1 & u_2 &\\ldots& u_n \\end{bmatrix}^\\mathrm{T} "
},
{
"math_id": 12,
"text": " \\mathbf{u} = \\begin{bmatrix}u_1 & u_2 &\\ldots& u_n & t\\end{bmatrix}^\\mathrm{T} "
},
{
"math_id": 13,
"text": "\\mathbf{q}"
},
{
"math_id": 14,
"text": " \\mathbf{q}=\\begin{bmatrix}\\mathbf{u} \\\\\\mathbf{\\dot{u}} \\end{bmatrix}=\\begin{bmatrix}u_1 &\\ldots& u_n & \\dot{u}_1&\\ldots& \\dot{u}_n\\end{bmatrix}^\\mathrm{T} "
},
{
"math_id": 15,
"text": " \\mathbf{q} = \\begin{bmatrix}\\mathbf{u} \\\\\\mathbf{\\dot{u}} \\end{bmatrix}=\\begin{bmatrix}u_1 &\\ldots& u_n & t&\\dot{u}_1&\\ldots& \\dot{u}_n\\end{bmatrix}^\\mathrm{T} "
},
{
"math_id": 16,
"text": "\\mathbf{u} = \\begin{bmatrix} x \\\\ y \\\\ z \\end{bmatrix}"
},
{
"math_id": 17,
"text": "B"
},
{
"math_id": 18,
"text": "G"
},
{
"math_id": 19,
"text": "O"
},
{
"math_id": 20,
"text": "(x-B) + (y-G) + (z-(h-O)) = 0"
},
{
"math_id": 21,
"text": "h"
},
{
"math_id": 22,
"text": "x+y+z-(B+G+h-O) = 0"
},
{
"math_id": 23,
"text": "f(x, y, z) = 0"
},
{
"math_id": 24,
"text": "\\{x, y, z\\} \\in \\mathbf{u}"
},
{
"math_id": 25,
"text": " {x^2+y^2} - L^2=0, "
},
{
"math_id": 26,
"text": "(x,\\ y)"
},
{
"math_id": 27,
"text": "L"
},
{
"math_id": 28,
"text": "(\\mathbf{r}_i - \\mathbf{r}_j)^2 - L_{ij}^2=0, \\, "
},
{
"math_id": 29,
"text": "\\mathbf{r}_i"
},
{
"math_id": 30,
"text": "\\mathbf{r}_j"
},
{
"math_id": 31,
"text": "P_i"
},
{
"math_id": 32,
"text": "P_j"
},
{
"math_id": 33,
"text": "L_{ij}"
},
{
"math_id": 34,
"text": "\\sum_j\\ A_{ij} \\, du_j+A_i \\, dt=0"
},
{
"math_id": 35,
"text": "A_{ij},A_i"
},
{
"math_id": 36,
"text": "du_j,dt"
},
{
"math_id": 37,
"text": "f_i(u_1,\\ u_2,\\ u_3,\\ \\ldots,\\ u_n,\\ t)=0"
},
{
"math_id": 38,
"text": "df_i=\\sum_j\\ A_{ij} \\, du_j+A_i \\, dt=0"
},
{
"math_id": 39,
"text": "i"
},
{
"math_id": 40,
"text": "A_i\\in A_{ij}"
},
{
"math_id": 41,
"text": "\\, dt \\in du_j"
},
{
"math_id": 42,
"text": "\\sum_j^n\\ A_{ij} \\, du_j=0; \\, "
},
{
"math_id": 43,
"text": "A_\\gamma\\left(\\frac{\\partial A_\\beta}{\\partial u_\\alpha}-\\frac{\\partial A_\\alpha}{\\partial u_\\beta}\\right) + A_\\beta \\left(\\frac{\\partial A_\\alpha}{\\partial u_\\gamma} - \\frac{\\partial A_\\gamma}{\\partial u_\\alpha}\\right) + A_\\alpha \\left(\\frac{\\partial A_\\gamma}{\\partial u_\\beta} - \\frac{\\partial A_\\beta}{\\partial u_\\gamma}\\right) = 0"
},
{
"math_id": 44,
"text": "\\alpha,\\beta,\\gamma=1,2,3\\ldots n"
},
{
"math_id": 45,
"text": "\\binom n 3 = \\frac{n(n-1)(n-2)}{6}"
},
{
"math_id": 46,
"text": " i "
},
{
"math_id": 47,
"text": "\\alpha, \\beta, \\gamma"
},
{
"math_id": 48,
"text": "1, 2, 3"
},
{
"math_id": 49,
"text": "1, 2, 4"
},
{
"math_id": 50,
"text": "1, 3, 4"
},
{
"math_id": 51,
"text": "2, 3, 4"
},
{
"math_id": 52,
"text": "\\cos(\\theta) dx + \\sin(\\theta)dy + \\left[y\\cos(\\theta) - x\\sin(\\theta)\\right] d\\theta = 0"
},
{
"math_id": 53,
"text": " \\mathbf{u}=\\begin{bmatrix}x & y & \\theta \\end{bmatrix}^\\mathrm{T} "
},
{
"math_id": 54,
"text": "A_\\alpha =\\cos\\theta"
},
{
"math_id": 55,
"text": "A_\\beta = \\sin\\theta"
},
{
"math_id": 56,
"text": "A_\\gamma = y\\cos\\theta-x\\sin\\theta"
},
{
"math_id": 57,
"text": "u_\\alpha = dx"
},
{
"math_id": 58,
"text": "u_\\beta = dy"
},
{
"math_id": 59,
"text": "u_\\gamma = d\\theta"
},
{
"math_id": 60,
"text": "\\left(y\\cos\\theta - x\\sin\\theta\\right) \\left[\\frac{\\partial}{\\partial x}\\sin\\theta - \\frac{\\partial}{\\partial y}\\cos\\theta\\right] + \\sin\\theta \\left[\\frac{\\partial}{\\partial \\theta}\\cos\\theta - \\frac{\\partial}{\\partial x}(y\\cos\\theta-x\\sin\\theta)\\right] + \\cos\\theta \\left[\\frac{\\partial}{\\partial y}(y\\cos\\theta - x\\sin\\theta) - \\frac{\\partial}{\\partial \\theta}\\sin\\theta\\right] = 0"
},
{
"math_id": 61,
"text": "(y\\cos\\theta - x\\sin\\theta) \\left[0 - 0\\right] + \\sin\\theta \\left[-\\sin\\theta-(-\\sin\\theta)\\right] + \\cos\\theta \\left[\\cos\\theta-\\cos\\theta\\right] = 0"
},
{
"math_id": 62,
"text": "0=0"
},
{
"math_id": 63,
"text": "\\int \\cos\\theta \\, dx = x\\cos\\theta + f(y, \\theta)"
},
{
"math_id": 64,
"text": "\\int \\sin\\theta \\, dy = y\\sin\\theta + f(x, \\theta)"
},
{
"math_id": 65,
"text": "\\int \\left(y\\cos\\theta - x\\sin\\theta\\right) d\\theta = y\\sin\\theta + x\\cos\\theta + f(x, y)"
},
{
"math_id": 66,
"text": "y\\sin\\theta+x\\cos\\theta+C = 0"
},
{
"math_id": 67,
"text": "\\sum_j\\ A_{ij} \\, du_j+A_i \\, dt=0; \\;\\{A_{ij}, A_{i} ;\\,j = 1,2,\\ldots;\\,i = 1,2,\\ldots\\}\\in\\mathbb{R}"
},
{
"math_id": 68,
"text": "A_3\\left(\\frac{\\partial A_2}{\\partial u_1} - \\frac{\\partial A_1}{\\partial u_2}\\right)"
},
{
"math_id": 69,
"text": "A_1"
},
{
"math_id": 70,
"text": "A_2"
},
{
"math_id": 71,
"text": "A_3"
},
{
"math_id": 72,
"text": "\\binom n 3 = n (n - 1) (n - 2) / 6"
},
{
"math_id": 73,
"text": "A_{ij};\\,j = 1,2,\\ldots"
},
{
"math_id": 74,
"text": "A_{i}"
},
{
"math_id": 75,
"text": "u_1"
},
{
"math_id": 76,
"text": "u_2"
},
{
"math_id": 77,
"text": "u_3"
},
{
"math_id": 78,
"text": "u_j;\\,j = 1,2,\\ldots"
},
{
"math_id": 79,
"text": "dt"
},
{
"math_id": 80,
"text": "A_n"
},
{
"math_id": 81,
"text": "0"
},
{
"math_id": 82,
"text": "A_3\\big(0-0\\big)"
},
{
"math_id": 83,
"text": " \\mathbf{u}=\\begin{bmatrix}u_1 & u_2 \\end{bmatrix}^\\mathrm{T} "
},
{
"math_id": 84,
"text": "A_{i1} \\, du_1 + A_{i2} \\, du_2=0"
},
{
"math_id": 85,
"text": "\\lambda"
},
{
"math_id": 86,
"text": " \\mathbf{u}=\\begin{bmatrix}u_1 & u_2 & \\lambda\\end{bmatrix}^\\mathrm{T}"
},
{
"math_id": 87,
"text": "A_{i1} \\, du_1 + A_{i2} \\, du_2 + 0 \\, d\\lambda = 0"
},
{
"math_id": 88,
"text": "0 \\left(\\frac{\\partial A_{i2}}{\\partial u_1} - \\frac{\\partial A_{i1}}{\\partial u_2}\\right) + A_{i2} \\left(\\frac{\\partial A_{i1}}{\\partial \\lambda} - \\frac{\\partial}{\\partial u_1}0\\right) + A_{i1} \\left(\\frac{\\partial}{\\partial u_2} 0 - \\frac{\\partial A_{i2}}{\\partial \\lambda}\\right) = 0"
},
{
"math_id": 89,
"text": "\\frac{\\partial}{\\partial \\lambda} f(u_1, u_2) = 0"
},
{
"math_id": 90,
"text": " \\mathbf{u}=\\begin{bmatrix}u_1 & u_2 & u_3\\end{bmatrix}^\\mathrm{T}"
},
{
"math_id": 91,
"text": "A_{i1} du_1+A_{i2} du_2+0du_3=0"
},
{
"math_id": 92,
"text": "x_d"
},
{
"math_id": 93,
"text": "f_i"
},
{
"math_id": 94,
"text": "x_d=g_i(x_1,\\ x_2,\\ x_3,\\ \\dots,\\ x_{d-1},\\ x_{d+1},\\ \\dots,\\ x_N,\\ t), \\, "
},
{
"math_id": 95,
"text": "g_i"
},
{
"math_id": 96,
"text": "N"
},
{
"math_id": 97,
"text": "m=N - h"
},
{
"math_id": 98,
"text": "m"
},
{
"math_id": 99,
"text": "q_j"
},
{
"math_id": 100,
"text": "x_i=x_i(q_1,\\ q_2,\\ \\ldots,\\ q_m,\\ t)\\ ,\\qquad i=1,\\ 2,\\ \\ldots N. \\, "
}
] | https://en.wikipedia.org/wiki?curid=9777067 |
977725 | Semi-local ring | Algebraic ring classification
In mathematics, a semi-local ring is a ring for which "R"/J("R") is a semisimple ring, where J("R") is the Jacobson radical of "R".
The above definition is satisfied if "R" has a finite number of maximal right ideals (and finite number of maximal left ideals). When "R" is a commutative ring, the converse implication is also true, and so the definition of semi-local for commutative rings is often taken to be "having finitely many maximal ideals".
Some literature refers to a commutative semi-local ring in general as a
"quasi-semi-local ring", using semi-local ring to refer to a Noetherian ring with finitely many maximal ideals.
A semi-local ring is thus more general than a local ring, which has only one maximal (right/left/two-sided) ideal.
If "R" is a commutative semi-local ring with maximal ideals m1, ..., mn, then, since these ideals are pairwise comaximal, the Chinese remainder theorem gives
formula_3.
(The map is the natural projection). The right hand side is a direct sum of fields. Here we note that ∩i mi = J("R"), and we see that "R"/J("R") is indeed a semisimple ring.
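A concrete sketch in Python for "R" = Z/12Z, whose maximal ideals are (2) and (3), so that J("R") = (6) and "R"/J("R") ≅ Z/6Z ≅ Z/2Z ⊕ Z/3Z:
```python
# R = Z/12Z has the two maximal ideals m1 = (2) and m2 = (3), so
# J(R) = (2) ∩ (3) = (6) and R/J(R) = Z/6Z. The natural projection
# x -> (x mod 2, x mod 3) realizes the isomorphism Z/6Z ≅ Z/2Z ⊕ Z/3Z.

proj = {x: (x % 2, x % 3) for x in range(6)}

# The projection is a bijection onto the direct sum of the two fields...
assert len(set(proj.values())) == 2 * 3

# ...and it is a ring homomorphism (addition and multiplication are
# computed componentwise in Z/2Z ⊕ Z/3Z):
for a in range(6):
    for b in range(6):
        assert proj[(a + b) % 6] == ((a + b) % 2, (a + b) % 3)
        assert proj[(a * b) % 6] == ((a * b) % 2, (a * b) % 3)

print("(Z/12Z)/J is isomorphic to Z/2Z ⊕ Z/3Z, hence semisimple")
```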
{
"math_id": 0,
"text": "\\mathbb{Z}/m\\mathbb{Z}"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "\\bigoplus_{i=1}^n{F_i}"
},
{
"math_id": 3,
"text": "R/\\bigcap_{i=1}^n m_i\\cong\\bigoplus_{i=1}^n R/m_i\\,"
}
] | https://en.wikipedia.org/wiki?curid=977725 |
977983 | Charge qubit | Superconducting qubit implementation
In quantum computing, a charge qubit (also known as Cooper-pair box) is a qubit whose basis states are charge states (i.e. states which represent the presence or absence of excess Cooper pairs in the island). In superconducting quantum computing, a charge qubit is formed by a tiny superconducting island coupled by a Josephson junction (or practically, superconducting tunnel junction) to a superconducting reservoir (see figure). The state of the qubit is determined by the number of Cooper pairs that have tunneled across the junction. In contrast with the charge state of an atomic or molecular ion, the charge states of such an "island" involve a macroscopic number of conduction electrons of the island. The quantum superposition of charge states can be achieved by tuning the gate voltage "U" that controls the chemical potential of the island. The charge qubit is typically read-out by electrostatically coupling the island to an extremely sensitive electrometer such as the radio-frequency single-electron transistor.
Typical "T"2 coherence times for a charge qubit are on the order of 1–2 μs. Recent work has shown "T"2 times approaching 100 μs using a type of charge qubit known as a transmon inside a three-dimensional superconducting cavity. Understanding the limits of "T"2 is an active area of research in the field of superconducting quantum computing.
Fabrication.
Charge qubits are fabricated using techniques similar to those used for microelectronics. The devices are usually made on silicon or sapphire wafers using electron beam lithography (different from phase qubit, which uses photolithography) and metallic thin film evaporation processes. To create Josephson junctions, a technique known as shadow evaporation is normally used; this involves evaporating the source metal alternately at two angles through the lithography defined mask in the electron beam resist. This results in two overlapping layers of the superconducting metal, in between which a thin layer of insulator (normally aluminum oxide) is deposited.
Hamiltonian.
If the Josephson junction has a junction capacitance formula_0 and the gate capacitor has capacitance formula_1, then the charging (Coulomb) energy of one Cooper pair is:
formula_2
If formula_3 denotes the number of excess Cooper pairs in the island (i.e. its net charge is formula_4), then the Hamiltonian is:
formula_5
where formula_6 is a control parameter known as effective offset charge (formula_7 is the gate voltage), and formula_8 the Josephson energy of the tunneling junction.
At low temperature and low gate voltage, one can limit the analysis to only the lowest formula_9 and formula_10 states, and therefore obtain a two-level quantum system (a.k.a. qubit).
Note that some recent papers adopt a different notation, and define the charging energy as that of one electron:
formula_11
and then the corresponding Hamiltonian is:
formula_12
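A numerical sketch of the charge-basis Hamiltonian above (in the first convention), truncated to a finite number of charge states; the parameter values are arbitrary illustrative units, not those of any real device:
```python
import numpy as np

def charge_qubit_hamiltonian(EC, EJ, ng, n_max=5):
    """Charge-basis Hamiltonian, truncated to Cooper-pair numbers
    |n| <= n_max: diagonal charging terms EC*(n - ng)**2 and
    off-diagonal Josephson tunneling terms -EJ/2."""
    n = np.arange(-n_max, n_max + 1)
    H = np.diag(EC * (n - ng)**2)
    off = -0.5 * EJ * np.ones(len(n) - 1)
    return H + np.diag(off, 1) + np.diag(off, -1)

# Charging regime EC >> EJ (arbitrary illustrative units):
for ng in (0.0, 0.25, 0.5):
    E = np.linalg.eigvalsh(charge_qubit_hamiltonian(EC=1.0, EJ=0.1, ng=ng))
    print(f"ng = {ng:4}: two lowest levels {E[0]:+.4f}, {E[1]:+.4f}")

# At ng = 0.5 the n = 0 and n = 1 charge states would be degenerate;
# the Josephson coupling opens a gap of ~EJ, yielding a two-level qubit.
```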
Benefits.
To date, the realizations of qubits that have had the most success are ion traps and NMR, with Shor's algorithm even having been implemented using NMR. However, it is hard to see these two methods being scaled to the hundreds, thousands, or millions of qubits necessary to create a quantum computer. Solid-state representations of qubits are much more easily scalable, but they have their own problem: decoherence. Superconductors, however, have the advantage of being more easily scaled, and they are more coherent than typical solid-state systems.
Experimental progress.
The implementation of superconducting charge qubits has progressed quickly since 1996. The design was theoretically described in 1997 by Shnirman, while evidence of quantum coherence of the charge in a Cooper-pair box was published in February 1997 by Vincent Bouchiat et al. In 1999, coherent oscillations in the charge qubit were first observed by Nakamura et al. Manipulation of the quantum states and full realization of the charge qubit was observed 2 years later. In 2007, a more advanced device known as the transmon, showing enhanced coherence times due to its reduced sensitivity to charge noise, was developed at Yale University by Robert J. Schoelkopf, Michel Devoret, Steven M. Girvin and their colleagues.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C_{\\rm J}"
},
{
"math_id": 1,
"text": "C_{\\rm g}"
},
{
"math_id": 2,
"text": "E_{\\rm C}=(2e)^2/2(C_{\\rm g}+C_{\\rm J})."
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "-2ne"
},
{
"math_id": 5,
"text": "H=\\sum_n \\big[E_{\\rm C}(n-n_{\\rm g})^2 |n \\rangle \\langle n| - \\frac{1}{2} E_{\\rm J} (|n \\rangle \\langle n+1|+|n+1 \\rangle \\langle n|) \\big],"
},
{
"math_id": 6,
"text": "n_{\\rm g}=C_{\\rm g}V_{\\rm g}/(2e)"
},
{
"math_id": 7,
"text": "V_{\\rm g}"
},
{
"math_id": 8,
"text": "E_{\\rm J}"
},
{
"math_id": 9,
"text": "n=0"
},
{
"math_id": 10,
"text": "n=1"
},
{
"math_id": 11,
"text": "E_{\\rm C}=e^2/2(C_{\\rm g}+C_{\\rm J}),"
},
{
"math_id": 12,
"text": "H=\\sum_n \\big[4E_{\\rm C}(n-n_{\\rm g})^2 |n \\rangle \\langle n| - \\frac{1}{2} E_{\\rm J} (|n \\rangle \\langle n+1|+|n+1 \\rangle \\langle n|) \\big]."
}
] | https://en.wikipedia.org/wiki?curid=977983 |
9780066 | Euler summation | Summation method for some divergent series
In the mathematics of convergent and divergent series, Euler summation is a summation method. That is, it is a method for assigning a value to a series, different from the conventional method of taking limits of partial sums. Given a series Σ"a""n", if its Euler transform converges to a sum, then that sum is called the Euler sum of the original series. As well as being used to define values for divergent series, Euler summation can be used to speed the convergence of series.
Euler summation can be generalized into a family of methods denoted (E, "q"), where "q" ≥ 0. The (E, 1) sum is the ordinary Euler sum. All of these methods are strictly weaker than Borel summation; for "q" > 0 they are incomparable with Abel summation.
Definition.
For some value "y" we may define the Euler sum (if it converges for that value of "y") corresponding to a particular formal summation as:
formula_0
If all the formal sums actually converge, the Euler sum will equal the left hand side. However, using Euler summation can accelerate the convergence (this is especially useful for alternating series); sometimes it can also give a useful meaning to divergent sums.
To justify the approach, notice that for the interchanged sum, Euler's summation reduces to the initial series, because
formula_1
This method itself cannot be improved by iterated application, as
formula_2
Examples.
Applied to the formal geometric series, for instance, the transform gives
formula_8
With an appropriate choice of "y" (i.e. equal to or close to −1/"z") this series converges to 1/(1 − "z").
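A direct numerical sketch of the definition with "y" = 1 (the (E, 1) sum); it assigns the value 1/2 to the divergent series 1 − 1 + 1 − 1 + ⋯ and sharply accelerates the convergence of the alternating harmonic series for ln 2:
```python
from math import comb, log

def euler_sum(a, y=1.0, n_terms=30):
    """Partial Euler sum: sum over i of (1+y)**-(i+1) times
    sum_{j<=i} C(i,j) * y**(j+1) * a(j), per the definition above."""
    total = 0.0
    for i in range(n_terms):
        inner = sum(comb(i, j) * y**(j + 1) * a(j) for j in range(i + 1))
        total += inner / (1 + y)**(i + 1)
    return total

# Divergent series: 1 - 1 + 1 - 1 + ... receives the Euler sum 1/2.
print(euler_sum(lambda j: (-1)**j))                    # 0.5

# Convergence acceleration: the alternating harmonic series for ln 2.
# Thirty Euler terms are accurate to roughly 10 digits, whereas thirty
# raw partial sums are still off in the second digit.
print(euler_sum(lambda j: (-1)**j / (j + 1)), log(2))
```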
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " _{E_y}\\, \\sum_{j=0}^\\infty a_j := \\sum_{i=0}^\\infty \\frac{1}{(1+y)^{i+1}} \\sum_{j=0}^i \\binom{i}{j} y^{j+1} a_j ."
},
{
"math_id": 1,
"text": "y^{j+1}\\sum_{i=j}^\\infty \\binom{i}{j} \\frac{1}{(1+y)^{i+1}}=1."
},
{
"math_id": 2,
"text": " _{E_{y_1}} {}_{E_{y_2}}\\sum = \\, _{E_{\\frac{y_1 y_2}{1+y_1+y_2}}} \\sum."
},
{
"math_id": 3,
"text": "\\sum_{j=0}^\\infty (-1)^j P_k(j)"
},
{
"math_id": 4,
"text": "\\sum_{i=0}^k \\frac{1}{2^{i+1}} \\sum_{j=0}^i \\binom{i}{j} (-1)^j P_k(j),"
},
{
"math_id": 5,
"text": "P_k(j):= (j+1)^k"
},
{
"math_id": 6,
"text": "\\frac{B_{k+1}}{k+1}=-\\zeta(-k)"
},
{
"math_id": 7,
"text": "\\frac{1}{1-2^{k+1}}\\sum_{i=0}^k \\frac{1}{2^{i+1}} \\sum_{j=0}^i \\binom{i}{j} (-1)^j (j+1)^k "
},
{
"math_id": 8,
"text": "\\sum_{j=0}^\\infty z^j= \\sum_{i=0}^\\infty \\frac{1}{(1+y)^{i+1}} \\sum_{j=0}^i \\binom{i}{j} y^{j+1} z^j = \\frac{y}{1+y} \\sum_{i=0}^\\infty \\left( \\frac{1+yz}{1+y} \\right)^i"
}
] | https://en.wikipedia.org/wiki?curid=9780066 |
97807 | Jules Richard (mathematician) | French mathematician
Jules Richard (12 August 1862 – 14 October 1956) was a French mathematician who worked mainly in geometry but his name is most commonly associated with Richard's paradox.
Life and works.
Richard was born in Blet, in the Cher "département".
He taught at the lycées of Tours, Dijon and Châteauroux. He obtained his doctorate, at age of 39, from the Faculté des Sciences in Paris. His thesis of 126 pages concerns Fresnel's wave-surface. Richard worked mainly on the foundations of mathematics and geometry, relating to works by Hilbert, von Staudt and Méray.
In a more philosophical treatise about the nature of axioms of geometry Richard discusses and rejects the following basic principles:
The latter approach was essentially that proposed by Kant. Richard arrived at the result that the notion of identity of two objects and the invariability of an object are too vague and need to be specified more precisely. This should be done by axioms.
<templatestyles src="Template:Blockquote/styles.css" />Axioms are propositions, the task of which is to make precise the notion of identity of two objects pre-existing in our mind.
Further according to Richard, it is the aim of science to explain the material universe. And although non-Euclidean geometry had not found any applications (Albert Einstein finished his general theory of relativity only in 1915), Richard already stated clairvoyantly:
<templatestyles src="Template:Blockquote/styles.css" />One sees that having admitted the notion of angle, one is free to choose the notion of straight line in such a way that one or another of the three geometries is true.
Richard corresponded with Giuseppe Peano and Henri Poincaré. He became known to more than a small group of specialists by formulating his paradox, which was extensively used by Poincaré to attack set theory, whereupon the advocates of set theory had to refute these attacks.
He died in 1956 in Châteauroux, in the Indre "département", at the age of 94.
Richard's paradox.
The paradox was first stated in 1905 in a letter to Louis Olivier, director of the "Revue générale des sciences pures et appliquées". It was published in 1905 in the article "Les Principes des mathématiques et le problème des ensembles". The Principia Mathematica by Alfred North Whitehead and Bertrand Russell quotes it together with six other paradoxes concerning the problem of self-reference. In one of the most important compendia of mathematical logic, compiled by Jean van Heijenoort, Richard's article is translated into English. The paradox can be interpreted as an application of Cantor's diagonal argument. It inspired Kurt Gödel and Alan Turing to their famous works. Kurt Gödel considered his incompleteness theorem as analogous to Richard's paradox, which, in the original version, runs as follows:
Let "E" be the set of real numbers that can be defined by a finite number of words. This set is denumerable. Let "p" be the "n"th decimal of the "n"th number of the set "E"; we form a number "N" having zero for the integral part and "p" + 1 for the "n"th decimal, if "p" is not equal either to 8 or 9, and unity in the contrary case. This number "N" does not belong to the set "E" because it differs from any number of this set, namely from the "n"th number by the "n"th digit. But "N" has been defined by a finite number of words. It should therefore belong to the set "E". That is a contradiction.
Richard never presented his paradox in another form, but meanwhile there exist several different versions, some of which being only very loosely connected to the original. For the sake of completeness they may be stated here.
Other versions of Richard's paradox.
(A) The version given in Principia Mathematica by Whitehead and Russell is similar to Richard's original version, though not quite as exact. Here only the digit 9 is replaced by the digit 0, such that identities like 1.000... = 0.999... can spoil the result.
(B) Berry's Paradox, first mentioned in the Principia Mathematica as fifth of seven paradoxes, is credited to Mr. G. G. Berry of the Bodleian Library. It uses "the least integer not nameable in fewer than nineteen syllables"; in fact, in English it denotes 111,777. But "the least integer not nameable in fewer than nineteen syllables" is itself a name consisting of eighteen syllables; hence the least integer not nameable in fewer than nineteen syllables can be named in eighteen syllables, which is a contradiction.
(C) Berry's Paradox with letters instead of syllables is often related to the set of all natural numbers which can be defined by less than 100 (or any other large number) letters. As the natural numbers are a well-ordered set there must be "the least number which cannot be defined by less than 100 letters". But this number was just defined by 65 letters including spaces.
(D) König's Paradox was also published in 1905 by Julius König. All real numbers which can be defined by a finite number of words form a subset of the real numbers. If the real numbers can be well-ordered, then there must be a first real number (according to this order) which cannot be defined by a finite number of words. But "the first real number which cannot be defined by a finite number of words" has just been defined by a finite number of words.
(E) The smallest natural number without interesting properties acquires an interesting property by this very lack of any interesting properties.
Reactions to Richard's paradox.
Georg Cantor wrote in a letter to David Hilbert:
Here Cantor is in error. Today we know that there are uncountably many real numbers without the possibility of a finite definition.
Ernst Zermelo comments on Richard's argument:
Zermelo points to the reason why Richard's paradox fails. His last statement, however, is impossible to satisfy. A real number with infinitely many digits which are not determined by some "rule" has an infinitely large content of information. Such a number could only be identified by a short name if there were only one or a few of them. If there exist uncountably many, as is the case, an identification is impossible.
{
"math_id": 0,
"text": "\\aleph_0"
}
] | https://en.wikipedia.org/wiki?curid=97807 |
9783393 | Magnetic trap (atoms) | Use of magnetic fields to isolate particles or atoms
In experimental physics, a magnetic trap is an apparatus which uses a magnetic field gradient to trap neutral particles with magnetic moments. Although such traps have been employed for many purposes in physics research, they are best known as the last stage in cooling atoms to achieve Bose–Einstein condensation. The magnetic trap (as a way of trapping very cold atoms) was first proposed by David E. Pritchard.
Operating principle.
Many atoms have a magnetic moment; their energy shifts in a magnetic field according to the formula
formula_0.
According to the principles of quantum mechanics the magnetic moment of an atom will be quantized; that is, it will take on one of certain discrete values. If the atom is placed in a strong magnetic field, its magnetic moment will be aligned with the field. If a number of atoms are placed in the same field, they will be distributed over the various allowed values of the magnetic quantum number for that atom.
If a magnetic field gradient is superimposed on the uniform field, those atoms whose magnetic moments are aligned with the field will have lower energies in a higher field. Like a ball rolling down a hill, these atoms will tend to occupy locations with higher fields and are known as "high-field-seeking" atoms. Conversely, those atoms with magnetic moments aligned opposite the field will have higher energies in a higher field, tend to occupy locations with lower fields, and are called "low-field-seeking" atoms.
It is impossible to produce a local maximum of the magnetic-field magnitude in free space; however, a local minimum may be produced. This minimum can trap atoms which are low-field-seeking if they do not have enough kinetic energy to escape the minimum. Typically, magnetic traps have relatively shallow field minima and are only able to trap atoms whose kinetic energies correspond to temperatures of a fraction of a kelvin. The field minima required for magnetic trapping can be produced in a variety of ways. These include permanent magnet traps, Ioffe configuration traps, QUIC traps and others.
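A minimal numerical sketch of this trapping condition is given below; all values are assumed for illustration (the moment is taken as one Bohr magneton), and it is not a real trap design.

```python
# Illustrative sketch: potential energy U = -mu.B of atoms in a field whose
# magnitude has a local minimum (all numbers assumed for illustration).
import numpy as np

mu_B = 9.274e-24                      # Bohr magneton, J/T
z = np.linspace(-5e-3, 5e-3, 11)      # position along the trap axis, m
B = 0.01 + 40.0 * z**2                # field magnitude with a minimum at z = 0, T

U_low = +mu_B * B    # moment anti-aligned with B: low-field seeker
U_high = -mu_B * B   # moment aligned with B: high-field seeker

print(z[np.argmin(U_low)])   # 0.0: low-field seekers sit at the field minimum
print(z[np.argmin(U_high)])  # -0.005: high-field seekers run to the edge
```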
Microchip atom trap.
The minimum magnitude of the magnetic field can be realized with the "atom microchip".
One of the first microchip atomic traps is shown on the right. The Z-shaped conductor (actually the gold Z-shaped strip painted on the Si surface) is placed into the uniform magnetic field (the field's source is not shown in the figure). Only atoms with positive spin-field energy were trapped. To prevent the mixing of spin states, the external magnetic field was inclined in the plane of the chip, providing adiabatic rotation of the spin as the atom moves. To a first approximation, the magnitude (but not the orientation) of the magnetic field determines the effective energy of the trapped atom. The chip shown is 2 cm × 2 cm; this size was chosen for ease of manufacture. In principle, the size of such microchip traps can be drastically reduced. An array of such traps can be manufactured with conventional lithographic methods; such an array is considered a prototype of a q-bit memory cell for a quantum computer. Ways of transferring atoms and/or q-bits between traps are under development, using adiabatic optical control (with off-resonant frequencies) and/or electrical control (with additional electrodes).
Applications in Bose–Einstein condensation.
Bose–Einstein condensation (BEC) requires conditions of very low density and very low temperature in a gas of atoms. Laser cooling in a magneto-optical trap (MOT) is typically used to cool atoms down to the microkelvin range. However, laser cooling is limited by the momentum recoil an atom receives from single photons. Achieving BEC requires cooling the atoms beyond the limits of laser cooling, which means the lasers used in the MOT must be turned off and a new method of trapping devised. Magnetic traps have been used to hold very cold atoms, while evaporative cooling has reduced the temperature of the atoms enough to reach BEC.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta E = - \\vec{\\mu} \\cdot \\vec{B}"
}
] | https://en.wikipedia.org/wiki?curid=9783393 |
97835 | Zener diode | Diode that allows current to flow in the reverse direction at a specific voltage
A Zener diode is a special type of diode designed to reliably allow current to flow "backwards" (inverted polarity) when a certain set reverse voltage, known as the Zener voltage, is reached.
Zener diodes are manufactured with a great variety of Zener voltages and some are even variable. Some Zener diodes have an abrupt, heavily doped p–n junction with a low Zener voltage, in which case the reverse conduction occurs due to electron quantum tunnelling in the short distance between p and n regions; this is known as the Zener effect, after Clarence Zener. Diodes with a higher Zener voltage have more lightly doped junctions, which causes their mode of operation to involve avalanche breakdown. Both breakdown types are present in Zener diodes, with the Zener effect predominating at lower voltages and avalanche breakdown at higher voltages.
They are used to generate low-power stabilized supply rails from a higher voltage and to provide reference voltages for circuits, especially stabilized power supplies. They are also used to protect circuits from overvoltage, especially electrostatic discharge.
History.
The device is named after American physicist Clarence Zener who first described the Zener effect in 1934 in his primarily theoretical studies of breakdown of electrical insulator properties. Later, his work led to the Bell Labs implementation of the effect in form of an electronic device, the Zener diode.
Operation.
A conventional solid-state diode allows significant current if it is reverse-biased above its reverse breakdown voltage. When the reverse bias breakdown voltage is exceeded, a conventional diode will conduct a high current due to avalanche breakdown. Unless this current is limited by external circuits, the diode may be permanently damaged due to overheating. A Zener diode exhibits almost the same properties, except the device is specially designed so as to have a reduced breakdown voltage, the so-called Zener voltage. By contrast with the conventional device, a reverse-biased Zener diode exhibits a controlled breakdown and allows the current to keep the voltage across the Zener diode close to the Zener breakdown voltage. For example, a diode with a Zener breakdown voltage of 3.2 V exhibits a voltage drop of very nearly 3.2 V across a wide range of reverse currents. The Zener diode is therefore well suited for applications such as the generation of a reference voltage (e.g. for an amplifier stage), or as a voltage stabilizer for low-current applications.
Another mechanism that produces a similar effect is the avalanche effect as in the avalanche diode. The two types of diode are in fact constructed in a similar way and both effects are present in diodes of this type. In silicon diodes up to about 5.6 volts, the Zener effect is the predominant effect and shows a marked negative temperature coefficient. Above 5.6 volts, the avalanche effect dominates and exhibits a positive temperature coefficient.
In a 5.6 V diode, the two effects occur together, and their temperature coefficients nearly cancel each other out, thus the 5.6 V diode is useful in temperature-critical applications. An alternative, which is used for voltage references that need to be highly stable over long periods of time, is to use a Zener diode with a temperature coefficient (TC) of +2 mV/°C (breakdown voltage 6.2–6.3 V) connected in series with a forward-biased silicon diode (or a transistor B-E junction) manufactured on the same chip. The forward-biased diode has a temperature coefficient of −2 mV/°C, causing the TCs to cancel out for a net nearly zero temperature coefficient.
It is also worth noting that the temperature coefficient of a 4.7 V Zener diode is close to that of the emitter-base junction of a silicon transistor, at around −2 mV/°C, so in a simple regulating circuit where the 4.7 V diode sets the voltage at the base of an NPN transistor (the two temperature coefficients track each other), the emitter will sit at around 4 V and remain quite stable with temperature.
Modern designs have produced devices with voltages lower than 5.6 V with negligible temperature coefficients. Higher voltage devices have a temperature coefficient that is approximately proportional to the amount by which the breakdown voltage exceeds 5 V. Thus a 75 V diode has 10 times the coefficient of a 12 V diode.
Zener and avalanche diodes, regardless of breakdown voltage, are usually marketed under the umbrella term of "Zener diode".
Under 5.6 V, where the Zener effect dominates, the IV curve near breakdown is much more rounded, which calls for more care in choosing its biasing conditions. The IV curve for Zeners above 5.6 V (being dominated by avalanche) is much sharper at breakdown.
Construction.
The Zener diode's operation depends on the heavy doping of its p–n junction. The depletion region formed in the diode is very thin (<1 μm) and the electric field is consequently very high (about 500 kV/m) even for a small reverse bias voltage of about 5 V, allowing electrons to tunnel from the valence band of the p-type material to the conduction band of the n-type material.
At the atomic scale, this tunneling corresponds to the transport of valence band electrons into the empty conduction band states; as a result of the reduced barrier between these bands and high electric fields that are induced due to the high levels of doping on both sides. The breakdown voltage can be controlled quite accurately by the doping process. Adding impurities, or doping, changes the behaviour of the semiconductor material in the diode. In the case of Zener diodes, this heavy doping creates a situation where the diode can operate in the breakdown region. While tolerances within 0.07% are available, commonly available tolerances are 5% and 10%. Breakdown voltage for commonly available Zener diodes can vary from 1.2 V to 200 V.
For diodes that are lightly doped, the breakdown is dominated by the avalanche effect rather than the Zener effect. Consequently, the breakdown voltage is higher (over 5.6 V) for these devices.
Surface Zeners.
The emitter-base junction of a bipolar NPN transistor behaves as a Zener diode, with breakdown voltage at about 6.8 V for common bipolar processes and about 10 V for lightly doped base regions in BiCMOS processes. Older processes with poor control of doping characteristics had a variation of Zener voltage of up to ±1 V; newer processes using ion implantation can achieve no more than ±0.25 V. The NPN transistor structure can be employed as a "surface Zener diode", with collector and emitter connected together as its cathode and base region as anode. In this approach the base doping profile usually narrows towards the surface, creating a region with intensified electric field where the avalanche breakdown occurs. Hot carriers produced by acceleration in the intense field can inject into the oxide layer above the junction and become trapped there. The accumulation of trapped charges can then cause 'Zener walkout', a corresponding change of the Zener voltage of the junction. The same effect can be achieved by radiation damage.
The emitter-base Zener diodes can handle only low currents as the energy is dissipated in the base depletion region, which is very small. Higher amounts of dissipated energy (higher current for longer time, or a short very high current spike) cause thermal damage to the junction and/or its contacts. Partial damage of the junction can shift its Zener voltage. Total destruction of the Zener junction by overheating it and causing migration of metallization across the junction ("spiking") can be used intentionally as a 'Zener zap' antifuse.
Subsurface Zeners.
A subsurface Zener diode, also called 'buried Zener', is a device similar to the surface Zener, but the doping and design is such that the avalanche region is located deeper in the structure, typically several micrometers below the oxide. Hot carriers then lose energy by collisions with the semiconductor lattice before reaching the oxide layer and cannot be trapped there. The Zener walkout phenomenon therefore does not occur here, and the buried Zeners have stable voltage over their entire lifetime. Most buried Zeners have breakdown voltage of 5–7 volts. Several different junction structures are used.
Uses.
Zener diodes are widely used as voltage references and as shunt regulators to regulate the voltage across small circuits. When connected in parallel with a variable voltage source so that it is reverse biased, a Zener diode conducts when the voltage reaches the diode's reverse breakdown voltage. From that point on, the low impedance of the diode keeps the voltage across the diode at that value.
In this circuit, a typical voltage reference or regulator, an input voltage, "U"in (with + on the top), is regulated down to a stable output voltage "U"out. The breakdown voltage of diode D is stable over a wide current range and holds "U"out approximately constant even though the input voltage may fluctuate over a wide range. Because of the low impedance of the diode when operated like this, resistor "R" is used to limit current through the circuit.
In the case of this simple reference, the current flowing in the diode is determined using Ohm's law and the known voltage drop across the resistor "R";
formula_0
The value of "R" must satisfy two conditions:
A load may be placed across the diode in this reference circuit, and as long as the Zener stays in reverse breakdown, the diode provides a stable voltage source to the load. Zener diodes in this configuration are often used as stable references for more advanced voltage regulator circuits.
Shunt regulators are simple, but the requirement that the ballast resistor be small enough to avoid excessive voltage drop during worst-case operation (low input voltage concurrent with high load current) tends to leave a lot of current flowing in the diode much of the time, making for a fairly wasteful regulator with high quiescent power dissipation, suitable only for smaller loads.
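As a sketch of the sizing logic (all component values and datasheet limits below are assumed for illustration), the current balance formula_0 and the two conditions on "R" can be checked directly; a worst-case load current drawn from the output node is included here.

```python
# Sketch of the sizing check for a Zener shunt regulator (component values
# and datasheet limits are assumed for illustration).
U_in, U_out = 12.0, 5.1        # supply voltage and Zener voltage, V
R = 220.0                      # ballast resistor, ohms
I_load = 0.010                 # worst-case load current, A
I_z_min = 0.005                # assumed minimum current to stay in breakdown, A
P_max = 0.5                    # assumed maximum power dissipation, W

I_R = (U_in - U_out) / R       # current through the ballast resistor
I_diode = I_R - I_load         # remainder flows through the Zener diode

assert I_diode >= I_z_min, "R too large: diode drops out of breakdown"
assert I_diode * U_out <= P_max, "R too small: power rating exceeded"
print(round(I_R * 1e3, 1), "mA through R,", round(I_diode * 1e3, 1), "mA in diode")
```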
These devices are also encountered, typically in series with a base-emitter junction, in transistor stages where selective choice of a device centered on the avalanche or Zener point can be used to introduce compensating temperature co-efficient balancing of the transistor p–n junction. An example of this kind of use would be a DC error amplifier used in a regulated power supply circuit feedback loop system.
Zener diodes are also used in surge protectors to limit transient voltage spikes.
Noise generator.
Another application of the Zener diode is the use of its avalanche breakdown noise, which for instance can be used for dithering in an analog-to-digital converter, at an rms level equivalent to 1⁄3 to 1 lsb, or to create a random number generator.
Waveform clipper.
Two Zener diodes facing each other in series clip both halves of an input signal. Waveform clippers can be used not only to reshape a signal, but also to prevent voltage spikes from affecting circuits that are connected to the power supply.
Voltage shifter.
A Zener diode can be applied in a circuit with a resistor to act as a voltage shifter. This circuit lowers the output voltage by an amount equal to the Zener diode's breakdown voltage.
Voltage regulator.
A Zener diode can be applied in a voltage regulator circuit to regulate the voltage applied to a load, such as in a linear regulator.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I_\\text{diode} = \\frac{U_\\text{in} - U_\\text{out}}{R}"
},
{
"math_id": 1,
"text": "I_D V_B < P_\\text{max}"
}
] | https://en.wikipedia.org/wiki?curid=97835 |
97848 | Row and column spaces | Vector spaces associated to a matrix
In linear algebra, the column space (also called the range or image) of a matrix "A" is the span (set of all possible linear combinations) of its column vectors. The column space of a matrix is the image or range of the corresponding matrix transformation.
Let formula_0 be a field. The column space of an "m" × "n" matrix with components from formula_0 is a linear subspace of the "m"-space formula_1. The dimension of the column space is called the rank of the matrix and is at most min("m", "n"). A definition for matrices over a ring formula_2 is also possible.
The row space is defined similarly.
The row space and the column space of a matrix A are sometimes denoted as C("A"T) and C("A") respectively.
This article considers matrices of real numbers. The row and column spaces are subspaces of the real spaces formula_3 and formula_4 respectively.
Overview.
Let A be an m-by-n matrix. Then the row space and the column space of A have the same dimension, called the rank of A; equivalently, the rank is the maximum number of linearly independent rows of A, and also the maximum number of linearly independent columns of A.
If the matrix represents a linear transformation, the column space of the matrix equals the image of this linear transformation.
The column space of a matrix A is the set of all linear combinations of the columns in A. If "A" = [a1 ⋯ a"n"], then colsp("A") = span({a1, ..., a"n"}).
Given a matrix A, the action of the matrix A on a vector x returns a linear combination of the columns of A with the coordinates of x as coefficients; that is, the columns of the matrix generate the column space.
Example.
Given a matrix J:
formula_5
the rows are
formula_6,
formula_7,
formula_8,
formula_9.
Consequently, the row space of J is the subspace of formula_10 spanned by {r1, r2, r3, r4}.
Since these four row vectors are linearly independent, the row space is 4-dimensional. Moreover, in this case it can be seen that they are all orthogonal to the vector n = [6, −1, 4, −4, 0] (n is an element of the kernel of J), so it can be deduced that the row space consists of all vectors in formula_10 that are orthogonal to n.
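Both observations are easy to confirm numerically, assuming NumPy is available:

```python
# Verifying the claims about J: its rank is 4 and n lies in its kernel.
import numpy as np

J = np.array([[ 2,  4, 1, 3, 2],
              [-1, -2, 1, 0, 5],
              [ 1,  6, 2, 2, 2],
              [ 3,  6, 2, 5, 1]])
n = np.array([6, -1, 4, -4, 0])

print(np.linalg.matrix_rank(J))   # 4: the rows are linearly independent
print(J @ n)                      # [0 0 0 0]: n is in the kernel of J
```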
Column space.
Definition.
Let K be a field of scalars. Let A be an "m" × "n" matrix, with column vectors v1, v2, ..., v"n". A linear combination of these vectors is any vector of the form
formula_11
where "c"1, "c"2, ..., "cn" are scalars. The set of all possible linear combinations of v1, ..., v"n" is called the column space of A. That is, the column space of A is the span of the vectors v1, ..., v"n".
Any linear combination of the column vectors of a matrix A can be written as the product of A with a column vector:
formula_12
Therefore, the column space of A consists of all possible products "A"x, for x ∈ "K""n". This is the same as the image (or range) of the corresponding matrix transformation.
Example.
If formula_13, then the column vectors are v1 = [1, 0, 2]T and v2 = [0, 1, 0]T.
A linear combination of v1 and v2 is any vector of the form
formula_14
The set of all such vectors is the column space of A. In this case, the column space is precisely the set of vectors ("x", "y", "z") ∈ R3 satisfying the equation "z" = 2"x" (using Cartesian coordinates, this set is a plane through the origin in three-dimensional space).
Basis.
The columns of A span the column space, but they may not form a basis if the column vectors are not linearly independent. Fortunately, elementary row operations do not affect the dependence relations between the column vectors. This makes it possible to use row reduction to find a basis for the column space.
For example, consider the matrix
formula_15
The columns of this matrix span the column space, but they may not be linearly independent, in which case some subset of them will form a basis. To find this basis, we reduce A to reduced row echelon form:
formula_16
At this point, it is clear that the first, second, and fourth columns are linearly independent, while the third column is a linear combination of the first two. (Specifically, v3 = −2v1 + v2.) Therefore, the first, second, and fourth columns of the original matrix are a basis for the column space:
formula_17
Note that the independent columns of the reduced row echelon form are precisely the columns with pivots. This makes it possible to determine which columns are linearly independent by reducing only to echelon form.
The above algorithm can be used in general to find the dependence relations between any set of vectors, and to pick out a basis from any spanning set. Also finding a basis for the column space of A is equivalent to finding a basis for the row space of the transpose matrix "A"T.
To find the basis in a practical setting (e.g., for large matrices), the singular-value decomposition is typically used.
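As a sketch of both approaches (assuming SymPy and NumPy are available): exact pivot columns via the reduced row echelon form, and a numerically stable basis for the same column space via the SVD.

```python
# Sketch: exact pivot columns via RREF (SymPy) and a numerically stable
# basis of the same column space via the SVD (NumPy).
import numpy as np
import sympy as sp

A = sp.Matrix([[1, 3, 1, 4],
               [2, 7, 3, 9],
               [1, 5, 3, 1],
               [1, 2, 0, 8]])
_, pivots = A.rref()
print(pivots)    # (0, 1, 3): the first, second and fourth columns form a basis

U, s, Vt = np.linalg.svd(np.array(A.tolist(), dtype=float))
r = int(np.sum(s > 1e-10))     # numerical rank
print(r)                       # 3; the first r columns of U span colsp(A)
```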
Dimension.
The dimension of the column space is called the rank of the matrix. The rank is equal to the number of pivots in the reduced row echelon form, and is the maximum number of linearly independent columns that can be chosen from the matrix. For example, the 4 × 4 matrix in the example above has rank three.
Because the column space is the image of the corresponding matrix transformation, the rank of a matrix is the same as the dimension of the image. For example, the transformation formula_18 described by the matrix above maps all of formula_19 to some three-dimensional subspace.
The nullity of a matrix is the dimension of the null space, and is equal to the number of columns in the reduced row echelon form that do not have pivots. The rank and nullity of a matrix A with n columns are related by the equation:
formula_20
This is known as the rank–nullity theorem.
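On the example matrix above, the theorem can be checked in one line, assuming SymPy:

```python
# Checking rank(A) + nullity(A) = n on the example matrix, with SymPy.
import sympy as sp

A = sp.Matrix([[1, 3, 1, 4],
               [2, 7, 3, 9],
               [1, 5, 3, 1],
               [1, 2, 0, 8]])
assert A.rank() + len(A.nullspace()) == A.cols   # 3 + 1 == 4
```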
Relation to the left null space.
The left null space of A is the set of all vectors x such that xT"A" = 0T. It is the same as the null space of the transpose of A. The product of the matrix "A"T and the vector x can be written in terms of the dot product of vectors:
formula_21
because row vectors of "A"T are transposes of column vectors v"k" of A. Thus "A"Tx = 0 if and only if x is orthogonal (perpendicular) to each of the column vectors of A.
It follows that the left null space (the null space of "A"T) is the orthogonal complement to the column space of A.
For a matrix A, the column space, row space, null space, and left null space are sometimes referred to as the "four fundamental subspaces".
For matrices over a ring.
Similarly the column space (sometimes disambiguated as "right" column space) can be defined for matrices over a ring K as
formula_22
for any "c"1, ..., "cn", with replacement of the vector m-space with "right free module", which changes the order of scalar multiplication of the vector v"k" to the scalar "ck" such that it is written in an unusual order "vector"–"scalar".
Row space.
Definition.
Let K be a field of scalars. Let A be an "m" × "n" matrix, with row vectors r1, r2, ..., r"m". A linear combination of these vectors is any vector of the form
formula_23
where "c"1, "c"2, ..., "cm" are scalars. The set of all possible linear combinations of r1, ..., r"m" is called the row space of A. That is, the row space of A is the span of the vectors r1, ..., r"m".
For example, if
formula_24
then the row vectors are r1 = [1, 0, 2] and r2 = [0, 1, 0]. A linear combination of r1 and r2 is any vector of the form
formula_25
The set of all such vectors is the row space of A. In this case, the row space is precisely the set of vectors ("x", "y", "z") ∈ "K"3 satisfying the equation "z" = 2"x" (using Cartesian coordinates, this set is a plane through the origin in three-dimensional space).
For a matrix that represents a homogeneous system of linear equations, the row space consists of all linear equations that follow from those in the system.
The column space of A is equal to the row space of "A"T.
Basis.
The row space is not affected by elementary row operations. This makes it possible to use row reduction to find a basis for the row space.
For example, consider the matrix
formula_26
The rows of this matrix span the row space, but they may not be linearly independent, in which case the rows will not be a basis. To find a basis, we reduce A to row echelon form:
where r1, r2, r3 represent the rows:
formula_27
Once the matrix is in echelon form, the nonzero rows are a basis for the row space. In this case, the basis is {[1, 3, 2], [2, 7, 4]}. Another possible basis {[1, 0, 2], [0, 1, 0]} comes from a further reduction.
This algorithm can be used in general to find a basis for the span of a set of vectors. If the matrix is further simplified to reduced row echelon form, then the resulting basis is uniquely determined by the row space.
It is sometimes convenient to find a basis for the row space from among the rows of the original matrix instead (for example, this result is useful in giving an elementary proof that the determinantal rank of a matrix is equal to its rank). Since row operations can affect linear dependence relations of the row vectors, such a basis is instead found indirectly using the fact that the column space of "A"T is equal to the row space of A. Using the example matrix A above, find "A"T and reduce it to row echelon form:
formula_28
The pivots indicate that the first two columns of "A"T form a basis of the column space of "A"T. Therefore, the first two rows of A (before any row reductions) also form a basis of the row space of A.
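A short sketch of this indirect method on the example matrix, assuming SymPy: the pivot columns of "A"T pick out which original rows of A to keep.

```python
# Picking a row-space basis from the original rows of A via pivots of A^T.
import sympy as sp

A = sp.Matrix([[1, 3, 2],
               [2, 7, 4],
               [1, 5, 2]])
_, pivots = A.T.rref()             # pivot columns of A^T index rows of A
basis = [A.row(i) for i in pivots]
print(basis)                       # [Matrix([[1, 3, 2]]), Matrix([[2, 7, 4]])]
```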
Dimension.
The dimension of the row space is called the rank of the matrix. This is the same as the maximum number of linearly independent rows that can be chosen from the matrix, or equivalently the number of pivots. For example, the 3 × 3 matrix in the example above has rank two.
The rank of a matrix is also equal to the dimension of the column space. The dimension of the null space is called the nullity of the matrix, and is related to the rank by the following equation:
formula_29
where n is the number of columns of the matrix A. The equation above is known as the rank–nullity theorem.
Relation to the null space.
The null space of matrix A is the set of all vectors x for which "A"x = 0. The product of the matrix A and the vector x can be written in terms of the dot product of vectors:
formula_30
where r1, ..., r"m" are the row vectors of A. Thus "A"x = 0 if and only if x is orthogonal (perpendicular) to each of the row vectors of A.
It follows that the null space of A is the orthogonal complement to the row space. For example, if the row space is a plane through the origin in three dimensions, then the null space will be the perpendicular line through the origin. This provides a proof of the rank–nullity theorem (see dimension above).
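Assuming SymPy, one can confirm on the example that every null space vector is annihilated by A, and hence orthogonal to each row:

```python
# Every null space vector is annihilated by A, hence orthogonal to each row.
import sympy as sp

A = sp.Matrix([[1, 3, 2],
               [2, 7, 4],
               [1, 5, 2]])
for x in A.nullspace():
    assert A * x == sp.zeros(3, 1)   # r_i . x = 0 for every row r_i
```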
The row space and null space are two of the four fundamental subspaces associated with a matrix A (the other two being the column space and left null space).
Relation to coimage.
If V and W are vector spaces, then the kernel of a linear transformation "T": "V" → "W" is the set of vectors v ∈ "V" for which "T"(v) = 0. The kernel of a linear transformation is analogous to the null space of a matrix.
If V is an inner product space, then the orthogonal complement to the kernel can be thought of as a generalization of the row space. This is sometimes called the coimage of T. The transformation T is one-to-one on its coimage, and the coimage maps isomorphically onto the image of T.
When V is not an inner product space, the coimage of T can be defined as the quotient space "V" / ker("T").
References & Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F"
},
{
"math_id": 1,
"text": "F^m"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "\\R^n"
},
{
"math_id": 4,
"text": "\\R^m"
},
{
"math_id": 5,
"text": "\n J =\n \\begin{bmatrix}\n 2 & 4 & 1 & 3 & 2\\\\\n -1 & -2 & 1 & 0 & 5\\\\\n 1 & 6 & 2 & 2 & 2\\\\\n 3 & 6 & 2 & 5 & 1\n \\end{bmatrix}\n"
},
{
"math_id": 6,
"text": "\\mathbf{r}_1 = \\begin{bmatrix} 2 & 4 & 1 & 3 & 2 \\end{bmatrix}"
},
{
"math_id": 7,
"text": "\\mathbf{r}_2 = \\begin{bmatrix} -1 & -2 & 1 & 0 & 5 \\end{bmatrix}"
},
{
"math_id": 8,
"text": "\\mathbf{r}_3 = \\begin{bmatrix} 1 & 6 & 2 & 2 & 2 \\end{bmatrix}"
},
{
"math_id": 9,
"text": "\\mathbf{r}_4 = \\begin{bmatrix} 3 & 6 & 2 & 5 & 1 \\end{bmatrix}"
},
{
"math_id": 10,
"text": "\\R^5"
},
{
"math_id": 11,
"text": "c_1 \\mathbf{v}_1 + c_2 \\mathbf{v}_2 + \\cdots + c_n \\mathbf{v}_n,"
},
{
"math_id": 12,
"text": "\\begin{array} {rcl}\nA \\begin{bmatrix} c_1 \\\\ \\vdots \\\\ c_n \\end{bmatrix} \n& = & \\begin{bmatrix} a_{11} & \\cdots & a_{1n} \\\\ \\vdots & \\ddots & \\vdots \\\\ a_{m1} & \\cdots & a_{mn} \\end{bmatrix} \\begin{bmatrix} c_1 \\\\ \\vdots \\\\ c_n \\end{bmatrix}\n= \\begin{bmatrix} c_1 a_{11} + \\cdots + c_{n} a_{1n} \\\\ \\vdots \\\\ c_{1} a_{m1} + \\cdots + c_{n} a_{mn} \\end{bmatrix} = c_1 \\begin{bmatrix} a_{11} \\\\ \\vdots \\\\ a_{m1} \\end{bmatrix} + \\cdots + c_n \\begin{bmatrix} a_{1n} \\\\ \\vdots \\\\ a_{mn} \\end{bmatrix} \\\\\n& = & c_1 \\mathbf{v}_1 + \\cdots + c_n \\mathbf{v}_n\n\\end{array}"
},
{
"math_id": 13,
"text": "A = \\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\\\ 2 & 0 \\end{bmatrix}"
},
{
"math_id": 14,
"text": "c_1 \\begin{bmatrix} 1 \\\\ 0 \\\\ 2 \\end{bmatrix} + c_2 \\begin{bmatrix} 0 \\\\ 1 \\\\ 0 \\end{bmatrix} = \\begin{bmatrix} c_1 \\\\ c_2 \\\\ 2c_1 \\end{bmatrix}"
},
{
"math_id": 15,
"text": "A = \\begin{bmatrix} 1 & 3 & 1 & 4 \\\\ 2 & 7 & 3 & 9 \\\\ 1 & 5 & 3 & 1 \\\\ 1 & 2 & 0 & 8 \\end{bmatrix}."
},
{
"math_id": 16,
"text": "\\begin{bmatrix} 1 & 3 & 1 & 4 \\\\ 2 & 7 & 3 & 9 \\\\ 1 & 5 & 3 & 1 \\\\ 1 & 2 & 0 & 8 \\end{bmatrix}\n\\sim \\begin{bmatrix} 1 & 3 & 1 & 4 \\\\ 0 & 1 & 1 & 1 \\\\ 0 & 2 & 2 & -3 \\\\ 0 & -1 & -1 & 4 \\end{bmatrix}\n\\sim \\begin{bmatrix} 1 & 0 & -2 & 1 \\\\ 0 & 1 & 1 & 1 \\\\ 0 & 0 & 0 & -5 \\\\ 0 & 0 & 0 & 5 \\end{bmatrix}\n\\sim \\begin{bmatrix} 1 & 0 & -2 & 0 \\\\ 0 & 1 & 1 & 0 \\\\ 0 & 0 & 0 & 1 \\\\ 0 & 0 & 0 & 0 \\end{bmatrix}."
},
{
"math_id": 17,
"text": "\\begin{bmatrix} 1 \\\\ 2 \\\\ 1 \\\\ 1\\end{bmatrix},\\;\\;\n\\begin{bmatrix} 3 \\\\ 7 \\\\ 5 \\\\ 2\\end{bmatrix},\\;\\;\n\\begin{bmatrix} 4 \\\\ 9 \\\\ 1 \\\\ 8\\end{bmatrix}."
},
{
"math_id": 18,
"text": "\\R^4 \\to \\R^4"
},
{
"math_id": 19,
"text": "\\R^4"
},
{
"math_id": 20,
"text": "\\operatorname{rank}(A) + \\operatorname{nullity}(A) = n.\\,"
},
{
"math_id": 21,
"text": "A^\\mathsf{T}\\mathbf{x} = \\begin{bmatrix} \\mathbf{v}_1 \\cdot \\mathbf{x} \\\\ \\mathbf{v}_2 \\cdot \\mathbf{x} \\\\ \\vdots \\\\ \\mathbf{v}_n \\cdot \\mathbf{x} \\end{bmatrix},"
},
{
"math_id": 22,
"text": "\\sum\\limits_{k=1}^n \\mathbf{v}_k c_k"
},
{
"math_id": 23,
"text": "c_1 \\mathbf{r}_1 + c_2 \\mathbf{r}_2 + \\cdots + c_m \\mathbf{r}_m,"
},
{
"math_id": 24,
"text": "A = \\begin{bmatrix} 1 & 0 & 2 \\\\ 0 & 1 & 0 \\end{bmatrix},"
},
{
"math_id": 25,
"text": "c_1 \\begin{bmatrix}1 & 0 & 2\\end{bmatrix} + c_2 \\begin{bmatrix}0 & 1 & 0\\end{bmatrix} = \\begin{bmatrix}c_1 & c_2 & 2c_1\\end{bmatrix}."
},
{
"math_id": 26,
"text": "A = \\begin{bmatrix} 1 & 3 & 2 \\\\ 2 & 7 & 4 \\\\ 1 & 5 & 2\\end{bmatrix}."
},
{
"math_id": 27,
"text": "\n\\begin{align}\n\\begin{bmatrix} 1 & 3 & 2 \\\\ 2 & 7 & 4 \\\\ 1 & 5 & 2\\end{bmatrix}\n&\\xrightarrow{\\mathbf{r}_2-2\\mathbf{r}_1 \\to \\mathbf{r}_2}\n\\begin{bmatrix} 1 & 3 & 2 \\\\ 0 & 1 & 0 \\\\ 1 & 5 & 2\\end{bmatrix}\n\\xrightarrow{\\mathbf{r}_3-\\,\\,\\mathbf{r}_1 \\to \\mathbf{r}_3}\n\\begin{bmatrix} 1 & 3 & 2 \\\\ 0 & 1 & 0 \\\\ 0 & 2 & 0\\end{bmatrix} \\\\\n&\\xrightarrow{\\mathbf{r}_3-2\\mathbf{r}_2 \\to \\mathbf{r}_3}\n\\begin{bmatrix} 1 & 3 & 2 \\\\ 0 & 1 & 0 \\\\ 0 & 0 & 0\\end{bmatrix}\n\\xrightarrow{\\mathbf{r}_1-3\\mathbf{r}_2 \\to \\mathbf{r}_1}\n\\begin{bmatrix} 1 & 0 & 2 \\\\ 0 & 1 & 0 \\\\ 0 & 0 & 0\\end{bmatrix}.\n\\end{align}\n"
},
{
"math_id": 28,
"text": "\nA^{\\mathrm{T}} = \\begin{bmatrix} 1 & 2 & 1 \\\\ 3 & 7 & 5 \\\\ 2 & 4 & 2\\end{bmatrix} \\sim \n\\begin{bmatrix} 1 & 2 & 1 \\\\ 0 & 1 & 2 \\\\ 0 & 0 & 0\\end{bmatrix}. \n"
},
{
"math_id": 29,
"text": "\\operatorname{rank}(A) + \\operatorname{nullity}(A) = n,"
},
{
"math_id": 30,
"text": "A\\mathbf{x} = \\begin{bmatrix} \\mathbf{r}_1 \\cdot \\mathbf{x} \\\\ \\mathbf{r}_2 \\cdot \\mathbf{x} \\\\ \\vdots \\\\ \\mathbf{r}_m \\cdot \\mathbf{x} \\end{bmatrix},"
}
] | https://en.wikipedia.org/wiki?curid=97848 |
978611 | Joint quantum entropy | Measure of information in quantum information theory
The joint quantum entropy generalizes the classical joint entropy to the context of quantum information theory. Intuitively, given two quantum states formula_0 and formula_1, represented as density operators that are subparts of a quantum system, the joint quantum entropy is a measure of the total uncertainty or entropy of the joint system. It is written formula_2 or formula_3, depending on the notation being used for the von Neumann entropy. Like other entropies, the joint quantum entropy is measured in bits, i.e. the logarithm is taken in base 2.
In this article, we will use formula_2 for the joint quantum entropy.
Background.
In information theory, for any classical random variable formula_4, the classical Shannon entropy formula_5 is a measure of how uncertain we are about the outcome of formula_4. For example, if formula_4 is a probability distribution concentrated at one point, the outcome of formula_4 is certain and therefore its entropy formula_6. At the other extreme, if formula_4 is the uniform probability distribution with formula_7 possible values, intuitively one would expect formula_4 is associated with the most uncertainty. Indeed, such uniform probability distributions have maximum possible entropy formula_8.
In quantum information theory, the notion of entropy is extended from probability distributions to quantum states, or density matrices. For a state formula_0, the von Neumann entropy is defined by
formula_9
Applying the spectral theorem, or Borel functional calculus for infinite dimensional systems, we see that it generalizes the classical entropy. The physical meaning remains the same. A maximally mixed state, the quantum analog of the uniform probability distribution, has maximum von Neumann entropy. On the other hand, a pure state, or a rank one projection, will have zero von Neumann entropy. We write the von Neumann entropy as formula_10 (or sometimes formula_11).
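As a sketch (assuming NumPy), the entropy can be computed from the eigenvalues of the density matrix, with the convention 0 log 0 = 0; a pure state gives 0 and the maximally mixed qubit gives the maximum of 1 bit.

```python
# Sketch: S(rho) = -Tr(rho log2 rho) computed from eigenvalues, in bits.
import numpy as np

def von_neumann_entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                      # convention: 0 log 0 = 0
    return float(-np.sum(p * np.log2(p)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # rank-one projection
mixed = np.eye(2) / 2                       # maximally mixed qubit
print(von_neumann_entropy(pure))    # 0.0
print(von_neumann_entropy(mixed))   # 1.0
```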
Definition.
Given a quantum system with two subsystems "A" and "B", the term joint quantum entropy simply refers to the von Neumann entropy of the combined system. This is to distinguish from the entropy of the subsystems.
In symbols, if the combined system is in state formula_12,
the joint quantum entropy is then
formula_13
Each subsystem has its own entropy. The state of the subsystems are given by the partial trace operation.
Properties.
The classical joint entropy is always at least equal to the entropy of each individual system. This is not the case for the joint quantum entropy. If the quantum state formula_12 exhibits quantum entanglement, then the entropy of each subsystem may be larger than the joint entropy. This is equivalent to the fact that the conditional quantum entropy may be negative, while the classical conditional entropy may never be.
Consider a maximally entangled state such as a Bell state. If formula_12 is a Bell state, say,
formula_14
then the total system is a pure state, with entropy 0, while each individual subsystem is a maximally mixed state, with maximum von Neumann entropy formula_15. Thus the joint entropy of the combined system is less than that of subsystems. This is because for entangled states, definite states cannot be assigned to subsystems, resulting in positive entropy.
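This example can be checked directly. The sketch below (assuming NumPy) builds the Bell state, forms the reduced state of subsystem "A" by a partial trace, and evaluates both entropies in bits.

```python
# Sketch: joint entropy of a Bell state vs. the entropy of one subsystem.
import numpy as np

def S(rho):                                  # von Neumann entropy in bits
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho_AB = np.outer(psi, psi)                          # pure joint state

# partial trace over B: rho_A[i, j] = sum_k rho_AB[(i,k), (j,k)]
rho_A = rho_AB.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(S(rho_AB))   # 0.0: the joint system is pure
print(S(rho_A))    # 1.0: the subsystem is maximally mixed
```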
Notice that the above phenomenon cannot occur if a state is a separable pure state. In that case, the reduced states of the subsystems are also pure. Therefore, all entropies are zero.
Relations to other entropy measures.
The joint quantum entropy formula_16 can be used to define the conditional quantum entropy:
formula_17
and the quantum mutual information:
formula_18
These definitions parallel the use of the classical joint entropy to define the conditional entropy and mutual information. | [
{
"math_id": 0,
"text": "\\rho"
},
{
"math_id": 1,
"text": "\\sigma"
},
{
"math_id": 2,
"text": "S(\\rho,\\sigma)"
},
{
"math_id": 3,
"text": "H(\\rho,\\sigma)"
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "H(X)"
},
{
"math_id": 6,
"text": "H(X)=0"
},
{
"math_id": 7,
"text": "n"
},
{
"math_id": 8,
"text": "H(X) = \\log_2(n)"
},
{
"math_id": 9,
"text": "- \\operatorname{Tr} \\rho \\log \\rho."
},
{
"math_id": 10,
"text": "S(\\rho)"
},
{
"math_id": 11,
"text": "H(\\rho)"
},
{
"math_id": 12,
"text": "\\rho^{AB}"
},
{
"math_id": 13,
"text": "S(\\rho^A,\\rho^B) = S(\\rho^{AB}) = -\\operatorname{Tr}(\\rho^{AB}\\log(\\rho^{AB}))."
},
{
"math_id": 14,
"text": "\\left| \\Psi \\right\\rangle = \\frac{1}{\\sqrt{2}}\\left(|00\\rangle + |11\\rangle\\right),"
},
{
"math_id": 15,
"text": "\\log 2 = 1"
},
{
"math_id": 16,
"text": "S(\\rho^{AB})"
},
{
"math_id": 17,
"text": "S(\\rho^A|\\rho^B) \\ \\stackrel{\\mathrm{def}}{=}\\ S(\\rho^A,\\rho^B) - S(\\rho^B)"
},
{
"math_id": 18,
"text": "I(\\rho^A:\\rho^B) \\ \\stackrel{\\mathrm{def}}{=}\\ S(\\rho^A) + S(\\rho^B) - S(\\rho^A,\\rho^B)"
}
] | https://en.wikipedia.org/wiki?curid=978611 |
9786283 | Round-trip gain | Round-trip gain refers to the laser physics, and laser cavities (or laser resonators). It is gain, integrated along a ray, which makes a round-trip in the cavity.
In continuous-wave operation, the round-trip gain exactly compensates both the output coupling of the cavity and its background loss.
Round-trip gain in geometric optics.
Generally, the round-trip gain may depend on the frequency, on the position and tilt of the ray, and even on the polarization of light. Usually, we may assume that at some moment of time, at a reasonable frequency of operation, the gain formula_0 is a function of the Cartesian coordinates formula_1, formula_2, and formula_3. Then, assuming that geometrical optics is applicable, the round-trip gain formula_4 can be expressed as follows:
formula_5,
where formula_6 is the path length along the ray, parametrized by the functions formula_7, formula_8, formula_9; the integration is performed along the whole ray, which is supposed to form a closed loop.
In simple models, a flat-top distribution of pump is assumed and the gain formula_10 is taken to be constant. In the case of the simplest cavity, the round-trip gain is formula_11, where formula_12 is the length of the cavity; the laser light is supposed to travel forward and back, which leads to the factor 2 in the estimate.
In the steady-state continuous-wave operation of a laser, the round-trip gain is determined by the reflectivity of the mirrors (in the case of a stable cavity) or by the magnification coefficient in the case of an unstable resonator.
Coupling parameter.
The coupling parameter formula_13 of a laser resonator determines what part of the energy of the laser field in the cavity goes out at each round trip. This output can be determined by the transmissivity of the output coupler, or by the magnification coefficient in the case of an unstable cavity.
Round-trip loss (background loss).
The background loss, or round-trip loss, formula_14 determines what part of the energy of the laser field becomes unusable at each round trip; it can be absorbed or scattered.
During self-pulsation, the gain is late to respond to the variation of the number of photons in the cavity. Within the simple model, the round-trip loss and the output coupling determine the damping parameters of the equivalent Toda oscillator.
At steady-state operation, the round-trip gain formula_4 exactly compensates both the output coupling and the losses:
formula_15.
Assuming that the gain is small (formula_16), this relation can be written as follows:
formula_17
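A numerical sketch of this balance, with illustrative values for the gain coefficient, cavity length and background loss; since formula_17 uses the small-gain approximation, the exact balance formula_15 is satisfied only approximately.

```python
# Illustrative check of the steady-state balance g = beta + theta.
import math

G, h = 0.5, 0.02      # gain coefficient (1/m) and cavity length (m), assumed
beta = 0.005          # round-trip background loss, assumed
g = 2 * G * h         # round-trip gain for a forward-and-back pass

theta = g - beta      # output coupling consistent with steady state
print(theta)                              # 0.015
print(math.exp(g) * (1 - beta - theta))   # ~0.9998: close to the exact balance
```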
Such a relation is used in analytic estimates of the performance of lasers. In particular, the round-trip loss formula_14 may be one of the important parameters which limit the output power of a disk laser; at power scaling, the gain formula_10 should be decreased (in order to avoid the exponential growth of the amplified spontaneous emission), and the round-trip gain formula_4 should remain larger than the background loss formula_14; this requires increasing the thickness of the slab of the gain medium; at a certain thickness, overheating prevents efficient operation.
For the analysis of processes in the active medium, the sum formula_18 can also be called "loss". This notation leads to confusion as soon as one asks which part of the energy is absorbed and scattered, and which part of such a "loss" is actually the wanted and useful output of the laser.
{
"math_id": 0,
"text": "~G(x,y,z)~"
},
{
"math_id": 1,
"text": "~x~"
},
{
"math_id": 2,
"text": "~y~"
},
{
"math_id": 3,
"text": "~z~"
},
{
"math_id": 4,
"text": "~g~"
},
{
"math_id": 5,
"text": "~g=\\int G(x(a),y(a),z(a))~{\\rm d}a~"
},
{
"math_id": 6,
"text": "~a~"
},
{
"math_id": 7,
"text": "~x(a)~"
},
{
"math_id": 8,
"text": "~y(a)~"
},
{
"math_id": 9,
"text": "~z(a)~"
},
{
"math_id": 10,
"text": "~G~"
},
{
"math_id": 11,
"text": "~g=2Gh~"
},
{
"math_id": 12,
"text": "~h~"
},
{
"math_id": 13,
"text": "~\\theta~"
},
{
"math_id": 14,
"text": "~\\beta~"
},
{
"math_id": 15,
"text": "~\\exp(g)~(1-\\beta-\\theta)=1~"
},
{
"math_id": 16,
"text": "~g~\\ll 1~"
},
{
"math_id": 17,
"text": "~g=\\beta+\\theta~"
},
{
"math_id": 18,
"text": "~\\beta+\\theta~"
}
] | https://en.wikipedia.org/wiki?curid=9786283 |
9786372 | Conic optimization | Subfield of convex optimization
Conic optimization is a subfield of convex optimization that studies problems consisting of minimizing a convex function over the intersection of an affine subspace and a convex cone.
The class of conic optimization problems includes some of the most well known classes of convex optimization problems, namely linear and semidefinite programming.
Definition.
Given a real vector space "X", a convex, real-valued function
formula_0
defined on a convex cone formula_1, and an affine subspace formula_2 defined by a set of affine constraints formula_3, a conic optimization problem is to find the point formula_4 in formula_5 for which the number formula_6 is smallest.
Examples of formula_7 include the positive orthant formula_8, positive semidefinite matrices formula_9, and the second-order cone formula_10. Often formula_11 is a linear function, in which case the conic optimization problem reduces to a linear program, a semidefinite program, and a second order cone program, respectively.
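As a small illustration, a conic linear program over the positive orthant can be written down directly in a modeling package; the sketch below assumes the cvxpy package and uses toy problem data, with the constraint x >= 0 expressing membership in the cone.

```python
# Sketch of a conic LP over the positive orthant, assuming cvxpy is available:
# minimize c^T x subject to Ax = b, x in R^3_+.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 1.0, 1.0]])   # toy data
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])

x = cp.Variable(3)
prob = cp.Problem(cp.Minimize(c @ x), [A @ x == b, x >= 0])
prob.solve()
print(prob.value, x.value)        # optimum 1.0 at x = (1, 0, 0)
```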
Duality.
Certain special cases of conic optimization problems have notable closed-form expressions of their dual problems.
Conic LP.
The dual of the conic linear program
minimize formula_12
subject to formula_13
is
maximize formula_14
subject to formula_15
where formula_16 denotes the dual cone of formula_17.
Whilst weak duality holds in conic linear programming, strong duality does not necessarily hold.
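Duality can be checked numerically on a small conic LP; the sketch below (assuming SciPy, with the same toy data as above) solves the primal and the dual and compares the optimal values, which coincide here because strong duality happens to hold for this instance.

```python
# Numerical check of duality for a toy conic LP, assuming SciPy is available.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])

primal = linprog(c, A_eq=A, b_eq=b, bounds=(0, None))
# dual: maximize b^T y with c - A^T y in C* = R^3_+, i.e. A^T y <= c
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(None, None))

print(primal.fun, -dual.fun)   # 1.0 1.0: the optima coincide here
```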
Semidefinite Program.
The dual of a semidefinite program in inequality form
minimize formula_12
subject to formula_18
is given by
maximize formula_19
subject to formula_20
formula_21
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f:C \\to \\mathbb R"
},
{
"math_id": 1,
"text": "C \\subset X"
},
{
"math_id": 2,
"text": "\\mathcal{H}"
},
{
"math_id": 3,
"text": "h_i(x) = 0 \\ "
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "C \\cap \\mathcal{H} "
},
{
"math_id": 6,
"text": "f(x)"
},
{
"math_id": 7,
"text": " C "
},
{
"math_id": 8,
"text": "\\mathbb{R}_+^n = \\left\\{ x \\in \\mathbb{R}^n : \\, x \\geq \\mathbf{0}\\right\\} "
},
{
"math_id": 9,
"text": "\\mathbb{S}^n_{+}"
},
{
"math_id": 10,
"text": "\\left \\{ (x,t) \\in \\mathbb{R}^{n}\\times \\mathbb{R} : \\lVert x \\rVert \\leq t \\right \\} "
},
{
"math_id": 11,
"text": "f \\ "
},
{
"math_id": 12,
"text": "c^T x \\ "
},
{
"math_id": 13,
"text": "Ax = b, x \\in C \\ "
},
{
"math_id": 14,
"text": "b^T y \\ "
},
{
"math_id": 15,
"text": "A^T y + s= c, s \\in C^* \\ "
},
{
"math_id": 16,
"text": "C^*"
},
{
"math_id": 17,
"text": "C \\ "
},
{
"math_id": 18,
"text": "x_1 F_1 + \\cdots + x_n F_n + G \\leq 0"
},
{
"math_id": 19,
"text": "\\mathrm{tr}\\ (GZ)\\ "
},
{
"math_id": 20,
"text": "\\mathrm{tr}\\ (F_i Z) +c_i =0,\\quad i=1,\\dots,n"
},
{
"math_id": 21,
"text": "Z \\geq0"
}
] | https://en.wikipedia.org/wiki?curid=9786372 |
978650 | Triple product | Ternary operation on vectors
In geometry and algebra, the triple product is a product of three 3-dimensional vectors, usually Euclidean vectors. The name "triple product" is used for two different products, the scalar-valued scalar triple product and, less often, the vector-valued vector triple product.
Scalar triple product.
The scalar triple product (also called the mixed product, box product, or triple scalar product) is defined as the dot product of one of the vectors with the cross product of the other two.
Geometric interpretation.
Geometrically, the scalar triple product
formula_0
is the (signed) volume of the parallelepiped defined by the three vectors given.
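The agreement between the dot-cross expression and the determinant of the component matrix, and hence the signed volume, is easy to confirm numerically (assuming NumPy):

```python
# Numerical check: a . (b x c) equals det of the component matrix.
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])
c = np.array([1.0, 1.0, 1.0])

print(np.dot(a, np.cross(b, c)))            # 1.0
print(np.linalg.det(np.array([a, b, c])))   # 1.0: the same signed volume
```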
Scalar or pseudoscalar.
Although the scalar triple product gives the volume of the parallelepiped, it is the signed volume, the sign depending on the orientation of the frame or the parity of the permutation of the vectors. This means the product is negated if the orientation is reversed, for example by a parity transformation, and so is more properly described as a pseudoscalar if the orientation can change.
This also relates to the handedness of the cross product; the cross product transforms as a pseudovector under parity transformations and so is properly described as a pseudovector. The dot product of two vectors is a scalar but the dot product of a pseudovector and a vector is a pseudoscalar, so the scalar triple product (of vectors) must be pseudoscalar-valued.
If T is a proper rotation then
formula_9
but if T is an improper rotation then
formula_10
Scalar or scalar density.
Strictly speaking, a scalar does not change at all under a coordinate transformation. (For example, the factor of 2 used for doubling a vector does not change if the vector is in spherical vs. rectangular coordinates.) However, if each vector is transformed by a matrix then the triple product ends up being multiplied by the determinant of the transformation matrix, which could be quite arbitrary for a non-rotation. That is, the triple product is more properly described as a scalar density.
As an exterior product.
In exterior algebra and geometric algebra the exterior product of two vectors is a bivector, while the exterior product of three vectors is a trivector. A bivector is an oriented plane element and a trivector is an oriented volume element, in the same way that a vector is an oriented line element.
Given vectors a, b and c, the product
formula_11
is a trivector with magnitude equal to the scalar triple product, i.e.
formula_12,
and is the Hodge dual of the scalar triple product. As the exterior product is associative, brackets are not needed as it does not matter which of a ∧ b or b ∧ c is calculated first, though the order of the vectors in the product does matter. Geometrically the trivector a ∧ b ∧ c corresponds to the parallelepiped spanned by a, b, and c, with bivectors a ∧ b, b ∧ c and a ∧ c matching the parallelogram faces of the parallelepiped.
As a trilinear function.
The triple product is identical to the volume form of the Euclidean 3-space applied to the vectors via interior product. It also can be expressed as a contraction of vectors with a rank-3 tensor equivalent to the form (or a pseudotensor equivalent to the volume pseudoform); see below.
Vector triple product.
The vector triple product is defined as the cross product of one vector with the cross product of the other two. The following relationship holds:
formula_13.
This is known as triple product expansion, or Lagrange's formula, although the latter name is also used for several other formulas. Its right hand side can be remembered by using the mnemonic "ACB − ABC", provided one keeps in mind which vectors are dotted together. A proof is provided below. Some textbooks write the identity as formula_14 such that a more familiar mnemonic "BAC − CAB" is obtained, as in "back of the cab".
Since the cross product is anticommutative, this formula may also be written (up to permutation of the letters) as:
formula_15
From Lagrange's formula it follows that the vector triple product satisfies:
formula_16
which is the Jacobi identity for the cross product. Another useful formula follows:
formula_17
These formulas are very useful in simplifying vector calculations in physics. A related identity regarding gradients and useful in vector calculus is Lagrange's formula of vector cross-product identity:
formula_18
This can also be regarded as a special case of the more general Laplace–de Rham operator formula_19.
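Before the component proof below, the expansion and the Jacobi identity can be spot-checked numerically on random vectors, assuming NumPy:

```python
# Spot-check of the triple product expansion and the Jacobi identity.
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))

lhs = np.cross(a, np.cross(b, c))
rhs = b * np.dot(a, c) - c * np.dot(a, b)   # "BAC - CAB"
print(np.allclose(lhs, rhs))                # True

jacobi = (np.cross(a, np.cross(b, c)) + np.cross(b, np.cross(c, a))
          + np.cross(c, np.cross(a, b)))
print(np.allclose(jacobi, 0))               # True
```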
Proof.
The formula_20 component of formula_21 is given by:
formula_22
Similarly, the formula_23 and formula_24 components of formula_25 are given by:
formula_26
By combining these three components we obtain:
formula_27
Using geometric algebra.
If geometric algebra is used the cross product b × c of vectors is expressed as their exterior product b∧c, a bivector. The second cross product cannot be expressed as an exterior product, otherwise the scalar triple product would result. Instead a left contraction can be used, so the formula becomes
formula_28
The proof follows from the properties of the contraction. The result is the same vector as calculated using a × (b × c).
Interpretations.
Tensor calculus.
In tensor notation, the triple product is expressed using the Levi-Civita symbol:
formula_29
and
formula_30
referring to the formula_31-th component of the resulting vector. This can be simplified by performing a contraction on the Levi-Civita symbols, formula_32
where formula_33 is the Kronecker delta function (formula_34 when formula_35 and formula_36 when formula_37) and formula_38 is the generalized Kronecker delta function. We can reason out this identity by recognizing that the index formula_39 will be summed out leaving only formula_31 and formula_40. In the first term, we fix formula_41 and thus formula_42. Likewise, in the second term, we fix formula_43 and thus formula_44.
Returning to the triple cross product,
formula_45
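The contracted identity can be reproduced with an explicit Levi-Civita tensor and Einstein summation, assuming NumPy:

```python
# The index computation with an explicit Levi-Civita tensor and einsum.
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 3))

lhs = np.einsum('ijk,klm,j,l,m->i', eps, eps, a, b, c)   # eps_ijk a_j eps_klm b_l c_m
rhs = b * np.dot(a, c) - c * np.dot(a, b)
print(np.allclose(lhs, rhs))    # True
```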
Vector calculus.
Consider the flux integral of the vector field formula_46 across the parametrically-defined surface formula_47: formula_48. The unit normal vector formula_49 to the surface is given by formula_50, so the integrand formula_51 is a scalar triple product.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathbf{a}\\cdot(\\mathbf{b}\\times \\mathbf{c}) "
},
{
"math_id": 1,
"text": "\n \\mathbf{a}\\cdot(\\mathbf{b}\\times \\mathbf{c})=\n \\mathbf{b}\\cdot(\\mathbf{c}\\times \\mathbf{a})=\n \\mathbf{c}\\cdot(\\mathbf{a}\\times \\mathbf{b})\n"
},
{
"math_id": 2,
"text": "\n \\mathbf{a}\\cdot (\\mathbf{b}\\times \\mathbf{c}) =\n (\\mathbf{a}\\times \\mathbf{b})\\cdot \\mathbf{c}\n"
},
{
"math_id": 3,
"text": "\\begin{align}\n \\mathbf{a}\\cdot(\\mathbf{b}\\times \\mathbf{c}) \n &= -\\mathbf{a}\\cdot(\\mathbf{c}\\times \\mathbf{b}) \\\\\n &= -\\mathbf{b}\\cdot(\\mathbf{a}\\times \\mathbf{c}) \\\\\n &= -\\mathbf{c}\\cdot(\\mathbf{b}\\times \\mathbf{a})\n\\end{align}"
},
{
"math_id": 4,
"text": "\\mathbf{a}\\cdot(\\mathbf{b}\\times \\mathbf{c})\n= \\det \\begin{bmatrix}\n a_1 & a_2 & a_3 \\\\\n b_1 & b_2 & b_3 \\\\\n c_1 & c_2 & c_3 \\\\\n\\end{bmatrix}\n= \\det \\begin{bmatrix} a_1 & b_1 & c_1 \\\\ a_2 & b_2 & c_2 \\\\ a_3 & b_3 & c_3 \\end{bmatrix}\n= \\det \\begin{bmatrix} \\mathbf{a} & \\mathbf{b} & \\mathbf{c} \\end{bmatrix} ."
},
{
"math_id": 5,
"text": "\n \\mathbf{a} \\cdot (\\mathbf{a} \\times \\mathbf{b}) =\n \\mathbf{a} \\cdot (\\mathbf{b} \\times \\mathbf{a}) =\n \\mathbf{b} \\cdot (\\mathbf{a} \\times \\mathbf{a}) = 0\n"
},
{
"math_id": 6,
"text": "\n (\\mathbf{a}\\cdot(\\mathbf{b}\\times \\mathbf{c}))\\, \\mathbf{a} =\n (\\mathbf{a}\\times \\mathbf{b})\\times (\\mathbf{a}\\times \\mathbf{c})\n"
},
{
"math_id": 7,
"text": "((\\mathbf{a}\\times \\mathbf{b}) \\cdot \\mathbf{c})\\;((\\mathbf{d} \\times \\mathbf{e}) \\cdot \\mathbf{f})\n = \\det \\begin{bmatrix}\n \\mathbf{a}\\cdot \\mathbf{d} & \\mathbf{a}\\cdot \\mathbf{e} & \\mathbf{a}\\cdot \\mathbf{f} \\\\\n \\mathbf{b}\\cdot \\mathbf{d} & \\mathbf{b}\\cdot \\mathbf{e} & \\mathbf{b}\\cdot \\mathbf{f} \\\\\n \\mathbf{c}\\cdot \\mathbf{d} & \\mathbf{c}\\cdot \\mathbf{e} & \\mathbf{c}\\cdot \\mathbf{f}\n \\end{bmatrix}\n"
},
{
"math_id": 8,
"text": "\n\\frac{\\mathbf{a}\\cdot(\\mathbf{b}\\times \\mathbf{c})}{\\|{\\mathbf{a}}\\| \\|{\\mathbf{b}}\\| \\|{\\mathbf{c}}\\|} = \\operatorname{psin}(\\mathbf{a},\\mathbf{b},\\mathbf{c}) \n"
},
{
"math_id": 9,
"text": "\n\\mathbf{Ta} \\cdot (\\mathbf{Tb} \\times \\mathbf{Tc}) =\n\\mathbf{a} \\cdot (\\mathbf{b} \\times \\mathbf{c}),\n"
},
{
"math_id": 10,
"text": "\n\\mathbf{Ta} \\cdot (\\mathbf{Tb} \\times \\mathbf{Tc}) =\n-\\mathbf{a} \\cdot (\\mathbf{b} \\times \\mathbf{c}).\n"
},
{
"math_id": 11,
"text": "\\mathbf{a} \\wedge \\mathbf{b} \\wedge \\mathbf{c}"
},
{
"math_id": 12,
"text": "|\\mathbf{a} \\wedge \\mathbf{b} \\wedge \\mathbf{c}| = |\\mathbf{a}\\cdot(\\mathbf{b} \\times\\mathbf{c})|"
},
{
"math_id": 13,
"text": "\\mathbf{a}\\times (\\mathbf{b}\\times \\mathbf{c}) = (\\mathbf{a}\\cdot\\mathbf{c})\\mathbf{b} -(\\mathbf{a}\\cdot\\mathbf{b}) \\mathbf{c}"
},
{
"math_id": 14,
"text": "\\mathbf{a}\\times (\\mathbf{b}\\times \\mathbf{c}) = \\mathbf{b}(\\mathbf{a}\\cdot\\mathbf{c})-\\mathbf{c}(\\mathbf{a}\\cdot\\mathbf{b})"
},
{
"math_id": 15,
"text": "(\\mathbf{a}\\times \\mathbf{b})\\times \\mathbf{c} = -\\mathbf{c}\\times(\\mathbf{a}\\times \\mathbf{b}) = -(\\mathbf{c}\\cdot\\mathbf{b})\\mathbf{a} + (\\mathbf{c}\\cdot\\mathbf{a})\\mathbf{b}"
},
{
"math_id": 16,
"text": "\\mathbf{a}\\times (\\mathbf{b}\\times \\mathbf{c}) + \\mathbf{b}\\times (\\mathbf{c}\\times \\mathbf{a}) + \\mathbf{c} \\times (\\mathbf{a}\\times \\mathbf{b}) = \\mathbf{0}"
},
{
"math_id": 17,
"text": "(\\mathbf{a}\\times \\mathbf{b}) \\times \\mathbf{c} = \\mathbf{a}\\times (\\mathbf{b}\\times \\mathbf{c}) - \\mathbf{b} \\times (\\mathbf{a} \\times \\mathbf{c})"
},
{
"math_id": 18,
"text": "\\boldsymbol{\\nabla} \\times (\\boldsymbol{\\nabla} \\times \\mathbf{A}) = \\boldsymbol{\\nabla} (\\boldsymbol{\\nabla} \\cdot \\mathbf{A}) - (\\boldsymbol{\\nabla} \\cdot \\boldsymbol{\\nabla}) \\mathbf{A}"
},
{
"math_id": 19,
"text": "\\Delta = d \\delta + \\delta d"
},
{
"math_id": 20,
"text": "x"
},
{
"math_id": 21,
"text": "\\mathbf{u} \\times (\\mathbf{v}\\times \\mathbf{w})"
},
{
"math_id": 22,
"text": "\\begin{align}\n(\\mathbf{u} \\times (\\mathbf{v} \\times \\mathbf{w}))_x\n &= \\mathbf{u}_y(\\mathbf{v}_x\\mathbf{w}_y - \\mathbf{v}_y\\mathbf{w}_x) - \\mathbf{u}_z(\\mathbf{v}_z\\mathbf{w}_x - \\mathbf{v}_x\\mathbf{w}_z) \\\\\n &= \\mathbf{v}_x(\\mathbf{u}_y\\mathbf{w}_y + \\mathbf{u}_z\\mathbf{w}_z) - \\mathbf{w}_x(\\mathbf{u}_y\\mathbf{v}_y + \\mathbf{u}_z\\mathbf{v}_z) \\\\\n &= \\mathbf{v}_x(\\mathbf{u}_y\\mathbf{w}_y + \\mathbf{u}_z\\mathbf{w}_z) - \\mathbf{w}_x(\\mathbf{u}_y\\mathbf{v}_y + \\mathbf{u}_z\\mathbf{v}_z) + (\\mathbf{u}_x\\mathbf{v}_x\\mathbf{w}_x - \\mathbf{u}_x\\mathbf{v}_x\\mathbf{w}_x) \\\\\n &= \\mathbf{v}_x(\\mathbf{u}_x\\mathbf{w}_x + \\mathbf{u}_y\\mathbf{w}_y + \\mathbf{u}_z\\mathbf{w}_z) - \\mathbf{w}_x(\\mathbf{u}_x\\mathbf{v}_x + \\mathbf{u}_y\\mathbf{v}_y + \\mathbf{u}_z\\mathbf{v}_z) \\\\\n &= (\\mathbf{u}\\cdot\\mathbf{w})\\mathbf{v}_x - (\\mathbf{u}\\cdot\\mathbf{v})\\mathbf{w}_x\n\\end{align}"
},
{
"math_id": 23,
"text": "y"
},
{
"math_id": 24,
"text": "z"
},
{
"math_id": 25,
"text": "\\mathbf{u}\\times (\\mathbf{v} \\times \\mathbf{w})"
},
{
"math_id": 26,
"text": "\\begin{align}\n (\\mathbf{u} \\times (\\mathbf{v} \\times \\mathbf{w}))_y &= (\\mathbf{u}\\cdot\\mathbf{w})\\mathbf{v}_y - (\\mathbf{u}\\cdot\\mathbf{v})\\mathbf{w}_y \\\\\n (\\mathbf{u} \\times (\\mathbf{v} \\times \\mathbf{w}))_z &= (\\mathbf{u}\\cdot\\mathbf{w})\\mathbf{v}_z - (\\mathbf{u}\\cdot\\mathbf{v})\\mathbf{w}_z\n\\end{align}"
},
{
"math_id": 27,
"text": "\\mathbf{u}\\times (\\mathbf{v}\\times \\mathbf{w}) = (\\mathbf{u}\\cdot\\mathbf{w})\\ \\mathbf{v} - (\\mathbf{u}\\cdot\\mathbf{v})\\ \\mathbf{w}"
},
{
"math_id": 28,
"text": "\\begin{align}\n -\\mathbf{a} \\;\\big\\lrcorner\\; (\\mathbf{b} \\wedge \\mathbf{c}) &= \\mathbf{b} \\wedge (\\mathbf{a} \\;\\big\\lrcorner\\; \\mathbf{c}) - (\\mathbf{a} \\;\\big\\lrcorner\\; \\mathbf{b}) \\wedge \\mathbf{c} \\\\\n &= (\\mathbf{a} \\cdot \\mathbf{c}) \\mathbf{b} - (\\mathbf{a} \\cdot \\mathbf{b}) \\mathbf{c}\n\\end{align}"
},
{
"math_id": 29,
"text": "\\mathbf{a} \\cdot [\\mathbf{b}\\times \\mathbf{c}] = \\varepsilon_{ijk} a^i b^j c^k"
},
{
"math_id": 30,
"text": "(\\mathbf{a} \\times [\\mathbf{b}\\times \\mathbf{c}])_i = \\varepsilon_{ijk} a^j \\varepsilon^{k\\ell m} b_\\ell c_m = \\varepsilon_{ijk}\\varepsilon^{k\\ell m} a^j b_\\ell c_m,"
},
{
"math_id": 31,
"text": "i"
},
{
"math_id": 32,
"text": "\\varepsilon_{ijk} \\varepsilon^{k\\ell m} = \\delta^{\\ell m}_{ij} = \\delta^{\\ell}_{i} \\delta^{m}_{j} - \\delta^{m}_{i} \\delta^{\\ell}_{j}\\,,"
},
{
"math_id": 33,
"text": "\\delta^{i}_{j}"
},
{
"math_id": 34,
"text": "\\delta^{i}_{j} = 0"
},
{
"math_id": 35,
"text": "i \\neq j"
},
{
"math_id": 36,
"text": "\\delta^{i}_{j} = 1"
},
{
"math_id": 37,
"text": "i = j"
},
{
"math_id": 38,
"text": "\\delta^{\\ell m}_{ij}"
},
{
"math_id": 39,
"text": "k"
},
{
"math_id": 40,
"text": "j"
},
{
"math_id": 41,
"text": "i=l"
},
{
"math_id": 42,
"text": "j=m"
},
{
"math_id": 43,
"text": "i=m"
},
{
"math_id": 44,
"text": "l=j"
},
{
"math_id": 45,
"text": "(\\mathbf{a} \\times [\\mathbf{b}\\times \\mathbf{c}])_i = (\\delta^{\\ell}_{i}\\delta^{m}_{j} - \\delta^{m}_{i}\\delta^{\\ell}_{j}) a^j b_\\ell c_m = a^j b_i c_j - a^j b_j c_i = b_i(\\mathbf{a}\\cdot\\mathbf{c}) - c_i(\\mathbf{a}\\cdot\\mathbf{b})\\,."
},
{
"math_id": 46,
"text": "\\mathbf{F}"
},
{
"math_id": 47,
"text": "S = \\mathbf{r}(u,v)"
},
{
"math_id": 48,
"text": "\\iint_S \\mathbf{F} \\cdot \\hat\\mathbf{n} \\, dS"
},
{
"math_id": 49,
"text": "\\hat\\mathbf{n}"
},
{
"math_id": 50,
"text": "\\frac{\\mathbf{r}_u \\times \\mathbf{r}_v }{|\\mathbf{r}_u \\times \\mathbf{r}_v |}"
},
{
"math_id": 51,
"text": "\\mathbf{F}\\cdot \\frac{(\\mathbf{r}_u \\times \\mathbf{r}_v )}{|\\mathbf{r}_u \\times \\mathbf{r}_v |}"
}
] | https://en.wikipedia.org/wiki?curid=978650 |
9787563 | Moulton plane | In incidence geometry, the Moulton plane is an example of an affine plane in which Desargues's theorem does not hold. It is named after the American astronomer Forest Ray Moulton. The points of the Moulton plane are simply the points of the real plane R2, and the lines are the ordinary lines as well, with the one exception that for lines with a negative slope, the slope doubles when they pass the "y"-axis.
Formal definition.
The Moulton plane is an incidence structure formula_0, where formula_1 denotes the set of points, formula_2 the set of lines and formula_3 the incidence relation "lies on":
formula_4
formula_5
formula_6 is just a formal symbol for an element formula_7. It is used to describe vertical lines, which can be thought of as lines with an infinitely large slope.
The incidence relation is defined as follows:
For formula_8 and formula_9 we have
formula_10
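As a concrete illustration, the incidence relation can be written as a small Python predicate; the encoding of vertical lines by the string "inf" and the function name lies_on are choices made for this sketch, not part of the formal definition.

```python
def lies_on(p, g):
    """Incidence relation of the Moulton plane.

    p = (x, y) is a point; g = (m, b) is a line with slope m
    (the string "inf" encodes a vertical line) and intercept b
    (for vertical lines, b is the x-coordinate).
    """
    x, y = p
    m, b = g
    if m == "inf":                 # vertical line
        return x == b
    if m <= 0 and x <= 0:          # negative slope, left of the y-axis
        return y == 0.5 * m * x + b
    return y == m * x + b          # otherwise an ordinary Euclidean line

# The line (m, b) = (-1, 0) is "bent" at the y-axis:
assert lies_on((2, -2), (-1, 0))      # right half: y = -x
assert lies_on((-2, 1), (-1, 0))      # left half:  y = -x/2
assert not lies_on((-2, 2), (-1, 0))  # the Euclidean point is not incident
```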
Application.
The Moulton plane is an affine plane in which Desargues' theorem does not hold. The associated projective plane is consequently non-Desarguesian as well. This means that there are projective planes not isomorphic to formula_11 for any (skew) field "F". Here formula_11 is the projective plane formula_12 determined by a 3-dimensional vector space over the (skew) field "F".
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathfrak M=\\langle P, G,\\textrm I\\rangle"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "G"
},
{
"math_id": 3,
"text": "\\textrm I"
},
{
"math_id": 4,
"text": " P:=\\mathbb R^2 \\,"
},
{
"math_id": 5,
"text": " G:=(\\mathbb R \\cup \\{\\infty\\}) \\times \\mathbb R,"
},
{
"math_id": 6,
"text": "\\infty"
},
{
"math_id": 7,
"text": "\\not\\in\\mathbb R"
},
{
"math_id": 8,
"text": "p = (x, y) \\in P"
},
{
"math_id": 9,
"text": "g = (m, b) \\in G"
},
{
"math_id": 10,
"text": "\np\\,\\textrm I\\,g\\iff\\begin{cases}\nx=b&\\text{if }m=\\infty\\\\\ny=\\frac{1}{2}mx+b&\\text{if }m\\leq 0, x\\leq 0\\\\\ny=mx+b&\\text{if }m\\geq 0 \\text{ or } x\\geq 0.\n\\end{cases}\n"
},
{
"math_id": 11,
"text": " PG(2,F) "
},
{
"math_id": 12,
"text": " P(F^3) "
}
] | https://en.wikipedia.org/wiki?curid=9787563 |
9789196 | Focal surface | For a surface in three dimensions, the focal surface, surface of centers or evolute is formed by taking the centers of the curvature spheres, which are the tangential spheres whose radii are the reciprocals of one of the principal curvatures at the point of tangency. Equivalently, it is the surface formed by the centers of the circles which osculate the curvature lines.
As the principal curvatures are the eigenvalues of the second fundamental form, there are two at each point, and these give rise to two points of the focal surface along each normal to the surface. Away from umbilical points, these two points of the focal surface are distinct; at umbilical points the two sheets come together. When the surface has a ridge, the focal surface has a cuspidal edge; three such edges pass through an elliptical umbilic and only one through a hyperbolic umbilic. At points where the Gaussian curvature is zero, one sheet of the focal surface will have a point at infinity corresponding to the zero principal curvature.
If formula_0 is a point of the given surface, formula_1 the unit normal and formula_2 the principal curvatures at formula_0, then
formula_3 and formula_4
are the corresponding two points of the focal surface.
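A minimal numerical sketch of these two formulas in Python with NumPy (the function name is illustrative, and both principal curvatures are assumed nonzero):

```python
import numpy as np

def focal_points(p, n, k1, k2):
    """Two focal-surface points for a surface point p with unit
    normal n and nonzero principal curvatures k1, k2."""
    p, n = np.asarray(p, float), np.asarray(n, float)
    return p + n / k1, p + n / k2

# Sphere of radius R: both principal curvatures equal 1/R (with the
# inward normal), so both sheets degenerate to the centre.
R = 2.0
p = np.array([R, 0.0, 0.0])      # point on a sphere centred at the origin
n = np.array([-1.0, 0.0, 0.0])   # inward unit normal at p
b1, b2 = focal_points(p, n, 1.0 / R, 1.0 / R)
assert np.allclose(b1, 0.0) and np.allclose(b2, 0.0)
```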
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\vec p"
},
{
"math_id": 1,
"text": "\\vec n"
},
{
"math_id": 2,
"text": "k_1,k_2"
},
{
"math_id": 3,
"text": "\\vec b_1(\\vec p)=\\vec p+\\frac{\\vec n}{k_1} \\quad "
},
{
"math_id": 4,
"text": "\\quad \\vec b_2(\\vec p)=\\vec p+\\frac{\\vec n}{k_2}\\ "
}
] | https://en.wikipedia.org/wiki?curid=9789196 |
978951 | Record linkage | Task of finding records in a data set that refer to same entity across different sources
Record linkage (also known as data matching, data linkage, entity resolution, and many other terms) is the task of finding records in a data set that refer to the same entity across different data sources (e.g., data files, books, websites, and databases). Record linkage is necessary when joining different data sets based on entities that may or may not share a common identifier (e.g., database key, URI, National identification number), which may be due to differences in record shape, storage location, or curator style or preference. A data set that has undergone RL-oriented reconciliation may be referred to as being "cross-linked".
Naming conventions.
"Record linkage" is the term used by statisticians, epidemiologists, and historians, among others, to describe the process of joining records from one data source with another that describe the same entity. However, many other terms are used for this process. Unfortunately, this profusion of terminology has led to few cross-references between these research communities.
Computer scientists often refer to it as "data matching" or as the "object identity problem". Commercial mail and database applications refer to it as "merge/purge processing" or "list washing". Other names used to describe the same concept include: "coreference/entity/identity/name/record resolution", "entity disambiguation/linking", "fuzzy matching", "duplicate detection", "deduplication", "record matching", "(reference) reconciliation", "object identification", "data/information integration" and "conflation".
While they share similar names, record linkage and Linked Data are two separate approaches to processing and structuring data. Although both involve identifying matching entities across different data sets, record linkage standardly equates "entities" with human individuals; by contrast, Linked Data is based on the possibility of interlinking any web resource across data sets, using a correspondingly broader concept of identifier, namely a URI.
History.
The initial idea of record linkage goes back to Halbert L. Dunn in his 1946 article titled "Record Linkage" published in the "American Journal of Public Health".
Howard Borden Newcombe then laid the probabilistic foundations of modern record linkage theory in a 1959 article in "Science". These were formalized in 1969 by Ivan Fellegi and Alan Sunter, in their pioneering work "A Theory For Record Linkage", where they proved that the probabilistic decision rule they described was optimal when the comparison attributes were conditionally independent. In their work they recognized the growing interest in applying advances in computing and automation to large collections of administrative data, and the "Fellegi-Sunter theory" remains the mathematical foundation for many record linkage applications.
Since the late 1990s, various machine learning techniques have been developed that can, under favorable conditions, be used to estimate the conditional probabilities required by the Fellegi-Sunter theory. Several researchers have reported that the conditional independence assumption of the Fellegi-Sunter algorithm is often violated in practice; however, published efforts to explicitly model the conditional dependencies among the comparison attributes have not resulted in an improvement in record linkage quality. On the other hand, machine learning or neural network algorithms that do not rely on these assumptions often provide far higher accuracy, when sufficient labeled training data is available.
Record linkage can be done entirely without the aid of a computer, but the primary reasons computers are often used to complete record linkages are to reduce or eliminate manual review and to make results more easily reproducible. Computer matching has the advantages of allowing central supervision of processing, better quality control, speed, consistency, and better reproducibility of results.
Methods.
Data preprocessing.
Record linkage is highly sensitive to the quality of the data being linked, so all data sets under consideration (particularly their key identifier fields) should ideally undergo a data quality assessment prior to record linkage. Many key identifiers for the same entity can be presented quite differently between (and even within) data sets, which can greatly complicate record linkage unless understood ahead of time. For example, key identifiers for a man named William J. Smith might appear in three different data sets as follows:
In this example, the different formatting styles lead to records that look different but in fact all refer to the same entity with the same logical identifier values. Most, if not all, record linkage strategies would result in more accurate linkage if these values were first "normalized" or "standardized" into a consistent format (e.g., all names are "Surname, Given name", and all dates are "YYYY/MM/DD"). Standardization can be accomplished through simple rule-based data transformations or more complex procedures such as lexicon-based tokenization and probabilistic hidden Markov models. Several of the packages listed in the "Software Implementations" section provide some of these features to simplify the process of data standardization.
Entity resolution.
Entity resolution is an operational intelligence process, typically powered by an entity resolution engine or middleware, whereby organizations can connect disparate data sources with a view to understanding possible entity matches and non-obvious relationships across multiple data silos. It analyzes all of the information relating to individuals and/or entities from multiple sources of data, and then applies likelihood and probability scoring to determine which identities are a match and what, if any, non-obvious relationships exist between those identities.
Entity resolution engines are typically used to uncover risk, fraud, and conflicts of interest, but are also useful tools for use within customer data integration (CDI) and master data management (MDM) requirements. Typical uses for entity resolution engines include terrorist screening, insurance fraud detection, USA Patriot Act compliance, organized retail crime ring detection and applicant screening.
For example: Across different data silos – employee records, vendor data, watch lists, etc. – an organization may have several variations of an entity named ABC, which may or may not be the same individual. These entries may, in fact, appear as ABC1, ABC2, or ABC3 within those data sources. By comparing similarities between underlying attributes such as address, date of birth, or social security number, the user can eliminate some possible matches and confirm others as very likely matches.
Entity resolution engines then apply rules, based on common sense logic, to identify hidden relationships across the data. In the example above, perhaps ABC1 and ABC2 are not the same individual, but rather two distinct people who share common attributes such as address or phone number.
Data matching.
While entity resolution solutions include data matching technology, many data matching offerings do not fit the definition of entity resolution. Here are four factors that distinguish entity resolution from data matching, according to John Talburt, director of the UALR Center for Advanced Research in Entity Resolution and Information Quality:
In contrast to data quality products, more powerful identity resolution engines also include a rules engine and workflow process, which apply business intelligence to the resolved identities and their relationships. These advanced technologies make automated decisions and impact business processes in real time, limiting the need for human intervention.
Deterministic record linkage.
The simplest kind of record linkage, called "deterministic" or "rules-based record linkage", generates links based on the number of individual identifiers that match among the available data sets. Two records are said to match via a deterministic record linkage procedure if all or some identifiers (above a certain threshold) are identical. Deterministic record linkage is a good option when the entities in the data sets are identified by a common identifier, or when there are several representative identifiers (e.g., name, date of birth, and sex when identifying a person) whose quality of data is relatively high.
As an example, consider two standardized data sets, Set A and Set B, that contain different bits of information about patients in a hospital system. The two data sets identify patients using a variety of identifiers: Social Security Number (SSN), name, date of birth (DOB), sex, and ZIP code (ZIP). The records in two data sets (identified by the "#" column) are shown below:
The most simple deterministic record linkage strategy would be to pick a single identifier that is assumed to be uniquely identifying, say SSN, and declare that records sharing the same value identify the same person while records not sharing the same value identify different people. In this example, deterministic linkage based on SSN would create entities based on A1 and A2; A3 and B1; and A4. While A1, A2, and B2 appear to represent the same entity, B2 would not be included into the match because it is missing a value for SSN.
Handling exceptions such as missing identifiers involves the creation of additional record linkage rules. One such rule in the case of missing SSN might be to compare name, date of birth, sex, and ZIP code with other records in hopes of finding a match. In the above example, this rule would still not match A1/A2 with B2 because the names are still slightly different: standardization put the names into the proper (Surname, Given name) format but could not discern "Bill" as a nickname for "William". Running names through a phonetic algorithm such as Soundex, NYSIIS, or metaphone, can help to resolve these types of problems. However, they may still stumble over surname changes as the result of marriage or divorce, but then B2 would be matched only with A1 since the ZIP code in A2 is different. Thus, another rule would need to be created to determine whether differences in particular identifiers are acceptable (such as ZIP code) and which are not (such as date of birth).
As this example demonstrates, even a small decrease in data quality or small increase in the complexity of the data can result in a very large increase in the number of rules necessary to link records properly. Eventually, these linkage rules will become too numerous and interrelated to build without the aid of specialized software tools. In addition, linkage rules are often specific to the nature of the data sets they are designed to link together. One study was able to link the Social Security Death Master File with two hospital registries from the Midwestern United States using SSN, NYSIIS-encoded first name, birth month, and sex, but these rules may not work as well with data sets from other geographic regions or with data collected on younger populations. Thus, continuous maintenance testing of these rules is necessary to ensure they continue to function as expected as new data enter the system and need to be linked. New data that exhibit different characteristics than was initially expected could require a complete rebuilding of the record linkage rule set, which could be a very time-consuming and expensive endeavor.
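To make such a rule cascade concrete, here is a minimal Python sketch of the two deterministic rules discussed above (the record values and the helper name are illustrative only, not a standard API):

```python
def deterministic_match(r1, r2):
    """Rule-based linkage: decide whether two records refer to the
    same person, trying rules in order of reliability."""
    # Rule 1: if both records carry an SSN, it is decisive.
    if r1.get("ssn") and r2.get("ssn"):
        return r1["ssn"] == r2["ssn"]
    # Rule 2 (fallback when an SSN is missing): require agreement
    # on name, date of birth, sex and ZIP code.
    return all(r1.get(k) == r2.get(k)
               for k in ("name", "dob", "sex", "zip"))

a1 = {"ssn": "123456789", "name": "smith, william",
      "dob": "1973/01/02", "sex": "m", "zip": "94701"}
b2 = {"ssn": None, "name": "smith, bill",
      "dob": "1973/01/02", "sex": "m", "zip": "94701"}
print(deterministic_match(a1, b2))  # False: "bill" != "william"
```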
Probabilistic record linkage.
"Probabilistic record linkage", sometimes called "fuzzy matching" (also "probabilistic merging" or "fuzzy merging" in the context of merging of databases), takes a different approach to the record linkage problem by taking into account a wider range of potential identifiers, computing weights for each identifier based on its estimated ability to correctly identify a match or a non-match, and using these weights to calculate the probability that two given records refer to the same entity. Record pairs with probabilities above a certain threshold are considered to be matches, while pairs with probabilities below another threshold are considered to be non-matches; pairs that fall between these two thresholds are considered to be "possible matches" and can be dealt with accordingly (e.g., human reviewed, linked, or not linked, depending on the requirements). Whereas deterministic record linkage requires a series of potentially complex rules to be programmed ahead of time, probabilistic record linkage methods can be "trained" to perform well with much less human intervention.
Many probabilistic record linkage algorithms assign match/non-match weights to identifiers by means of two probabilities called formula_0 and formula_1. The formula_0 probability is the probability that an identifier in two "non-matching" records will agree purely by chance. For example, the formula_0 probability for birth month (where there are twelve values that are approximately uniformly distributed) is formula_2; identifiers with values that are not uniformly distributed will have different formula_0 probabilities for different values (possibly including missing values). The formula_1 probability is the probability that an identifier in "matching" pairs will agree (or be sufficiently similar, such as strings with low Jaro-Winkler or Levenshtein distance). This value would be formula_3 in the case of perfect data, but given that this is rarely (if ever) true, it can instead be estimated. This estimation may be done based on prior knowledge of the data sets, by manually identifying a large number of matching and non-matching pairs to "train" the probabilistic record linkage algorithm, or by iteratively running the algorithm to obtain closer estimations of the formula_1 probability. If a value of formula_4 were to be estimated for the formula_1 probability, then the match/non-match weights for the birth month identifier would be a match weight of ln(0.95/0.083)/ln(2) ≈ 3.51 and a non-match weight of ln(0.05/0.917)/ln(2) ≈ -4.20, i.e. the base-2 logarithms of the agreement and disagreement frequency ratios.
The same calculations would be done for all other identifiers under consideration to find their match/non-match weights. Then, every identifier of one record would be compared with the corresponding identifier of another record to compute the total weight of the pair: the "match" weight is added to the running total whenever a pair of identifiers agree, while the "non-match" weight is added (i.e. the running total decreases) whenever the pair of identifiers disagrees. The resulting total weight is then compared to the aforementioned thresholds to determine whether the pair should be linked, non-linked, or set aside for special consideration (e.g. manual validation).
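A minimal Python sketch of this weight arithmetic, using the base-2 log-likelihood-ratio convention just described (the function names and the second identifier's probabilities are illustrative):

```python
from math import log2

def field_weights(m, u):
    """Agreement and disagreement weights for one identifier, given
    its m- and u-probabilities (Fellegi-Sunter model)."""
    return log2(m / u), log2((1 - m) / (1 - u))

def pair_score(fields):
    """Total weight of a record pair; fields maps each identifier
    to a tuple (m, u, agrees)."""
    total = 0.0
    for m, u, agrees in fields.values():
        w_match, w_nonmatch = field_weights(m, u)
        total += w_match if agrees else w_nonmatch
    return total

# Birth month: u = 1/12, m estimated at 0.95.
print(field_weights(0.95, 1 / 12))          # ~ (3.51, -4.20)
print(pair_score({
    "birth_month": (0.95, 1 / 12, True),
    "surname":     (0.90, 0.01, False),     # illustrative probabilities
}))  # compare this total against the link / non-link thresholds
```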
Blocking.
Determining where to set the match/non-match thresholds is a balancing act between obtaining an acceptable sensitivity (or "recall", the proportion of truly matching records that are linked by the algorithm) and positive predictive value (or "precision", the proportion of records linked by the algorithm that truly do match). Various manual and automated methods are available to predict the best thresholds, and some record linkage software packages have built-in tools to help the user find the most acceptable values. Because this can be a very computationally demanding task, particularly for large data sets, a technique known as "blocking" is often used to improve efficiency. Blocking attempts to restrict comparisons to just those records for which one or more particularly discriminating identifiers agree, which has the effect of increasing the positive predictive value (precision) at the expense of sensitivity (recall). For example, blocking based on a phonetically coded surname and ZIP code would reduce the total number of comparisons required and would improve the chances that linked records would be correct (since two identifiers already agree), but would potentially miss records referring to the same person whose surname or ZIP code was different (due to marriage or relocation, for instance). Blocking based on birth month, a more stable identifier that would be expected to change only in the case of data error, would provide a more modest gain in positive predictive value and loss in sensitivity, but would create only twelve distinct groups which, for extremely large data sets, may not provide much net improvement in computation speed. Thus, robust record linkage systems often use multiple blocking passes to group data in various ways in order to come up with groups of records that should be compared to each other.
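A minimal sketch of a single blocking pass in Python; the crude first-letter stand-in for a phonetic code is an assumption of this example (a real system would use Soundex, NYSIIS or metaphone):

```python
from collections import defaultdict
from itertools import combinations

def candidate_pairs(records, key):
    """Group records by a blocking key and yield only the pairs
    that share a key, instead of all possible pairs."""
    blocks = defaultdict(list)
    for r in records:
        blocks[key(r)].append(r)
    for group in blocks.values():
        yield from combinations(group, 2)

phonetic = lambda s: s[:1].upper()          # stand-in for Soundex etc.
key = lambda r: (phonetic(r["surname"]), r["zip"])

records = [
    {"surname": "smith", "zip": "94701"},
    {"surname": "smyth", "zip": "94701"},
    {"surname": "jones", "zip": "10001"},
]
print(list(candidate_pairs(records, key)))  # only the two Smiths compared
```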
Machine learning.
In recent years, a variety of machine learning techniques have been used in record linkage. It has been recognized that the classic Fellegi-Sunter algorithm for probabilistic record linkage outlined above is equivalent to the Naive Bayes algorithm in the field of machine learning, and suffers from the same assumption of the independence of its features (an assumption that is typically not true). Higher accuracy can often be achieved by using various other machine learning techniques, including a single-layer perceptron, random forest, and SVM. In conjunction with distributed technologies, accuracy and scale for record linkage can be improved further.
Human-machine hybrid record linkage.
High quality record linkage often requires a human–machine hybrid system to safely manage uncertainty in the ever changing streams of chaotic big data. Recognizing that linkage errors propagate into the linked data and its analysis, interactive record linkage systems have been proposed. Interactive record linkage is defined as people iteratively fine-tuning the results from the automated methods and managing the uncertainty and its propagation to subsequent analyses. The main objective of interactive record linkage systems is to manually resolve uncertain linkages and validate the results until they are at acceptable levels for the given application. Variations of interactive record linkage that enhance privacy during the human interaction steps have also been proposed.
Privacy-preserving record linkage.
Record linkage is increasingly required across databases held by different organisations, where the complementary data held by these organisations can, for example, help to identify patients that are susceptible to certain adverse drug reactions (linking hospital, doctor, pharmacy databases). In many such applications, however, the databases to be linked contain sensitive information about people which cannot be shared between the organisations.
Privacy-preserving record linkage (PPRL) methods have been developed with the aim of linking databases without the need to share the original sensitive values between the organisations that participate in a linkage. In PPRL, generally the attribute values of records to be compared are encoded or encrypted in some form. A popular encoding technique is the Bloom filter, which allows approximate similarities to be calculated between encoded values without the need to share the corresponding sensitive plain-text values. At the end of the PPRL process only limited information about the record pairs classified as matches is revealed to the organisations that participate in the linkage process. The techniques used in PPRL must guarantee that no participating organisation, nor any external adversary, can compromise the privacy of the entities that are represented by records in the databases being linked.
Mathematical model.
In an application with two files, A and B, denote the rows ("records") by formula_5 in file A and formula_6 in file B. Assign formula_7 "characteristics" to each record. The set of records that represent identical entities is defined by
formula_8
and the complement of set formula_9, namely set formula_10 representing different entities is defined as
formula_11.
A vector, formula_12 is defined, that contains the coded agreements and disagreements on each characteristic:
formula_13
where formula_7 is a subscript for the characteristics (sex, age, marital status, etc.) in the files. The conditional probabilities of observing a specific vector formula_12 given formula_14, formula_15 are defined as
formula_16
and
formula_17
respectively.
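A minimal sketch of the comparison vector formula_12 for one record pair, coding agreement on each characteristic as 1 and disagreement as 0 (the binary coding and the field names are choices of this illustration):

```python
def gamma(rec_a, rec_b, characteristics):
    """Comparison vector for a record pair: 1 where the records
    agree on a characteristic, 0 where they disagree."""
    return tuple(int(rec_a[k] == rec_b[k]) for k in characteristics)

a = {"sex": "m", "age": 51, "marital_status": "married"}
b = {"sex": "m", "age": 52, "marital_status": "married"}
print(gamma(a, b, ("sex", "age", "marital_status")))  # (1, 0, 1)
```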
Applications.
Master data management.
Most Master data management (MDM) products use a record linkage process to identify records from different sources representing the same real-world entity. This linkage is used to create a "golden master record" containing the cleaned, reconciled data about the entity. The techniques used in MDM are the same as for record linkage generally. MDM expands this matching not only to create a "golden master record" but to infer relationships also. (i.e. a person has a same/similar surname and same/similar address, this might imply they share a household relationship).
Data warehousing and business intelligence.
Record linkage plays a key role in data warehousing and business intelligence. Data warehouses serve to combine data from many different operational source systems into one logical data model, which can then be subsequently fed into a business intelligence system for reporting and analytics. Each operational source system may have its own method of identifying the same entities used in the logical data model, so record linkage between the different sources becomes necessary to ensure that the information about a particular entity in one source system can be seamlessly compared with information about the same entity from another source system. Data standardization and subsequent record linkage often occur in the "transform" portion of the extract, transform, load (ETL) process.
Historical research.
Record linkage is important to social history research since most data sets, such as census records and parish registers were recorded long before the invention of National identification numbers. When old sources are digitized, linking of data sets is a prerequisite for longitudinal study. This process is often further complicated by lack of standard spelling of names, family names that change according to place of dwelling, changing of administrative boundaries, and problems of checking the data against other sources. Record linkage was among the most prominent themes in the History and computing field in the 1980s, but has since been subject to less attention in research.
Medical practice and research.
Record linkage is an important tool in creating data required for examining the health of the public and of the health care system itself. It can be used to improve data holdings, data collection, quality assessment, and the dissemination of information. Data sources can be examined to eliminate duplicate records, to identify under-reporting and missing cases (e.g., census population counts), to create person-oriented health statistics, and to generate disease registries and health surveillance systems. Some cancer registries link various data sources (e.g., hospital admissions, pathology and clinical reports, and death registrations) to generate their registries. Record linkage is also used to create health indicators. For example, fetal and infant mortality is a general indicator of a country's socioeconomic development, public health, and maternal and child services. If infant death records are matched to birth records, it is possible to use birth variables, such as birth weight and gestational age, along with mortality data, such as cause of death, in analyzing the data. Linkages can help in follow-up studies of cohorts or other groups to determine factors such as vital status, residential status, or health outcomes. Tracing is often needed for follow-up of industrial cohorts, clinical trials, and longitudinal surveys to obtain the cause of death and/or cancer. An example of a successful and long-standing record linkage system allowing for population-based medical research is the Rochester Epidemiology Project based in Rochester, Minnesota.
Criticism of existing software implementations.
The main reasons cited are:
See also.
<templatestyles src="Div col/styles.css"/>
Notes and references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "u"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "1/12 \\approx 0.083"
},
{
"math_id": 3,
"text": "1.0"
},
{
"math_id": 4,
"text": "0.95"
},
{
"math_id": 5,
"text": "\\alpha (a)"
},
{
"math_id": 6,
"text": "\\beta (b)"
},
{
"math_id": 7,
"text": "K"
},
{
"math_id": 8,
"text": " M = \\left\\{ (a,b); a=b; a \\in A; b \\in B \\right\\} "
},
{
"math_id": 9,
"text": "M"
},
{
"math_id": 10,
"text": "U"
},
{
"math_id": 11,
"text": " U = \\{ (a,b); a \\neq b; a \\in A; b \\in B \\} "
},
{
"math_id": 12,
"text": "\\gamma"
},
{
"math_id": 13,
"text": " \\gamma \\left[ \\alpha ( a ), \\beta ( b ) \\right] = \\{ \\gamma^{1} \\left[ \\alpha ( a ) , \\beta ( b ) \\right] ,..., \\gamma^{K} \\left[ \\alpha ( a ), \\beta ( b ) \\right] \\} "
},
{
"math_id": 14,
"text": "(a, b) \\in M"
},
{
"math_id": 15,
"text": "(a, b) \\in U"
},
{
"math_id": 16,
"text": "\n m(\\gamma) = P \\left\\{ \\gamma \\left[ \\alpha (a), \\beta (b) \\right] | (a,b) \\in M \\right\\} =\n \\sum_{(a, b) \\in M} P \\left\\{\\gamma\\left[ \\alpha(a), \\beta(b) \\right] \\right\\} \\cdot\n P \\left[ (a, b) | M\\right]\n"
},
{
"math_id": 17,
"text": "\n u(\\gamma) = P \\left\\{ \\gamma \\left[ \\alpha (a), \\beta (b) \\right] | (a,b) \\in U \\right\\} =\n \\sum_{(a, b) \\in U} P \\left\\{\\gamma\\left[ \\alpha(a), \\beta(b) \\right] \\right\\} \\cdot\n P \\left[ (a, b) | U\\right],\n"
}
] | https://en.wikipedia.org/wiki?curid=978951 |
9790950 | Regular category | Mathematical category with finite limits and coequalizers
In category theory, a regular category is a category with finite limits and coequalizers of a pair of morphisms called kernel pairs, satisfying certain "exactness" conditions. In that way, regular categories recapture many properties of abelian categories, like the existence of "images", without requiring additivity. At the same time, regular categories provide a foundation for the study of a fragment of first-order logic, known as regular logic.
Definition.
A category "C" is called regular if it satisfies the following three properties:
is a pullback, then the coequalizer of "p"0, "p"1 exists. The pair ("p"0, "p"1) is called the kernel pair of "f". Being a pullback, the kernel pair is unique up to a unique isomorphism.
is a pullback, and if "f" is a regular epimorphism, then "g" is a regular epimorphism as well. A regular epimorphism is an epimorphism that appears as a coequalizer of some pair of morphisms.
Examples.
Examples of regular categories include:
The following categories are "not" regular:
Epi-mono factorization.
In a regular category, the regular epimorphisms and the monomorphisms form a factorization system. Every morphism "f:X→Y" can be factorized into a regular epimorphism "e:X→E" followed by a monomorphism "m:E→Y", so that "f=me". The factorization is unique in the sense that if "e':X→E'" is another regular epimorphism and "m':E'→Y" is another monomorphism such that "f=m'e'", then there exists an isomorphism "h:E→E'" such that "he=e'" and "m'h=m". The monomorphism "m" is called the image of "f".
Exact sequences and regular functors.
In a regular category, a diagram of the form formula_0 is said to be an exact sequence if it is both a coequalizer and a kernel pair. The terminology is a generalization of exact sequences in homological algebra: in an abelian category, a diagram
formula_1
is exact in this sense if and only if formula_2 is a short exact sequence in the usual sense.
A functor between regular categories is called regular, if it preserves finite limits and coequalizers of kernel pairs. A functor is regular if and only if it preserves finite limits and exact sequences. For this reason, regular functors are sometimes called exact functors. Functors that preserve finite limits are often said to be left exact.
Regular logic and regular categories.
Regular logic is the fragment of first-order logic that can express statements of the form
formula_3,
where formula_4 and formula_5 are regular formulae i.e. formulae built up from atomic formulae, the truth constant, binary meets (conjunction) and existential quantification. Such formulae can be interpreted in a regular category, and the interpretation is a model of a sequent formula_3, if the interpretation of formula_6 factors through the interpretation of formula_7. This gives for each theory (set of sequents) "T" and for each regular category "C" a category Mod("T",C) of models of "T" in "C". This construction gives a functor Mod("T",-):RegCat→Cat from the category RegCat of small regular categories and regular functors to small categories. It is an important result that for each theory "T" there is a regular category "R(T)", such that for each regular category "C" there is an equivalence
formula_8,
which is natural in "C". Here, "R(T)" is called the "classifying" category of the regular theory "T." Up to equivalence any small regular category arises in this way as the classifying category of some regular theory.
Exact (effective) categories.
The theory of equivalence relations is a regular theory. An equivalence relation on an object formula_9 of a regular category is a monomorphism into formula_10 that satisfies the interpretations of the conditions for reflexivity, symmetry and transitivity.
Every kernel pair formula_11 defines an equivalence relation formula_12. Conversely, an equivalence relation is said to be effective if it arises as a kernel pair. An equivalence relation is effective if and only if it has a coequalizer and it is the kernel pair of this.
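In the category of finite sets this is completely concrete: the kernel pair of a function consists of all pairs the function identifies, and its coequalizer is the quotient by the induced equivalence relation. A minimal Python sketch (the names are illustrative):

```python
def kernel_pair(f, X):
    """Kernel pair of f : X -> Y in Set, as a relation on X."""
    return {(a, b) for a in X for b in X if f(a) == f(b)}

def coequalizer(R, X):
    """Quotient of X by the equivalence relation R; this is the
    coequalizer of the two projections R -> X."""
    return {frozenset(b for b in X if (a, b) in R) for a in X}

X = {0, 1, 2, 3}
f = lambda x: x % 2                        # f : X -> {0, 1}
print(coequalizer(kernel_pair(f, X), X))   # {{0, 2}, {1, 3}}
# The classes correspond bijectively to the image of f, so this
# equivalence relation is effective.
```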
A regular category is said to be exact, or exact in the sense of Barr, or effective regular, if every equivalence relation is effective. (Note that the term "exact category" is also used differently, for the exact categories in the sense of Quillen.)
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "R\\rightrightarrows X\\to Y"
},
{
"math_id": 1,
"text": "R\\;\\overset r{\\underset s\\rightrightarrows}\\; X\\xrightarrow{f} Y"
},
{
"math_id": 2,
"text": "0\\to R\\xrightarrow{(r,s)}X\\oplus X\\xrightarrow{(f,-f)} Y\\to 0"
},
{
"math_id": 3,
"text": "\\forall x (\\phi (x) \\to \\psi (x))"
},
{
"math_id": 4,
"text": "\\phi"
},
{
"math_id": 5,
"text": "\\psi"
},
{
"math_id": 6,
"text": "\\phi "
},
{
"math_id": 7,
"text": " \\psi"
},
{
"math_id": 8,
"text": "\\mathbf{Mod}(T,C)\\cong \\mathbf{RegCat}(R(T),C)"
},
{
"math_id": 9,
"text": "X"
},
{
"math_id": 10,
"text": "X \\times X"
},
{
"math_id": 11,
"text": "p_0, p_1: R \\rightarrow X"
},
{
"math_id": 12,
"text": "R \\rightarrow X \\times X"
}
] | https://en.wikipedia.org/wiki?curid=9790950 |
9790951 | Hydraulic pump | Mechanical power source
A hydraulic pump is a mechanical source of power that converts mechanical power into hydraulic energy (hydrostatic energy, i.e. flow and pressure). Hydraulic pumps are used in hydraulic drive systems and can be hydrostatic or hydrodynamic. They generate flow with enough power to overcome the pressure induced by a load at the pump outlet. When a hydraulic pump operates, it creates a vacuum at the pump inlet, which forces liquid from the reservoir into the inlet line to the pump; by mechanical action it delivers this liquid to the pump outlet and forces it into the hydraulic system.
Hydrostatic pumps are positive displacement pumps; they can be fixed displacement pumps, in which the displacement (flow through the pump per rotation of the pump) cannot be adjusted, or variable displacement pumps, which have a more complicated construction that allows the displacement to be adjusted. Hydrodynamic pumps are more frequent in day-to-day life. Hydrostatic pumps of various types all work on the principle of Pascal's law.
Types of hydraulic pump.
Gear pumps.
Gear pumps (with external teeth) (fixed displacement) are simple and economical pumps. The swept volume or displacement of gear pumps for hydraulics is typically between about 1 and 200 milliliters per revolution. They have the lowest volumetric efficiency (formula_0) of all three basic pump types (gear, vane and piston pumps). These pumps create pressure through the meshing of the gear teeth, which forces fluid around the gears to pressurize the outlet side. Some gear pumps can be quite noisy compared to other types, but modern gear pumps are highly reliable and much quieter than older models. This is in part due to designs incorporating split gears, helical gear teeth and higher precision/quality tooth profiles that mesh and unmesh more smoothly, reducing pressure ripple and related detrimental problems. Another positive attribute of the gear pump is that catastrophic breakdown is much less common than in most other types of hydraulic pumps: the gears gradually wear down the housing and/or main bushings, reducing the volumetric efficiency of the pump gradually until it is all but useless, and this often happens long before wear causes the unit to seize or break down. Hydraulic gear pumps are used in various applications with different requirements, such as lifting, lowering, opening, closing or rotating, and they are expected to be safe and long-lasting.
Rotary vane pumps.
A rotary vane pump is a positive-displacement pump that consists of vanes mounted to a rotor that rotates inside a cavity. In some cases these vanes can have variable length and/or be tensioned to maintain contact with the walls as the pump rotates. A critical element in vane pump design is how the vanes are pushed into contact with the pump housing, and how the vane tips are machined at this very point. Several type of "lip" designs are used, and the main objective is to provide a tight seal between the inside of the housing and the vane, and at the same time to minimize wear and metal-to-metal contact. Forcing the vane out of the rotating centre and towards the pump housing is accomplished using spring-loaded vanes, or more traditionally, vanes loaded hydrodynamically (via the pressurized system fluid).
Screw pumps.
Screw pumps (fixed displacement) consist of two Archimedes' screws that intermesh and are enclosed within the same chamber. These pumps are used for high flows at relatively low pressure. They were used on board ships where a constant-pressure hydraulic system extended through the whole ship, especially to control ball valves but also to help drive the steering gear and other systems. The advantage of screw pumps is their low sound level; however, their efficiency is not high.
The major problem of screw pumps is that the hydraulic reaction force is transmitted in a direction that is axially opposed to the direction of the flow.
There are two ways to overcome this problem:
Types of screw pumps:
Bent axis pumps.
Bent axis pumps, axial piston pumps and motors using the bent axis principle (fixed or adjustable displacement), exist in two different basic designs: the Thoma principle (engineer Hans Thoma, Germany, patent 1935) with a maximum 25-degree angle, and the Wahlmark principle (Gunnar Axel Wahlmark, patent 1960) with spherical-shaped pistons in one piece with the piston rod, piston rings, and a maximum of 40 degrees between the driveshaft centerline and the pistons (Volvo Hydraulics Co.). These have the best efficiency of all pumps. Although in general the largest displacements are approximately one litre per revolution, a two-litre swept volume pump can be built if necessary. Often variable-displacement pumps are used so that the oil flow can be adjusted carefully. These pumps can in general work with a working pressure of up to 350–420 bar in continuous operation.
Inline axial piston pumps.
By using different compensation techniques, the variable displacement type of these pumps can continuously alter fluid discharge per revolution and system pressure based on load requirements, maximum pressure cut-off settings, horsepower/ratio control, and even fully electro proportional systems, requiring no other input than electrical signals. This makes them potentially hugely power saving compared to other constant flow pumps in systems where prime mover/diesel/electric motor rotational speed is constant and required fluid flow is non-constant.
Radial piston pumps.
A radial piston pump is a form of hydraulic pump. The working pistons extend in a radial direction symmetrically around the drive shaft, in contrast to the axial piston pump.
Hydraulic pumps, calculation formulas.
Flow.
formula_1
where formula_2 is the delivered flow (m3/s), formula_3 is the shaft speed (revolutions per second), formula_4 is the pump displacement per revolution (m3), and formula_5 is the volumetric efficiency.
Power.
formula_6
where formula_7 is the required input power (W), formula_8 is the pressure difference across the pump (Pa), and formula_9 is the mechanical/hydraulic efficiency.
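A minimal sketch of the two formulas above in Python, using SI units (the variable names and the numbers in the example are illustrative):

```python
def pump_flow(n, v_stroke, eta_vol):
    """Delivered flow Q = n * V_stroke * eta_vol  [m^3/s]."""
    return n * v_stroke * eta_vol

def pump_power(n, v_stroke, delta_p, eta_mech):
    """Required input power P = n * V_stroke * dp / eta_mech  [W]."""
    return n * v_stroke * delta_p / eta_mech

# Example: 25 rev/s, 50 mL/rev displacement, 200 bar pressure rise.
n, v_stroke = 25.0, 50e-6                  # rev/s, m^3 per revolution
q = pump_flow(n, v_stroke, eta_vol=0.95)
p = pump_power(n, v_stroke, delta_p=200e5, eta_mech=0.90)
print(f"Q = {q * 1000:.2f} L/s, P = {p / 1000:.1f} kW")  # 1.19 L/s, 27.8 kW
```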
Mechanical efficiency.
formula_10
where formula_11 is the mechanical efficiency, formula_12 is the theoretical torque required, and formula_13 is the actual torque required.
Hydraulic efficiency.
formula_14
where formula_15 is the hydraulic efficiency, formula_16 is the theoretical flow, and formula_17 is the actual flow delivered. | [
{
"math_id": 0,
"text": " \\eta_v \\approx 90 \\% "
},
{
"math_id": 1,
"text": "Q = n \\cdot V_\\text{stroke} \\cdot \\eta_\\text{vol}"
},
{
"math_id": 2,
"text": "\\scriptstyle Q"
},
{
"math_id": 3,
"text": "\\scriptstyle n"
},
{
"math_id": 4,
"text": "\\scriptstyle V_\\text{stroke}"
},
{
"math_id": 5,
"text": "\\scriptstyle \\eta_\\text{vol}"
},
{
"math_id": 6,
"text": "P = {n \\cdot V_\\text{stroke} \\cdot \\Delta p \\over \\eta_\\text{mech}}"
},
{
"math_id": 7,
"text": "\\scriptstyle P"
},
{
"math_id": 8,
"text": "\\scriptstyle \\Delta p"
},
{
"math_id": 9,
"text": "\\scriptstyle \\eta_\\text{mech,hydr}"
},
{
"math_id": 10,
"text": " n_\\text{mech} = {T_\\text{theoretical} \\over T_\\text{actual}} \\cdot 100\\%"
},
{
"math_id": 11,
"text": "\\scriptstyle n_\\text{mech}"
},
{
"math_id": 12,
"text": "\\scriptstyle T_\\text{theoretical}"
},
{
"math_id": 13,
"text": "\\scriptstyle T_\\text{actual}"
},
{
"math_id": 14,
"text": " n_{hydr} = {Q_{actual} \\over Q_{theoretical}} \\cdot 100\\%"
},
{
"math_id": 15,
"text": "\\scriptstyle n_{hydr}"
},
{
"math_id": 16,
"text": "\\scriptstyle Q_{theoretical}"
},
{
"math_id": 17,
"text": "\\scriptstyle Q_{actual}"
}
] | https://en.wikipedia.org/wiki?curid=9790951 |
97911 | Size-exclusion chromatography | Chromatographic method in which dissolved molecules are separated by their size & molecular weight
Size-exclusion chromatography, also known as molecular sieve chromatography, is a chromatographic method in which molecules in solution are separated by their size, and in some cases their molecular weight. It is usually applied to large molecules or macromolecular complexes such as proteins and industrial polymers. Typically, when an aqueous solution is used to transport the sample through the column, the technique is known as gel-filtration chromatography, versus the name gel permeation chromatography, which is used when an organic solvent is used as a mobile phase. The chromatography column is packed with fine, porous beads which are commonly composed of dextran, agarose, or polyacrylamide polymers. The pore sizes of these beads are used to estimate the dimensions of macromolecules. SEC is a widely used polymer characterization method because of its ability to provide good molar mass distribution (Mw) results for polymers.
Size exclusion chromatography (SEC) is fundamentally different from all other chromatographic techniques in that separation is based on a simple procedure of classifying molecule sizes rather than any type of interaction.
Applications.
The main application of size-exclusion chromatography is the fractionation of proteins and other water-soluble polymers, while gel permeation chromatography is used to analyze the molecular weight distribution of organic-soluble polymers. Neither technique should be confused with gel electrophoresis, where an electric field is used to "pull" molecules through the gel depending on their electrical charges. The amount of time a solute remains within a pore depends on the size of the pore: larger solutes have access to a smaller volume and vice versa. Therefore, a smaller solute will remain within the pores for a longer period of time than a larger solute.
Even though size exclusion chromatography is widely utilized to study natural organic material, there are limitations. One of these limitations include that there is no standard molecular weight marker; thus, there is nothing to compare the results back to. If precise molecular weight is required, other methods should be used.
Advantages.
The advantages of this method include good separation of large molecules from the small molecules with a minimal volume of eluate, and that various solutions can be applied without interfering with the filtration process, all while preserving the biological activity of the particles to separate. The technique is generally combined with others that further separate molecules by other characteristics, such as acidity, basicity, charge, and affinity for certain compounds. With size exclusion chromatography, there are short and well-defined separation times and narrow bands, which lead to good sensitivity. There is also no sample loss because solutes do not interact with the stationary phase.
The other advantage to this experimental method is that in certain cases, it is feasible to determine the approximate molecular weight of a compound. The shape and size of the compound (eluent) determine how the compound interacts with the gel (stationary phase). To determine approximate molecular weight, the elution volumes of compounds with their corresponding molecular weights are obtained and then a plot of "Kav" vs "log(Mw)" is made, where formula_0 and Mw is the molecular mass. This plot acts as a calibration curve, which is used to approximate the desired compound's molecular weight. The Ve component represents the volume at which the intermediate molecules elute, such as molecules that have partial access to the beads of the column. In addition, Vt is the sum of the total volume between the beads and the volume within the beads. The Vo component represents the volume at which the larger molecules elute, which elute in the beginning. Disadvantages are, for example, that only a limited number of bands can be accommodated because the time scale of the chromatogram is short, and, in general, there must be a 10% difference in molecular mass to have a good resolution.
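A minimal numerical sketch of this calibration in Python with NumPy, assuming the standard definition Kav = (Ve - Vo)/(Vt - Vo); the column volumes and standards below are made-up illustration values, not real data:

```python
import numpy as np

def kav(ve, vo, vt):
    """Partition coefficient K_av = (Ve - Vo) / (Vt - Vo)."""
    return (ve - vo) / (vt - vo)

# Void volume, total volume and calibration standards (volumes in mL,
# molecular weights in Da) -- illustrative values only.
vo, vt = 8.0, 24.0
standards_ve = np.array([10.0, 13.0, 16.0, 19.0])
standards_mw = np.array([440e3, 150e3, 44e3, 13.7e3])

# Linear calibration: Kav = a * log10(Mw) + c
a, c = np.polyfit(np.log10(standards_mw),
                  kav(standards_ve, vo, vt), deg=1)

# Estimate the molecular weight of an unknown from its elution volume.
ve_unknown = 14.5
mw_est = 10 ** ((kav(ve_unknown, vo, vt) - c) / a)
print(f"estimated Mw ~ {mw_est:.3g} Da")
```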
Discovery.
The technique was invented in 1955 by Grant Henry Lathe and Colin R Ruthven, working at Queen Charlotte's Hospital, London. They later received the John Scott Award for this invention. While Lathe and Ruthven used starch gels as the matrix, Jerker Porath and Per Flodin later introduced dextran gels; other gels with size fractionation properties include agarose and polyacrylamide. A short review of these developments has appeared.
There were also attempts to fractionate synthetic high polymers; however, it was not until 1964, when J. C. Moore of the Dow Chemical Company published his work on the preparation of gel permeation chromatography (GPC) columns based on cross-linked polystyrene with controlled pore size, that a rapid increase of research activity in this field began. It was recognized almost immediately that with proper calibration, GPC was capable to provide molar mass and molar mass distribution information for synthetic polymers. Because the latter information was difficult to obtain by other methods, GPC came rapidly into extensive use.
Theory and method.
SEC is used primarily for the analysis of large molecules such as proteins or polymers. SEC works by trapping smaller molecules in the pores of the adsorbent ("stationary phase"). This process is usually performed within a column, which typically consists of a hollow tube tightly packed with micron-scale polymer beads containing pores of different sizes. These pores may be depressions on the surface or channels through the bead. As the solution travels down the column, some particles enter the pores. Larger particles cannot enter as many pores. The larger the particles, the faster the elution. The larger molecules simply pass by the pores because those molecules are too large to enter the pores. Larger molecules therefore flow through the column more quickly than smaller molecules; that is, the smaller the molecule, the longer the retention time.
One requirement for SEC is that the analyte does not interact with the surface of the stationary phases, with differences in elution time between analytes ideally being based solely on the solute volume the analytes can enter, rather than chemical or electrostatic interactions with the stationary phases. Thus, a small molecule that can penetrate every region of the stationary phase pore system can enter a total volume equal to the sum of the entire pore volume and the interparticle volume. This small molecule elutes late (after the molecule has penetrated all of the pore and interparticle volume, approximately 80% of the column volume). At the other extreme, a very large molecule that cannot penetrate any of the pores can enter only the interparticle volume (~35% of the column volume) and elutes earlier, when this volume of mobile phase has passed through the column. The underlying principle of SEC is that particles of different sizes elute (filter) through a stationary phase at different rates. This results in the separation of a solution of particles based on size. Provided that all the particles are loaded simultaneously or near-simultaneously, particles of the same size should elute together.
However, as there are various measures of the size of a macromolecule (for instance, the radius of gyration and the hydrodynamic radius), a fundamental problem in the theory of SEC has been the choice of a proper molecular size parameter by which molecules of different kinds are separated. Experimentally, Benoit and co-workers found an excellent correlation between elution volume and a dynamically based molecular size, the hydrodynamic volume, for several different chain architecture and chemical compositions. The observed correlation based on the hydrodynamic volume became accepted as the basis of universal SEC calibration.
Still, the use of the hydrodynamic volume, a size based on dynamical properties, in the interpretation of SEC data is not fully understood. This is because SEC is typically run under low flow rate conditions where hydrodynamic factor should have little effect on the separation. In fact, both theory and computer simulations assume a thermodynamic separation principle: the separation process is determined by the equilibrium distribution (partitioning) of solute macromolecules between two phases: a dilute bulk solution phase located at the interstitial space and confined solution phases within the pores of column packing material. Based on this theory, it has been shown that the relevant size parameter to the partitioning of polymers in pores is the mean span dimension (mean maximal projection onto a line). Although this issue has not been fully resolved, it is likely that the mean span dimension and the hydrodynamic volume are strongly correlated.
Each size exclusion column has a range of molecular weights that can be separated. The exclusion limit defines the molecular weight at the upper end of the column 'working' range and is where molecules are too large to get trapped in the stationary phase. The lower end of the range is defined by the permeation limit, which defines the molecular weight of a molecule that is small enough to penetrate all pores of the stationary phase. All molecules below this molecular mass are so small that they elute as a single band.
The filtered solution that is collected at the end is known as the eluate. The void volume includes any particles too large to enter the medium, and the solvent volume is known as the column volume.
The following materials are commonly used for porous gel beads in size-exclusion chromatography:
Factors affecting filtration.
In real-life situations, particles in solution do not have a fixed size, resulting in the probability that a particle that would otherwise be hampered by a pore passing right by it. Also, the stationary-phase particles are not ideally defined; both particles and pores may vary in size. Elution curves, therefore, resemble Gaussian distributions. The stationary phase may also interact in undesirable ways with a particle and influence retention times, though great care is taken by column manufacturers to use stationary phases that are inert and minimize this issue.
As in other forms of chromatography, increasing the column length enhances resolution, and increasing the column diameter increases column capacity. Proper column packing is important for maximum resolution: an over-packed column can collapse the pores in the beads, resulting in a loss of resolution, while an under-packed column can reduce the relative surface area of the stationary phase accessible to smaller species, resulting in those species spending less time trapped in pores. Unlike in affinity chromatography techniques, a solvent head at the top of the column can drastically diminish resolution as the sample diffuses prior to loading, broadening the downstream elution.
Analysis.
In simple manual columns, the eluent is collected in constant volumes, known as fractions. The more similar the particles are in size the more likely they are in the same fraction and not detected separately. More advanced columns overcome this problem by constantly monitoring the eluent.
The collected fractions are often examined by spectroscopic techniques to determine the concentration of the particles eluted. Common spectroscopy detection techniques are refractive index (RI) and ultraviolet (UV). When eluting spectroscopically similar species (such as during biological purification), other techniques may be necessary to identify the contents of each fraction. It is also possible to analyze the eluent flow continuously with RI, LALLS, Multi-Angle Laser Light Scattering MALS, UV, and/or viscosity measurements.
The elution volume (Ve) decreases roughly linearly with the logarithm of the molecular hydrodynamic volume. Columns are often calibrated using 4–5 standard samples (e.g., folded proteins of known molecular weight) and a sample containing a very large molecule such as thyroglobulin to determine the void volume. (Blue dextran is not recommended for Vo determination because it is heterogeneous and may give variable results.) The elution volumes of the standards are divided by the elution volume of the thyroglobulin (Ve/Vo) and plotted against the log of the standards' molecular weights.
Applications.
Biochemical applications.
In general, SEC is considered a low-resolution chromatography as it does not discern similar species very well, and is therefore often reserved for the final step of a purification. The technique can determine the quaternary structure of purified proteins that have slow exchange times, since it can be carried out under native solution conditions, preserving macromolecular interactions. SEC can also assay protein tertiary structure, as it measures the hydrodynamic volume (not molecular weight), allowing folded and unfolded versions of the same protein to be distinguished. For example, the apparent hydrodynamic radius of a typical protein domain might be 14 Å and 36 Å for the folded and unfolded forms, respectively. SEC allows the separation of these two forms, as the folded form elutes much later due to its smaller size.
Polymer synthesis.
SEC can be used as a measure of both the size and the polydispersity of a synthesized polymer, that is, the ability to find the distribution of the sizes of polymer molecules. If standards of a known size are run previously, then a calibration curve can be created to determine the sizes of polymer molecules of interest in the solvent chosen for analysis (often THF). Alternatively, techniques such as light scattering and/or viscometry can be used online with SEC to yield absolute molecular weights that do not rely on calibration with standards of known molecular weight. Due to the difference in size of two polymers with identical molecular weights, the absolute determination methods are, in general, more desirable. A typical SEC system can quickly (in about half an hour) give polymer chemists information on the size and polydispersity of the sample. Preparative SEC can be used for polymer fractionation on an analytical scale.
Drawbacks.
In SEC, mass is not measured so much as the hydrodynamic volume of the polymer molecules, that is, how much space a particular polymer molecule takes up when it is in solution. However, the approximate molecular weight can be calculated from SEC data because the exact relationship between molecular weight and hydrodynamic volume for polystyrene can be found. For this, polystyrene is used as a standard. But the relationship between hydrodynamic volume and molecular weight is not the same for all polymers, so only an approximate measurement can be obtained.
Another drawback is the possibility of interaction between the stationary phase and the analyte. Any interaction leads to a later elution time and thus mimics a smaller analyte size.
When performing this method, the bands of the eluting molecules may be broadened. This can occur by turbulence caused by the flow of the mobile phase molecules passing through the molecules of the stationary phase. In addition, molecular thermal diffusion and friction between the molecules of the glass walls and the molecules of the eluent contribute to the broadening of the bands. Besides broadening, the bands also overlap with each other. As a result, the eluent usually gets considerably diluted. A few precautions can be taken to prevent the likelihood of the bands broadening. For instance, one can apply the sample in a narrow, highly concentrated band on the top of the column. The more concentrated the eluent is, the more efficient the procedure would be. However, it is not always possible to concentrate the eluent, which can be considered as one more disadvantage.
Absolute size-exclusion chromatography.
Absolute size-exclusion chromatography (ASEC) is a technique that couples a light scattering instrument, most commonly multi-angle light scattering (MALS) or another form of static light scattering (SLS), but possibly a dynamic light scattering (DLS) instrument, to a size-exclusion chromatography system for absolute molar mass and/or size measurements of proteins and macromolecules as they elute from the chromatography system.
The definition of "absolute" in this case is that calibration of retention time on the column with a set of reference standards is not required to obtain molar mass or the hydrodynamic size, often referred to as the hydrodynamic diameter (DH, in units of nm). Non-ideal column interactions, such as electrostatic or hydrophobic surface interactions that modulate retention time relative to standards, do not impact the final result. Likewise, differences between the conformation of the analyte and that of the standard have no effect on an absolute measurement; for example, with MALS analysis, the molar masses of inherently disordered proteins are characterized accurately even though they elute at much earlier times than globular proteins with the same molar mass, and the same is true of branched polymers, which elute late compared to linear reference standards with the same molar mass. Another benefit of ASEC is that the molar mass and/or size is determined at each point in an eluting peak, and therefore indicates homogeneity or polydispersity within the peak. For example, SEC-MALS analysis of a monodisperse protein will show that the entire peak consists of molecules with the same molar mass, something that is not possible with standard SEC analysis.
Determination of molar mass with SLS requires combining the light scattering measurements with concentration measurements. Therefore SEC-MALS typically includes the light scattering detector and either a differential refractometer or UV/Vis absorbance detector. In addition, MALS determines the rms radius Rg of molecules above a certain size limit, typically 10 nm. SEC-MALS can therefore analyze the conformation of polymers via the relationship of molar mass to Rg. For smaller molecules, either DLS or, more commonly, a differential viscometer is added to determine hydrodynamic radius and evaluate molecular conformation in the same manner.
In SEC-DLS, the sizes of the macromolecules are measured as they elute into the flow cell of the DLS instrument from the size-exclusion column set. The hydrodynamic size of the molecules or particles is measured, not their molecular weight. For proteins, a Mark-Houwink type of calculation can be used to estimate the molecular weight from the hydrodynamic size.
A major advantage of DLS coupled with SEC is the ability to obtain enhanced DLS resolution. Batch DLS is quick and simple and provides a direct measure of the average size, but the baseline resolution of DLS is a ratio of 3:1 in diameter. Using SEC, the proteins and protein oligomers are separated, allowing oligomeric resolution. Aggregation studies can also be done using ASEC. Though the aggregate concentration may not be calculated with light scattering (an online concentration detector such as that used in SEC-MALS for molar mass measurement also determines aggregate concentration), the size of the aggregate can be measured, only limited by the maximum size eluting from the SEC columns.
Limitations of ASEC with DLS detection include flow-rate, concentration, and precision. Because a correlation function requires anywhere from 3–7 seconds to properly build, a limited number of data points can be collected across the peak. ASEC with SLS detection is not limited by flow rate and measurement time is essentially instantaneous, and the range of concentration is several orders of magnitude larger than for DLS. However, molar mass analysis with SEC-MALS does require accurate concentration measurements. MALS and DLS detectors are often combined in a single instrument for more comprehensive absolute analysis following separation by SEC.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K_{av} = (V_e-V_o)/(V_t-V_o)"
}
] | https://en.wikipedia.org/wiki?curid=97911 |
9791309 | Rename (relational algebra) | In relational algebra, a rename is a unary operation written as formula_0 where:
The result is identical to R except that the b attribute in all tuples is renamed to a. For example, ρ can be applied to an Employee relation to rename one of its attributes, such as EmployeeId, leaving the tuples otherwise unchanged.
Formally, the semantics of the rename operator is defined as follows:
formula_1
where formula_2 is defined as the tuple t, with the b attribute renamed to a, so that:
formula_3
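As an illustrative sketch of this definition, the following Python snippet models a relation as a set of tuples, each tuple a frozenset of (attribute, value) pairs, mirroring the formal semantics above; the Employee relation and its attribute names are invented for illustration.

def rename(relation, a, b):
    # rho_{a/b}(R): every attribute c != b is kept, and b is replaced by a,
    # exactly as in the definition of t[a/b] above.
    return {
        frozenset((a if c == b else c, v) for (c, v) in t)
        for t in relation
    }

# Hypothetical Employee relation with attributes Name and EmployeeId.
employee = {
    frozenset({("Name", "Harry"), ("EmployeeId", 3415)}),
    frozenset({("Name", "Sally"), ("EmployeeId", 2241)}),
}
renamed = rename(employee, "Id", "EmployeeId")  # computes rho_{Id/EmployeeId}(Employee)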
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rho_{a/b}(R)"
},
{
"math_id": 1,
"text": "\\rho_{a/b}(R) = \\{ \\ t[a/b] : t \\in R \\ \\},"
},
{
"math_id": 2,
"text": "t[a/b]"
},
{
"math_id": 3,
"text": "t[a/b] = \\{ \\ (c, v) \\ | \\ ( c, v ) \\in t, \\ c \\ne b \\ \\} \\cup \\{ \\ (a, \\ t(b) ) \\ \\}."
}
] | https://en.wikipedia.org/wiki?curid=9791309 |
97914 | Differential scanning calorimetry | Thermoanalytical technique
Differential scanning calorimetry (DSC) is a thermoanalytical technique in which the difference in the amount of heat required to increase the temperature of a sample and reference is measured as a function of temperature. Both the sample and reference are maintained at nearly the same temperature throughout the experiment.
Generally, the temperature program for a DSC analysis is designed such that the sample holder temperature increases linearly as a function of time. The reference sample should have a well-defined heat capacity over the range of temperatures to be scanned.
Additionally, the reference sample must be stable, of high purity, and must not experience much change across the temperature scan. Typically, reference standards have been metals such as indium, tin, bismuth, and lead, but other standards such as polyethylene and fatty acids have been proposed to study polymers and organic compounds, respectively.
The technique was developed by E. S. Watson and M. J. O'Neill in 1962, and introduced commercially at the 1963 Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy.
The first adiabatic differential scanning calorimeter that could be used in biochemistry was developed by P. L. Privalov and D. R. Monaselidze in 1964 at Institute of Physics in Tbilisi, Georgia. The term DSC was coined to describe this instrument, which measures energy directly and allows precise measurements of heat capacity.
Types.
There are two main types of DSC: "Heat-flux DSC" which measures the difference in heat flux between the sample and a reference (which gives it the alternative name "Multi-Cell DSC") and "Power differential DSC" which measures the difference in power supplied to the sample and a reference.
Heat-flux DSC.
With heat-flux DSC, the changes in heat flow are calculated by integrating the ΔT curve, where ΔT is the temperature difference between the sample and the reference. For this kind of experiment, a sample and a reference crucible are placed on a sample holder with integrated temperature sensors for temperature measurement of the crucibles. This arrangement is located in a temperature-controlled oven. Unlike the traditional design, the special feature of heat-flux DSC is that it uses flat temperature sensors placed vertically around a flat heater. This setup makes it possible to have a small, light, and low-heat-capacity structure while still working like a regular DSC oven.
Power differential DSC.
For this kind of setup, also known as "Power compensating DSC", the sample and reference crucible are placed in thermally insulated furnaces and not next to each other in the same furnace as in heat-flux-DSC experiments. Then the temperature of both chambers is controlled so that the same temperature is always present on both sides. The electrical power that is required to obtain and maintain this state is then recorded rather than the temperature difference between the two crucibles.
Fast-scan DSC.
The 2000s have witnessed the rapid development of Fast-scan DSC (FSC), a novel calorimetric technique that employs micromachined sensors. The key advances of this technique are the ultrahigh scanning rate, which can be as high as 10^6 K/s, and the ultrahigh sensitivity, with a heat capacity resolution typically better than 1 nJ/K.
Nanocalorimetry has attracted much attention in materials science, where it is applied to perform quantitative analysis of rapid phase transitions, particularly on fast cooling. Another emerging area of application of FSC is physical chemistry, with a focus on the thermophysical properties of thermally labile compounds. Quantities such as the fusion temperature, the fusion enthalpy, the sublimation and vaporization pressures, and the corresponding enthalpies of such molecules have become available.
Temperature Modulated DSC.
When performing Temperature Modulated DSC (TMDSC, MDSC), the underlying linear heating rate is superimposed by a sinusoidal temperature variation. The benefit of this procedure is the ability to separate overlapping DSC effects by calculating the reversing and the non-reversing signals. The reversing heat flow is related to the changes in specific heat capacity (→ glass transition) while the non-reversing heat flow corresponds to time-dependent phenomena such as curing, dehydration and relaxation.
Detection of phase transitions.
The basic principle underlying this technique is that when the sample undergoes a physical transformation such as phase transitions, more or less heat will need to flow to it than the reference to maintain both at the same temperature. Whether less or more heat must flow to the sample depends on whether the process is exothermic or endothermic.
For example, as a solid sample melts to a liquid, it will require more heat flowing to the sample to increase its temperature at the same rate as the reference. This is due to the absorption of heat by the sample as it undergoes the endothermic phase transition from solid to liquid. Likewise, as the sample undergoes exothermic processes (such as crystallization) less heat is required to raise the sample temperature. By observing the difference in heat flow between the sample and reference, differential scanning calorimeters are able to measure the amount of heat absorbed or released during such transitions. DSC may also be used to observe more subtle physical changes, such as glass transitions. It is widely used in industrial settings as a quality control instrument due to its applicability in evaluating sample purity and for studying polymer curing.
DTA.
An alternative technique, which shares much in common with DSC, is differential thermal analysis (DTA). In this technique it is the heat flow to the sample and reference that remains the same rather than the temperature. When the sample and reference are heated identically, phase changes and other thermal processes cause a difference in temperature between the sample and reference. Both DSC and DTA provide similar information. DSC measures the energy required to keep both the reference and the sample at the same temperature whereas DTA measures the difference in temperature between the sample and the reference when the same amount of energy has been introduced into both.
DSC curves.
The result of a DSC experiment is a curve of heat flux versus temperature or versus time. There are two different conventions: exothermic reactions in the sample may be shown with either a positive or a negative peak, depending on the kind of technology used in the experiment. This curve can be used to calculate enthalpies of transitions. This is done by integrating the peak corresponding to a given transition. It can be shown that the enthalpy of transition can be expressed using the following equation:
formula_0
where formula_1 is the enthalpy of transition, formula_2 is the calorimetric constant, and formula_3 is the area under the curve. The calorimetric constant will vary from instrument to instrument, and can be determined by analyzing a well-characterized sample with known enthalpies of transition.
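The following Python sketch illustrates the calculation: it integrates a baseline-corrected peak from a simulated heat flow trace, converts the temperature axis to time via the heating rate, and scales by the calorimetric constant. The trace, baseline, heating rate, constant, and sample mass are all invented for illustration.

import numpy as np
from scipy.integrate import trapezoid

T = np.linspace(140, 180, 400)                   # temperature program (deg C)
baseline = 0.5 + 0.002 * T                       # assumed linear instrument baseline (mW)
peak = 12.0 * np.exp(-((T - 156.6) / 2.0) ** 2)  # simulated endothermic melting peak (mW)
heat_flow = baseline + peak

heating_rate = 10.0 / 60.0                       # 10 deg C/min expressed in deg C/s
# Integrating heat flow (mJ/s) over T and dividing by dT/dt gives energy in mJ.
area = trapezoid(heat_flow - baseline, T) / heating_rate

K = 1.0               # calorimetric constant, assumed determined from an indium run
sample_mass_mg = 1.5  # assumed sample mass
print(f"Delta H ~ {K * area / sample_mass_mg:.1f} J/g")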
Applications.
Differential scanning calorimetry can be used to measure a number of characteristic properties of a sample. Using this technique it is possible to observe fusion and crystallization events as well as glass transition temperatures "Tg". DSC can also be used to study oxidation, as well as other chemical reactions.
Glass transitions may occur as the temperature of an amorphous solid is increased. These transitions appear as a step in the baseline of the recorded DSC signal. This is due to the sample undergoing a change in heat capacity; no formal phase change occurs.
As the temperature increases, an amorphous solid will become less viscous. At some point the molecules may obtain enough freedom of motion to spontaneously arrange themselves into a crystalline form. This is known as the crystallization temperature ("Tc"). This transition from amorphous solid to crystalline solid is an exothermic process, and results in a peak in the DSC signal. As the temperature increases the sample eventually reaches its melting temperature ("Tm"). The melting process results in an endothermic peak in the DSC curve. The ability to determine transition temperatures and enthalpies makes DSC a valuable tool in producing phase diagrams for various chemical systems.
Differential scanning calorimetry can also be used to obtain valuable thermodynamic information about proteins. Thermodynamic analysis of proteins can reveal important information about their global structure and about protein/ligand interactions. For example, many mutations lower the stability of proteins, while ligand binding usually increases protein stability. Using DSC, this stability can be measured by obtaining Gibbs free energy values at any given temperature. This allows researchers to compare the free energy of unfolding between ligand-free protein and protein-ligand complex, or between wild type and mutant proteins. DSC can also be used in studying protein/lipid interactions, nucleotides, and drug/lipid interactions. In studying protein denaturation using DSC, the thermal melt should be at least to some degree reversible, as the thermodynamic calculations rely on chemical equilibrium.
Experimental considerations.
There are various experimental and environmental parameters to consider during DSC measurements. Exemplary potential issues are briefly discussed in the following sections. All statements in these paragraphs are based on the books of Gabbott and Brown.
Crucibles.
DSC measurements without crucibles promote the thermal transfer towards the sample and are possible if the DSC is designed for this purpose. Measurements without crucible should only be conducted with chemically stable materials at low temperatures, as otherwise there may be contamination or damage of the calorimeter. The safer way is to use a crucible, which is specified for the desired temperatures and does not react with the sample material (e.g. alumina, gold or platinum crucibles). If the sample is likely to evolve volatiles or is in the liquid state, the crucible should be sealed to prevent contamination. However, if the crucible is sealed, increasing pressure and possible measurement artefacts due to deformation of the crucible must be considered. In this case, crucibles with very small holes (∅~50 μm) or crucibles that can withstand very high pressures should be used.
Sample condition.
The sample should be in good contact with the crucible surface. Therefore, the contact surface of a solid bulk sample should be plane-parallel. For DSC measurements with powders, a stronger signal might be observed for finer powders due to the enlarged contact surface. The minimum sample mass depends on the transformation to be analyzed. A small sample mass (~10 mg) is sufficient if the released or consumed heat during the transformation is high enough. Heavier samples could be used to detect transformations associated with low heat release or consumption, as larger samples also enlarge the obtained peaks. However, increasing the sample size might worsen the resolution due to thermal gradients which may evolve during heating.
Temperature and scan rates.
If the peaks are very small, it is possible to enlarge them by increasing the scan rate. Due to the faster scan rate, more energy is released or consumed in a shorter time which leads to higher and therefore more distinct peaks. However, faster scan rates lead to poor temperature resolution because of thermal lag. Due to this thermal lag, two phase transformations (or chemical reactions) occurring in a narrow temperature range might overlap. Generally, heating or cooling rates are too high to detect equilibrium transitions, so there is always a shift to higher or lower temperatures compared to phase diagrams representing equilibrium conditions.
Purge gas.
Purge gas is used to control the sample environment, in order to reduce signal noise and to prevent contamination. Mostly nitrogen is used; for temperatures above 600 °C, argon can be utilized to minimize heat loss due to its low thermal conductivity. Air or pure oxygen can be used for oxidative tests such as oxidative induction time, and He is used for very low temperatures due to its low boiling temperature (~4.2 K at 101.325 kPa).
Examples.
The technique is widely used across a range of applications, both as a routine quality test and as a research tool. The equipment is easy to calibrate, using low melting indium at 156.5985 °C for example, and is a rapid and reliable method of thermal analysis.
Polymers.
DSC is used widely for examining polymeric materials to determine their thermal transitions. Important thermal transitions include the glass transition temperature ("T"g), crystallization temperature ("T"c), and melting temperature ("T"m). The observed thermal transitions can be utilized to compare materials, although the transitions alone do not uniquely identify composition. The composition of unknown materials may be determined using complementary techniques such as IR spectroscopy. Melting points and glass transition temperatures for most polymers are available from standard compilations, and the method can show polymer degradation by the lowering of the expected melting temperature. "Tm" depends on the molecular weight of the polymer and its thermal history.
The percent crystalline content of a polymer can be estimated from the crystallization/melting peaks of the DSC graph using reference heats of fusion found in the literature. DSC can also be used to study thermal degradation of polymers using an approach such as Oxidative Onset Temperature/Time (OOT); however, the user risks contamination of the DSC cell, which can be problematic. Thermogravimetric analysis (TGA) may be more useful for determining decomposition behavior. Impurities in polymers can be determined by examining thermograms for anomalous peaks, and plasticisers can be detected at their characteristic boiling points. In addition, examination of minor events in first-heat thermal analysis data can be useful, as these apparently "anomalous peaks" can in fact also be representative of process or storage thermal history of the material or of polymer physical aging. Comparison of first and second heat data collected at consistent heating rates can allow the analyst to learn about both polymer processing history and material properties (see J. H. Flynn (1993). "Analysis of DSC results by integration". "Thermochimica Acta", 217, 129–149).
Liquid crystals.
DSC is used in the study of liquid crystals. As some forms of matter go from solid to liquid they go through a third state, which displays properties of both phases. This anisotropic liquid is known as a liquid crystalline or mesomorphous state. Using DSC, it is possible to observe the small energy changes that occur as matter transitions from a solid to a liquid crystal and from a liquid crystal to an isotropic liquid.
Oxidative stability.
Using differential scanning calorimetry to study the stability to oxidation of samples generally requires an airtight sample chamber. It can be used to determine the oxidative-induction time (OIT) of a sample. Such tests are usually done isothermally (at constant temperature) by changing the atmosphere of the sample. First, the sample is brought to the desired test temperature under an inert atmosphere, usually nitrogen. Oxygen is then added to the system. Any oxidation that occurs is observed as a deviation in the baseline. Such analysis can be used to determine the stability and optimum storage conditions for a material or compound. DSC equipment can also be used to determine the Oxidative-Onset Temperature (OOT) of a material. In this test a sample (and a reference) are exposed to an oxygen atmosphere and subjected to a constant rate of heating (typically from 50 to 300 °C). The DSC heat flow curve will deviate when the reaction with oxygen begins (the reaction being either exothermic or endothermic). Both OIT and OOT tests are used as tools for determining the activity of antioxidants.
Safety screening.
DSC makes a reasonable initial safety screening tool. In this mode the sample will be housed in a non-reactive crucible (often gold or gold-plated steel) which will be able to withstand pressure (typically up to 100 bar). The presence of an exothermic event can then be used to assess the stability of a substance to heat. However, due to a combination of relatively poor sensitivity, slower than normal scan rates (typically 2–3 °C/min, due to a much heavier crucible) and unknown activation energy, it is necessary to deduct about 75–100 °C from the initial start of the observed exotherm to "suggest" a maximal temperature for the material. A much more accurate data set can be obtained from an adiabatic calorimeter, but such a test may take 2–3 days from ambient at a rate of a 3 °C increment per half-hour.
Drug analysis.
DSC is widely used in the pharmaceutical and polymer industries. For the polymer chemist, DSC is a handy tool for studying curing processes, which allows the fine tuning of polymer properties. The cross-linking of polymer molecules that occurs in the curing process is exothermic, resulting in a negative peak in the DSC curve that usually appears soon after the glass transition.
In the pharmaceutical industry it is necessary to have well-characterized drug compounds in order to define processing parameters. For instance, if it is necessary to deliver a drug in the amorphous form, it is desirable to process the drug at temperatures below those at which crystallization can occur.
General chemical analysis.
Freezing-point depression can be used as a purity analysis tool when analysed by differential scanning calorimetry. This is possible because the temperature range over which a mixture of compounds melts is dependent on their relative amounts. Consequently, less pure compounds will exhibit a broadened melting peak that begins at lower temperature than a pure compound.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta H = K A"
},
{
"math_id": 1,
"text": "\\Delta H"
},
{
"math_id": 2,
"text": "K"
},
{
"math_id": 3,
"text": "A"
}
] | https://en.wikipedia.org/wiki?curid=97914 |
9791475 | Ornstein–Zernike equation | In statistical mechanics the Ornstein–Zernike (OZ) equation is an integral equation introduced by Leonard Ornstein and Frits Zernike that relates different correlation functions with each other. Together with a closure relation, it is used to compute the structure factor and thermodynamic state functions of amorphous matter like liquids or colloids.
Context.
The OZ equation has practical importance as a foundation for approximations for computing the
pair correlation function of molecules or ions in liquids, or of colloidal particles. The pair correlation function is related via Fourier transform to the static structure factor, which can be determined experimentally using X-ray diffraction or neutron diffraction.
The OZ equation relates the pair correlation function to the direct correlation function. The direct correlation function is only used in connection with the OZ equation, which can actually be seen as its definition.
Besides the OZ equation, other methods for the computation of the pair correlation function include the virial expansion at low densities, and the Bogoliubov–Born–Green–Kirkwood–Yvon (BBGKY) hierarchy. Any of these methods must be combined with a physical approximation: truncation in the case of the virial expansion, a closure relation for OZ or BBGKY.
The equation.
To keep notation simple, we only consider homogeneous fluids. Thus the pair correlation function only depends on distance, and therefore is also called the radial distribution function. It can be written
formula_0
where the first equality comes from homogeneity, the second from isotropy, and the equivalences introduce new notation.
It is convenient to define the total correlation function as:
formula_1
which expresses the influence of molecule 1 on molecule 2 at distance formula_2. The OZ equation
formula_3
splits this influence into two contributions, a direct and indirect one. The direct contribution defines the "direct correlation function", formula_4 The "indirect" part is due to the influence of molecule 1 on a third, labeled molecule 3, which in turn affects molecule 2, directly and indirectly. This indirect effect is weighted by the density and averaged over all the possible positions of molecule 3.
By eliminating the indirect influence, formula_5 is shorter-ranged than formula_6 and can be more easily modelled and approximated. The radius of formula_5 is determined by the radius of intermolecular forces, whereas the radius of formula_7 is of the order of the correlation length.
Fourier transform.
The integral in the OZ equation is a convolution. Therefore, the OZ equation can be resolved by Fourier transform.
If we denote the Fourier transforms of formula_8 and formula_9 by formula_10 and formula_11, respectively, and use the convolution theorem, we obtain
formula_12
which yields
formula_13
Closure relations.
As both functions, formula_14 and formula_15, are unknown, one needs an additional equation, known as a closure relation. While the OZ equation is purely formal, the closure must introduce some physically motivated approximation.
In the low-density limit, the pair correlation function is given by the Boltzmann factor,
formula_16
with formula_17 and with the pair potential formula_18.
Closure relations for higher densities modify this simple relation in different ways. The best known closure approximations are:
More recent closures, such as the Rogers–Young and Zerah–Hansen approximations, interpolate in different ways between these two, and thereby achieve a satisfactory description of particles that have a hard core "and" attractive forces.
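As a minimal numerical sketch, the following Python code solves the OZ equation with the Percus–Yevick closure for hard spheres of unit diameter, alternating between the closure in real space and the OZ relation in Fourier space (a damped Picard iteration). The grid size, density, and mixing parameter are arbitrary assumptions.

import numpy as np
from scipy.fft import dst

N, dr, rho, sigma = 2048, 0.01, 0.5, 1.0  # grid, spacing, number density, hard-sphere diameter
r = dr * np.arange(1, N + 1)
dk = np.pi / ((N + 1) * dr)
k = dk * np.arange(1, N + 1)

def ft(f):
    # 3D radial Fourier transform via a type-I discrete sine transform.
    return 2.0 * np.pi * dr / k * dst(r * f, type=1)

def ift(fk):
    # Corresponding inverse transform (normalization fixed by dk = pi/((N+1) dr)).
    return dk / (4.0 * np.pi ** 2 * r) * dst(k * fk, type=1)

core = r < sigma
gamma = np.zeros(N)                              # indirect correlation gamma = h - c
for _ in range(500):
    c = np.where(core, -(1.0 + gamma), 0.0)      # PY closure for hard spheres
    ck = ft(c)
    gamma_new = ift(rho * ck ** 2 / (1.0 - rho * ck))  # OZ equation in Fourier space
    if np.max(np.abs(gamma_new - gamma)) < 1e-8:
        gamma = gamma_new
        break
    gamma = 0.5 * gamma + 0.5 * gamma_new        # damping stabilizes the iteration

c = np.where(core, -(1.0 + gamma), 0.0)
g = 1.0 + gamma + c                              # radial distribution function g(r)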
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "g(\\mathbf{r}_1,\\mathbf{r}_2) = g(\\mathbf{r}_1 - \\mathbf{r}_2) \\equiv g(\\mathbf{r}_{12}) = g(|\\mathbf{r}_{12}|) \\equiv g(r_{12}) \\equiv g(12),"
},
{
"math_id": 1,
"text": " h(12)\\equiv g(12)-1"
},
{
"math_id": 2,
"text": "\\,r_{12}\\,"
},
{
"math_id": 3,
"text": "h(12) \\; = \\; c(12) \\; + \\; \\rho \\, \\int \\text{d}^3 \\mathbf{r}_{3}\\, c(13) \\, h(32)"
},
{
"math_id": 4,
"text": "c(r)."
},
{
"math_id": 5,
"text": "\\,c(r)\\,"
},
{
"math_id": 6,
"text": "h(r)"
},
{
"math_id": 7,
"text": "\\,h(r)\\,"
},
{
"math_id": 8,
"text": "h(\\mathbf{r})"
},
{
"math_id": 9,
"text": "c(\\mathbf{r})"
},
{
"math_id": 10,
"text": "\\hat{h}(\\mathbf{k})"
},
{
"math_id": 11,
"text": "\\hat{c}(\\mathbf{k})"
},
{
"math_id": 12,
"text": " \\hat{h}(\\mathbf{k}) \\; = \\; \\hat{c}(\\mathbf{k}) \\; + \\; \\rho \\; \\hat{h}(\\mathbf{k})\\;\\hat{c}(\\mathbf{k})~ , "
},
{
"math_id": 13,
"text": " \\hat{c}(\\mathbf{k}) \\; = \\; \\frac{\\hat{h}(\\mathbf{k})}{\\;1 \\;+\\;\\rho \\;\\hat{h}(\\mathbf{k})\\;} \\qquad \\text{ and } \\qquad \\hat{h}(\\mathbf{k}) \\; = \\; \\frac{\\hat{c}(\\mathbf{k})}{\\; 1 \\; - \\; \\rho \\; \\hat{c}(\\mathbf{k}) \\;} ~. "
},
{
"math_id": 14,
"text": " \\,h \\,"
},
{
"math_id": 15,
"text": " \\,c \\,"
},
{
"math_id": 16,
"text": "g(12)=\\text{e}^{-\\beta u(12)},\\quad \\rho\\to 0"
},
{
"math_id": 17,
"text": "\\beta=1/k_\\text{B} T"
},
{
"math_id": 18,
"text": "u(r)"
}
] | https://en.wikipedia.org/wiki?curid=9791475 |
979199 | Beta movement | The term beta movement is used for the optical illusion of apparent motion in which the very short projection of one figure and a subsequent very short projection of a more or less similar figure in a different location are experienced as one figure moving.
The illusion of motion caused by animation and film is sometimes believed to rely on beta movement, as an alternative to the older explanation known as persistence of vision. However, the human visual system cannot distinguish between the short-range apparent motion of film and real motion, while the long-range apparent motion of beta movement is recognised as different and processed in a different way.
History.
Observations of apparent motion through quick succession of images go back to the 19th century. In 1833, Joseph Plateau introduced what became known as the phenakistiscope, an early animation device based on a stroboscopic effect. The principle of this "philosophical toy" would inspire the development of cinematography at the end of the century. Most authors who have since described the illusion of seeing motion in the fast succession of stationary images, maintained that the effect is due to persistence of vision, either in the form of afterimages on the retina or with a mental process filling in the intervals between the images.
In 1875, Sigmund Exner showed that, under the right conditions, people will see two quick, spatially separated but stationary electrical sparks as a single light moving from place to place, while quicker flashes were interpreted as motion between two stationary lights. Exner argued that the impression of the moving light was a perception (from a mental process) and the motion between the stationary lights as pure sense.
In 1912, Max Wertheimer wrote an influential article that would lead to the foundation of Gestalt psychology. In the discussed experiments, he asked test subjects what they saw when viewing successive tachistoscope projections of two similar shapes at two alternating locations on a screen. The results differed depending on the frequency of the flashes of the tachistoscope. At low frequencies, successive appearances of similar figures at different spots were perceived. At medium frequencies, it seemed like one figure moved from one position to the following position, regarded as "optimale Bewegung" (optimal motion) by Wertheimer. No shape was seen in between the two locations. At higher speeds, when test subjects reported seeing both of the fast blinking figures more or less simultaneously, a moving objectless phenomenon was seen between and around the projected figures. Wertheimer used the Greek letter φ (phi) to designate illusions of motion and thought of the high-frequency objectless illusion as a "pure phi phenomenon", which he supposed was a more direct sensory experience of motion. Wertheimer's work became famous due to his demonstrations of the phi phenomenon, while the optimal motion illusion was regarded as the phenomenon well-known from movies.
In 1913, Friedrich Kenkel defined different types of the motion illusions found in the experiments of Wertheimer and subsequent experiments by Kurt Koffka (who had been one of Wertheimer's test subjects). Kenkel, a co-worker of Koffka, gave the optimal illusion of motion (with the appearance of one figure moving from one place to the next) the designation "β-Bewegung" (beta movement).
Confusion about phi phenomenon and beta movement.
Wertheimer's pure phi phenomenon and beta movement are often confused in explanations of film and animation, but they are quite different perceptually and neither really explains the short-range apparent motion seen in film.
In beta movement, two stimuli, formula_0 and formula_1, appear in succession, but are perceived as the motion of a single object, formula_0, into position formula_1. In phi movement, the two stimuli formula_0 and formula_1 appear in succession, but are perceived as the motion of a vague shadowy something passing over formula_0 and formula_1. There are many factors that determine whether one will experience beta movement or the phi phenomenon in a particular circumstance. They include the luminance of the stimuli in contrast to the background, the size of the stimuli, how far apart they are, how long each one is displayed, and precisely how much time passes between them (or the extent to which they overlap in time).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "b"
}
] | https://en.wikipedia.org/wiki?curid=979199 |
97922 | Digital image processing | Algorithmic processing of digitally-represented images
Digital image processing is the use of a digital computer to process digital images through an algorithm. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and distortion during processing. Since images are defined over two dimensions (perhaps more) digital image processing may be modeled in the form of multidimensional systems. The generation and development of digital image processing are mainly affected by three factors: first, the development of computers; second, the development of mathematics (especially the creation and improvement of discrete mathematics theory); third, the demand for a wide range of applications in environment, agriculture, military, industry and medical science has increased.
History.
Many of the techniques of digital image processing, or digital picture processing as it often was called, were developed in the 1960s, at Bell Laboratories, the Jet Propulsion Laboratory, Massachusetts Institute of Technology, University of Maryland, and a few other research facilities, with application to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photograph enhancement. The purpose of early image processing was to improve the quality of images for human viewing. In image processing, the input is a low-quality image, and the output is an image with improved quality. Common image processing tasks include image enhancement, restoration, encoding, and compression. The first successful application was at the American Jet Propulsion Laboratory (JPL). It used image processing techniques such as geometric correction, gradation transformation, and noise removal on the thousands of lunar photos sent back by the Space Detector Ranger 7 in 1964, taking into account the position of the Sun and the environment of the Moon. The successful computer mapping of the Moon's surface was a major achievement. Later, more complex image processing was performed on the nearly 100,000 photos sent back by the spacecraft, yielding the topographic map, color map and panoramic mosaic of the Moon, which achieved extraordinary results and laid a solid foundation for human landing on the Moon.
The cost of processing was fairly high, however, with the computing equipment of that era. That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. This led to images being processed in real-time, for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and computer-intensive operations. With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing, and is generally used because it is not only the most versatile method, but also the cheapest.
Image sensors.
The basis for modern image sensors is metal–oxide–semiconductor (MOS) technology, which originates from the invention of the MOSFET (MOS field-effect transistor) by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. This led to the development of digital semiconductor image sensors, including the charge-coupled device (CCD) and later the CMOS sensor.
The charge-coupled device was invented by Willard S. Boyle and George E. Smith at Bell Labs in 1969. While researching MOS technology, they realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.
The NMOS active-pixel sensor (APS) was invented by Olympus in Japan during the mid-1980s. This was enabled by advances in MOS semiconductor device fabrication, with MOSFET scaling reaching smaller micron and then sub-micron levels. The NMOS APS was fabricated by Tsutomu Nakamura's team at Olympus in 1985. The CMOS active-pixel sensor (CMOS sensor) was later developed by Eric Fossum's team at the NASA Jet Propulsion Laboratory in 1993. By 2007, sales of CMOS sensors had surpassed CCD sensors.
MOS image sensors are widely used in optical mouse technology. The first optical mouse, invented by Richard F. Lyon at Xerox in 1980, used a 5μm NMOS integrated circuit sensor chip. Since the first commercial optical mouse, the IntelliMouse introduced in 1999, most optical mouse devices use CMOS sensors.
Image compression.
An important development in digital image compression technology was the discrete cosine transform (DCT), a lossy compression technique first proposed by Nasir Ahmed in 1972. DCT compression became the basis for JPEG, which was introduced by the Joint Photographic Experts Group in 1992. JPEG compresses images down to much smaller file sizes, and has become the most widely used image file format on the Internet. Its highly efficient DCT compression algorithm was largely responsible for the wide proliferation of digital images and digital photos, with several billion JPEG images produced every day as of 2015.
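The following Python sketch illustrates the energy-compaction idea behind DCT compression on a single smooth 8x8 block; actual JPEG additionally quantizes the coefficients and entropy-codes them, and the block here is invented for illustration.

import numpy as np
from scipy.fft import dctn, idctn

# A smooth 8x8 block, standing in for a patch of a natural image.
block = np.fromfunction(lambda i, j: 128 + 40 * np.cos(i / 3.0) * np.cos(j / 4.0), (8, 8))

coeffs = dctn(block, norm="ortho")   # 2D type-II DCT
mask = np.zeros_like(coeffs)
mask[:4, :4] = 1                     # keep only the 16 lowest-frequency coefficients
approx = idctn(coeffs * mask, norm="ortho")

# For smooth content the discarded high frequencies carry little energy,
# so the reconstruction error stays small.
print("max reconstruction error:", np.max(np.abs(block - approx)))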
Medical imaging techniques produce very large amounts of data, especially from CT, MRI and PET modalities. As a result, storage and communications of electronic image data are prohibitive without the use of compression. JPEG 2000 image compression is used by the DICOM standard for storage and transmission of medical images. The cost and feasibility of accessing large image data sets over low or various bandwidths are further addressed by use of another DICOM standard, called JPIP, to enable efficient streaming of the JPEG 2000 compressed image data.
Digital signal processor (DSP).
Electronic signal processing was revolutionized by the wide adoption of MOS technology in the 1970s. MOS integrated circuit technology was the basis for the first single-chip microprocessors and microcontrollers in the early 1970s, and then the first single-chip digital signal processor (DSP) chips in the late 1970s. DSP chips have since been widely used in digital image processing.
The discrete cosine transform (DCT) image compression algorithm has been widely implemented in DSP chips, with many companies developing DSP chips based on DCT technology. DCTs are widely used for encoding, decoding, video coding, audio coding, multiplexing, control signals, signaling, analog-to-digital conversion, formatting luminance and color differences, and color formats such as YUV444 and YUV411. DCTs are also used for encoding operations such as motion estimation, motion compensation, inter-frame prediction, quantization, perceptual weighting, entropy encoding, variable encoding, and motion vectors, and decoding operations such as the inverse operation between different color formats (YIQ, YUV and RGB) for display purposes. DCTs are also commonly used for high-definition television (HDTV) encoder/decoder chips.
Medical imaging.
In 1972, Godfrey Hounsfield, an engineer at the British company EMI, invented the X-ray computed tomography device for head diagnosis, which is what is usually called CT (computed tomography). The method is based on projections of sections of the human head, which are processed by computer to reconstruct the cross-sectional image, a process called image reconstruction. In 1975, EMI successfully developed a CT device for the whole body, which obtained clear tomographic images of various parts of the human body. In 1979, this diagnostic technique won the Nobel Prize. Digital image processing technology for medical applications was inducted into the Space Foundation Space Technology Hall of Fame in 1994.
As of 2010, 5 billion medical imaging studies had been conducted worldwide. Radiation exposure from medical imaging in 2006 made up about 50% of total ionizing radiation exposure in the United States. Medical imaging equipment is manufactured using technology from the semiconductor industry, including CMOS integrated circuit chips, power semiconductor devices, sensors such as image sensors (particularly CMOS sensors) and biosensors, and processors such as microcontrollers, microprocessors, digital signal processors, media processors and system-on-chip devices. As of 2015, annual shipments of medical imaging chips amounted to 46 million units.
Tasks.
Digital image processing allows the use of much more complex algorithms, and hence, can offer both more sophisticated performance at simple tasks, and the implementation of methods which would be impossible by analogue means.
In particular, digital image processing is a concrete application of, and a practical technology based on, classification, feature extraction, multi-scale signal analysis, pattern recognition, and projection.
Some techniques which are used in digital image processing include anisotropic diffusion, hidden Markov models, image restoration, independent component analysis, linear filtering, neural networks, partial differential equations, pixelation, principal components analysis, self-organizing maps, and wavelets.
Digital image transformations.
Filtering.
Digital filters are used to blur and sharpen digital images. Filtering can be performed by convolution with specifically designed kernels (filter arrays) in the spatial domain, or by masking specific frequency regions in the frequency (Fourier) domain.
The following examples show both methods:
Image padding in Fourier domain filtering.
Images are typically padded before being transformed to the Fourier space, the highpass filtered images below illustrate the consequences of different padding techniques:
Notice that the highpass filter shows extra edges when zero padded compared to the repeated edge padding.
Filtering code examples.
MATLAB example for spatial domain highpass filtering.
img=checkerboard(20); % generate checkerboard
% ************************** SPATIAL DOMAIN ***************************
klaplace=[0 -1 0; -1 5 -1; 0 -1 0]; % Laplacian filter kernel
X=conv2(img,klaplace); % convolve test img with
% 3x3 Laplacian kernel
figure()
imshow(X,[]) % show Laplacian filtered
title('Laplacian Edge Detection')
Affine transformations.
Affine transformations enable basic image transformations including scale, rotate, translate, mirror and shear as is shown in the following examples:
To apply the affine matrix to an image, the image is converted to matrix in which each entry corresponds to the pixel intensity at that location. Then each pixel's location can be represented as a vector indicating the coordinates of that pixel in the image, [x, y], where x and y are the row and column of a pixel in the image matrix. This allows the coordinate to be multiplied by an affine-transformation matrix, which gives the position that the pixel value will be copied to in the output image.
However, to allow transformations that require translation, 3-dimensional homogeneous coordinates are needed. The third dimension is typically set to a non-zero constant, usually 1, so that the new coordinate is [x, y, 1]. This allows the coordinate vector to be multiplied by a 3-by-3 matrix, enabling translation shifts. So the third dimension, which is the constant 1, allows translation.
Because matrix multiplication is associative, multiple affine transformations can be combined into a single affine transformation by multiplying the matrix of each individual transformation in the order that the transformations are done. This results in a single matrix that, when applied to a point vector, gives the same result as all the individual transformations performed on the vector [x, y, 1] in sequence. Thus a sequence of affine transformation matrices can be reduced to a single affine transformation matrix.
For example, 2 dimensional coordinates only allow rotation about the origin (0, 0). But 3 dimensional homogeneous coordinates can be used to first translate any point to (0, 0), then perform the rotation, and lastly translate the origin (0, 0) back to the original point (the opposite of the first translation). These 3 affine transformations can be combined into a single matrix, thus allowing rotation around any point in the image.
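The following Python sketch (not taken from the article) composes exactly this translate-rotate-translate sequence into a single homogeneous matrix and applies it with SciPy; the image, rotation point, and angle are arbitrary assumptions.

import numpy as np
from scipy import ndimage

def rotation_about(point, theta):
    # Homogeneous 3x3 matrix rotating by theta (radians) about `point` (row, col).
    py, px = point
    to_origin = np.array([[1, 0, -py], [0, 1, -px], [0, 0, 1]], dtype=float)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                    [np.sin(theta),  np.cos(theta), 0],
                    [0, 0, 1]], dtype=float)
    back = np.array([[1, 0, py], [0, 1, px], [0, 0, 1]], dtype=float)
    return back @ rot @ to_origin  # matrices compose right-to-left

img = np.zeros((100, 100))
img[40:60, 30:50] = 1.0  # a bright rectangle

M = rotation_about((50, 50), np.pi / 6)
# ndimage.affine_transform expects the inverse map (output -> input coordinates),
# so the forward matrix is inverted before being applied.
rotated = ndimage.affine_transform(img, np.linalg.inv(M))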
Image denoising with Morphology.
Mathematical morphology is suitable for denoising images. Structuring elements are important in mathematical morphology.
The following examples concern structuring elements. The denoising functions, with the image denoted as I and the structuring element as B, are shown below and in the table.
e.g. formula_0
Define Dilation(I, B)(i,j) = formula_1. Let Dilation(I,B) = D(I,B)
D(I', B)(1,1) = formula_2
Define Erosion(I, B)(i,j) = formula_3. Let Erosion(I,B) = E(I,B)
E(I', B)(1,1) = formula_4
After dilation
formula_5
After erosion
formula_6
An opening is simply an erosion followed by a dilation, while a closing is the reverse. In practice, D(I,B) and E(I,B) can be implemented with convolution-style sliding-window operations, as sketched below.
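The following Python sketch implements the dilation and erosion defined above and reproduces the worked example; the edge padding used at the borders is an assumption, since the definitions above do not specify border handling.

import numpy as np

def grey_dilate(I, B):
    # Dilation(I, B)(i, j) = max over (m, n) of I(i+m, j+n) + B(m, n).
    k = B.shape[0] // 2
    Ip = np.pad(I, k, mode="edge")
    out = np.empty_like(I)
    for i in range(I.shape[0]):
        for j in range(I.shape[1]):
            out[i, j] = np.max(Ip[i:i + 2 * k + 1, j:j + 2 * k + 1] + B)
    return out

def grey_erode(I, B):
    # Erosion(I, B)(i, j) = min over (m, n) of I(i+m, j+n) - B(m, n).
    k = B.shape[0] // 2
    Ip = np.pad(I, k, mode="edge")
    out = np.empty_like(I)
    for i in range(I.shape[0]):
        for j in range(I.shape[1]):
            out[i, j] = np.min(Ip[i:i + 2 * k + 1, j:j + 2 * k + 1] - B)
    return out

I = np.array([[45, 50, 65], [40, 60, 55], [25, 15, 5]])
B = np.array([[1, 2, 1], [2, 1, 1], [1, 0, 3]])
print(grey_dilate(I, B)[1, 1])  # 66, as in the worked example
print(grey_erode(I, B)[1, 1])   # 2
opening = grey_dilate(grey_erode(I, B), B)  # erosion first, then dilation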
Applications.
Digital camera images.
Digital cameras generally include specialized digital image processing hardware – either dedicated chips or added circuitry on other chips – to convert the raw data from their image sensor into a color-corrected image in a standard image file format. Additional post-processing techniques increase edge sharpness or color saturation to create more natural-looking images.
Film.
"Westworld" (1973) was the first feature film to use the digital image processing to pixellate photography to simulate an android's point of view. Image processing is also vastly used to produce the chroma key effect that replaces the background of actors with natural or artistic scenery.
Face detection.
Face detection can be implemented with Mathematical morphology, Discrete cosine transform which is usually called DCT, and horizontal Projection (mathematics).
General feature-based method
The feature-based method of face detection uses skin tone, edge detection, face shape, and the features of a face (such as eyes and mouth) to achieve face detection. The skin tone, face shape, and all the unique elements that only the human face has can be described as features.
Improvement of image quality method.
Image quality can be degraded by camera vibration, over-exposure, an overly centralized gray level distribution, noise, and so on. For example, the noise problem can be addressed by the smoothing method, while a gray level distribution problem can be improved by histogram equalization.
Smoothing method
In drawing, if there is some unsatisfactory color, one can take the colors around it and average them to replace it. This is an easy way to think of the smoothing method.
The smoothing method can be implemented with a mask and convolution. Take the small image and mask below as an example.
image is
formula_7
mask is formula_8
After Convolution and smoothing, image is
formula_9
Observing image[1, 1], image[1, 2], image[2, 1], and image[2, 2]:
The original image pixels are 1, 4, 28, 30. After applying the smoothing mask, the pixels become 9, 10, 9, and 9, respectively.
new image[1, 1] = formula_10 * (image[0,0]+image[0,1]+image[0,2]+image[1,0]+image[1,1]+image[1,2]+image[2,0]+image[2,1]+image[2,2])
new image[1, 1] = round(formula_10 * (2+5+6+3+1+4+1+28+30)) = round(80/9) = 9
new image[1, 2] = round(formula_10 * (5+6+5+1+4+6+28+30+2)) = round(87/9) = 10
new image[2, 1] = round(formula_10 * (3+1+4+1+28+30+7+3+2)) = round(79/9) = 9
new image[2, 2] = round(formula_10 * (1+4+6+28+30+2+3+2+2)) = round(78/9) = 9
(Rounding to the nearest integer, rather than taking the floor, reproduces the values shown in the smoothed image above.)
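The following Python snippet reproduces this worked example, applying the 3x3 averaging mask to the four interior pixels and rounding to the nearest integer (border handling is left aside here).

import numpy as np

image = np.array([[2, 5, 6, 5],
                  [3, 1, 4, 6],
                  [1, 28, 30, 2],
                  [7, 3, 2, 2]])

smoothed = image.copy()
for i in (1, 2):
    for j in (1, 2):
        # Mean of the 3x3 neighbourhood, rounded as in the worked example.
        smoothed[i, j] = round(image[i - 1:i + 2, j - 1:j + 2].mean())

print(smoothed[1:3, 1:3])  # [[9, 10], [9, 9]]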
Gray Level Histogram method
Generally, given a gray level histogram from an image as below. Changing the histogram to uniform distribution from an image is usually what we called Histogram equalization.
In discrete time, the area of the gray level histogram is formula_11, while the area of the uniform distribution is formula_12. It is clear that the area will not change, so formula_13.
From the uniform distribution, the probability of formula_14 is formula_15 for formula_16.
In continuous time, the equation is formula_17.
Moreover, based on the definition of a function, the gray level histogram method amounts to finding a function formula_18 that satisfies f(p) = q.
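As a sketch, the following Python snippet builds such a mapping f from the normalized cumulative histogram, which is the standard discrete construction; the test image is invented for illustration.

import numpy as np

def equalize(image, levels=256):
    # Map gray level p to q = f(p) using the normalized cumulative histogram,
    # spreading the occupied levels toward a uniform distribution.
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / image.size
    f = np.round(cdf * (levels - 1)).astype(np.uint8)
    return f[image]  # apply f(p) = q pixelwise

img = np.random.default_rng(1).integers(40, 90, (64, 64), dtype=np.uint8)
flat = equalize(img)  # gray levels now span roughly the full 0..255 range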
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(I') = \\begin{bmatrix}\n45 & 50 & 65 \\\\\n40 & 60 & 55 \\\\\n25 & 15 & 5\n\\end{bmatrix}\nB = \\begin{bmatrix}\n1 & 2 & 1 \\\\\n2 & 1 & 1 \\\\\n1 & 0 & 3\n\\end{bmatrix}"
},
{
"math_id": 1,
"text": "max\\{I(i+m, j+n) + B(m,n)\\}"
},
{
"math_id": 2,
"text": "max(45+1,50+2,65+1,40+2,60+1,55+1,25+1,15+0,5+3) = 66"
},
{
"math_id": 3,
"text": "min\\{I(i+m, j+n) - B(m,n)\\}"
},
{
"math_id": 4,
"text": "min(45-1,50-2,65-1,40-2,60-1,55-1,25-1,15-0,5-3) = 2"
},
{
"math_id": 5,
"text": "(I') = \\begin{bmatrix}\n45 & 50 & 65 \\\\\n40 & 66 & 55 \\\\\n25 & 15 & 5\n\\end{bmatrix}\n"
},
{
"math_id": 6,
"text": "(I') = \\begin{bmatrix}\n45 & 50 & 65 \\\\\n40 & 2 & 55 \\\\\n25 & 15 & 5\n\\end{bmatrix}\n"
},
{
"math_id": 7,
"text": "\n\\begin{bmatrix}\n2 & 5 & 6 & 5\\\\\n3 & 1 & 4 & 6 \\\\\n1 & 28 & 30 & 2 \\\\\n7 & 3 & 2 & 2\n\\end{bmatrix}\n"
},
{
"math_id": 8,
"text": "\n\\begin{bmatrix}\n1/9 & 1/9 & 1/9 \\\\\n1/9 & 1/9 & 1/9 \\\\\n1/9 & 1/9 & 1/9\n\\end{bmatrix}\n"
},
{
"math_id": 9,
"text": "\n\\begin{bmatrix}\n2 & 5 & 6 & 5\\\\\n3 & 9 & 10 & 6 \\\\\n1 & 9 & 9 & 2 \\\\\n7 & 3 & 2 & 2\n\\end{bmatrix}\n"
},
{
"math_id": 10,
"text": "\\tfrac{1}{9}"
},
{
"math_id": 11,
"text": "\\sum_{i=0}^{k}H(p_i)"
},
{
"math_id": 12,
"text": "\\sum_{i=0}^{k}G(q_i)"
},
{
"math_id": 13,
"text": "\\sum_{i=0}^{k}H(p_i) = \\sum_{i=0}^{k}G(q_i)"
},
{
"math_id": 14,
"text": "q_i"
},
{
"math_id": 15,
"text": "\\tfrac{N^2}{q_k - q_0}"
},
{
"math_id": 16,
"text": " 0 < i < k "
},
{
"math_id": 17,
"text": "\\displaystyle \\int_{q_0}^{q} \\tfrac{N^2}{q_k - q_0}ds = \\displaystyle \\int_{p_0}^{p}H(s)ds"
},
{
"math_id": 18,
"text": "f"
}
] | https://en.wikipedia.org/wiki?curid=97922 |
979227 | Rings of Jupiter | The planet Jupiter has a system of faint planetary rings. The Jovian rings were the third ring system to be discovered in the Solar System, after those of Saturn and Uranus. The main ring was discovered in 1979 by the "Voyager 1" space probe and the system was more thoroughly investigated in the 1990s by the "Galileo" orbiter. The main ring has also been observed by the Hubble Space Telescope and from Earth for several years. Ground-based observation of the rings requires the largest available telescopes.
The Jovian ring system is faint and consists mainly of dust. It has four main components: a thick inner torus of particles known as the "halo ring"; a relatively bright, exceptionally thin "main ring"; and two wide, thick and faint outer "gossamer rings", named for the moons of whose material they are composed: Amalthea and Thebe.
The main and halo rings consist of dust ejected from the moons Metis, Adrastea and perhaps smaller, unobserved bodies as the result of high-velocity impacts. High-resolution images obtained in February and March 2007 by the "New Horizons" spacecraft revealed a rich fine structure in the main ring.
In visible and near-infrared light, the rings have a reddish color, except the halo ring, which is neutral or blue in color. The size of the dust in the rings varies, but the cross-sectional area is greatest for nonspherical particles of radius about 15 μm in all rings except the halo. The halo ring is probably dominated by submicrometre dust. The total mass of the ring system (including unresolved parent bodies) is poorly constrained, but is probably in the range of 10^11 to 10^16 kg. The age of the ring system is also not known, but it is possible that it has existed since the formation of Jupiter.
A ring or ring arc appears to exist close to the moon Himalia's orbit. One explanation is that a small moon recently crashed into Himalia and the force of the impact ejected the material that forms the ring.
Discovery and structure.
Jupiter's ring system was the third to be discovered in the Solar System, after those of Saturn and Uranus. It was first observed on 4 March 1979 by the "Voyager 1" space probe. It is composed of four main components: a thick inner torus of particles known as the "halo ring"; a relatively bright, exceptionally thin "main ring"; and two wide, thick and faint outer "gossamer rings", named after the moons of whose material they are composed: Amalthea and Thebe. The principal attributes of the known Jovian Rings are listed in the table.
In 2022, dynamical simulations suggested that the relative meagreness of Jupiter's ring system, compared to that of the smaller Saturn, is due to destabilising resonances created by the Galilean satellites.
Main ring.
Appearance and structure.
The narrow and relatively thin main ring is the brightest part of Jupiter's ring system. Its outer edge is located at a radius of about 129,000 km (1.806 RJ, where RJ is the equatorial radius of Jupiter, about 71,400 km) and coincides with the orbit of Jupiter's smallest inner satellite, Adrastea. Its inner edge is not marked by any satellite and is located at about 122,500 km (1.72 RJ).
Thus the width of the main ring is around 6,500 km. The appearance of the main ring depends on the viewing geometry. In forward-scattered light the brightness of the main ring begins to decrease steeply just inward of the Adrastean orbit and reaches the background level just outward of that orbit. Therefore, Adrastea, orbiting at about 129,000 km, clearly shepherds the ring. The brightness continues to increase in the direction of Jupiter and has a maximum near the ring's center, although there is a pronounced gap (notch) near the Metidian orbit at about 128,000 km. The inner boundary of the main ring, in contrast, appears to fade off slowly toward Jupiter, merging into the halo ring. In forward-scattered light all Jovian rings are especially bright.
In back-scattered light the situation is different. The outer boundary of the main ring, located slightly beyond the orbit of Adrastea, is very steep. The orbit of the moon is marked by a gap in the ring, so there is a thin ringlet just outside its orbit. There is another ringlet just inside the Adrastean orbit, followed by a gap of unknown origin. The third ringlet is found inward of the central gap, outside the orbit of Metis. The ring's brightness drops sharply just outward of the Metidian orbit, forming the Metis notch. Inward of the orbit of Metis, the brightness of the ring rises much less than in forward-scattered light. So in the back-scattered geometry the main ring appears to consist of two different parts: a narrow outer part extending from the Metidian orbit (about 128,000 km) to the Adrastean orbit (about 129,000 km), which itself includes three narrow ringlets separated by notches, and a fainter inner part from the inner edge to the Metis notch, which lacks any visible structure, as in the forward-scattering geometry. The Metis notch serves as their boundary. The fine structure of the main ring was discovered in data from the "Galileo" orbiter and is clearly visible in back-scattered images obtained from "New Horizons" in February–March 2007. The early observations by the Hubble Space Telescope (HST), Keck and the "Cassini" spacecraft failed to detect it, probably due to insufficient spatial resolution. However, the fine structure was observed by the Keck telescope using adaptive optics in 2002–2003.
Observed in back-scattered light the main ring appears to be razor thin, extending in the vertical direction no more than 30 km. In the side scatter geometry the ring thickness is 80–160 km, increasing somewhat in the direction of Jupiter. The ring appears to be much thicker in the forward-scattered light—about 300 km. One of the discoveries of the "Galileo" orbiter was the bloom of the main ring—a faint, relatively thick (about 600 km) cloud of material which surrounds its inner part. The bloom grows in thickness towards the inner boundary of the main ring, where it transitions into the halo.
Detailed analysis of the "Galileo" images revealed longitudinal variations of the main ring's brightness unconnected with the viewing geometry. The Galileo images also showed some patchiness in the ring on the scales 500–1000 km.
In February–March 2007 "New Horizons" spacecraft conducted a deep search for new small moons inside the main ring. While no satellites larger than 0.5 km were found, the cameras of the spacecraft detected seven small clumps of ring particles. They orbit just inside the orbit of Adrastea inside a dense ringlet. The conclusion, that they are clumps and not small moons, is based on their azimuthally extended appearance. They subtend 0.1–0.3° along the ring, which correspond to –. The clumps are divided into two groups of five and two members, respectively. The nature of the clumps is not clear, but their orbits are close to 115:116 and 114:115 resonances with Metis. They may be wavelike structures excited by this interaction.
Spectra and particle size distribution.
Spectra of the main ring obtained by the HST, Keck, "Galileo" and "Cassini" have shown that particles forming it are red, i.e. their albedo is higher at longer wavelengths. The existing spectra span the range 0.5–2.5 μm. No spectral features have been found so far which can be attributed to particular chemical compounds, although the "Cassini" observations yielded evidence for absorption bands near 0.8 μm and 2.2 μm. The spectra of the main ring are very similar to those of Adrastea and Amalthea.
The properties of the main ring can be explained by the hypothesis that it contains significant amounts of dust with 0.1–10 μm particle sizes. This explains the stronger forward-scattering of light as compared to back-scattering. However, larger bodies are required to explain the strong back-scattering and fine structure in the bright outer part of the main ring.
Analysis of available phase and spectral data leads to a conclusion that the size distribution of small particles in the main ring obeys a power law
formula_0
where "n"("r") "dr" is a number of particles with radii between "r" and "r" + "dr" and formula_1 is a normalizing parameter chosen to match the known total light flux from the ring. The parameter "q" is 2.0 ± 0.2 for particles with "r" < 15 ± 0.3 μm and "q" = 5 ± 1 for those with "r" > 15 ± 0.3 μm. The distribution of large bodies in the mm–km size range is undetermined presently. The light scattering in this model is dominated by particles with "r" around 15 μm.
The power law mentioned above allows estimation of the optical depth formula_2 of the main ring: formula_3 for the large bodies and formula_4 for the dust. This optical depth means that the total cross section of all particles inside the ring is about 5000 km². The particles in the main ring are expected to have aspherical shapes. The total mass of the dust is estimated to be 10^7−10^9 kg. The mass of large bodies, excluding Metis and Adrastea, is 10^11−10^16 kg. It depends on their maximum size—the upper value corresponds to about 1 km maximum diameter. These masses can be compared with the masses of Adrastea (about 2 × 10^15 kg), Amalthea (about 2 × 10^18 kg), and Earth's Moon (7.4 × 10^22 kg).
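As a rough consistency check (a sketch, not drawn from the article's sources; the value of RJ here is an assumed 71,492 km), spreading the quoted ~5000 km² of particle cross section over the main ring annulus between 1.72 and 1.806 RJ does give an optical depth of order 10^−6, as quoted above:

```python
import numpy as np

# Illustrative check: optical depth = total particle cross section / ring area.
RJ = 71_492e3                                          # assumed value, metres
ring_area = np.pi * ((1.806 * RJ) ** 2 - (1.72 * RJ) ** 2)
cross_section = 5000e6                                 # 5000 km^2 in m^2
print(f"tau ~ {cross_section / ring_area:.1e}")        # ~1e-6, as quoted above
```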
The presence of two populations of particles in the main ring explains why its appearance depends on the viewing geometry. The dust scatters light preferably in the forward direction and forms a relatively thick homogenous ring bounded by the orbit of Adrastea. In contrast, large particles, which scatter in the back direction, are confined in a number of ringlets between the Metidian and Adrastean orbits.
Origin and age.
The dust is constantly being removed from the main ring by a combination of Poynting–Robertson drag and electromagnetic forces from the Jovian magnetosphere. Volatile materials, such as ices, evaporate quickly. The lifetime of dust particles in the ring is only 100 to 1,000 years, so the dust must be continuously replenished in collisions between large bodies with sizes from 1 cm to 0.5 km, and between the same large bodies and high-velocity particles coming from outside the Jovian system. This parent-body population is confined to the narrow and bright outer part of the main ring, and includes Metis and Adrastea. The largest parent bodies must be less than 0.5 km in size. The upper limit on their size was obtained by the "New Horizons" spacecraft. The previous upper limit, obtained from HST and "Cassini" observations, was near 4 km. The dust produced in collisions retains approximately the same orbital elements as the parent bodies and slowly spirals in the direction of Jupiter, forming the faint (in back-scattered light) innermost part of the main ring and the halo ring. The age of the main ring is currently unknown, but it may be the last remnant of a past population of small bodies near Jupiter.
Vertical corrugations.
Images from the "Galileo" and "New Horizons" space probes show the presence of two sets of spiraling vertical corrugations in the main ring. These waves became more tightly wound over time at the rate expected for differential nodal regression in Jupiter's gravity field. Extrapolating backwards, the more prominent of the two sets of waves appears to have been excited in 1995, around the time of the impact of Comet Shoemaker-Levy 9 with Jupiter, while the smaller set appears to date to the first half of 1990. "Galileo"'s November 1996 observations are consistent with wavelengths of 1920 ± 150 and 630 ± 20 km, and vertical amplitudes of 2.4 ± 0.7 and 0.6 ± 0.2 km, for the larger and smaller sets of waves, respectively. The formation of the larger set of waves can be explained if the ring was impacted by a cloud of particles released by the comet with a total mass on the order of 2–5 × 1012 kg, which would have tilted the ring out of the equatorial plane by 2 km. A similar spiraling wave pattern that tightens over time has been observed by "Cassini" in Saturns's C and D rings.
Halo ring.
Appearance and structure.
The halo ring is the innermost and the vertically thickest Jovian ring. Its outer edge coincides with the inner boundary of the main ring at a radius of about 1.72 RJ. From this radius the ring becomes rapidly thicker towards Jupiter. The true vertical extent of the halo is not known, but its material was detected far above the ring plane. The inner boundary of the halo is relatively sharp and located at a radius of about 1.4 RJ, but some material is present further inward. The width of the halo ring is thus about 0.3 RJ. Its shape resembles a thick torus without clear internal structure. In contrast to the main ring, the halo's appearance depends only slightly on the viewing geometry.
The halo ring appears brightest in forward-scattered light, in which it was extensively imaged by "Galileo". While its surface brightness is much less than that of the main ring, its vertically (perpendicular to the ring plane) integrated photon flux is comparable due to its much larger thickness. Despite its large claimed vertical extent, the halo's brightness is strongly concentrated towards the ring plane and follows a power law of the form z^−0.6 to z^−1.5, where "z" is altitude over the ring plane. The halo's appearance in the back-scattered light, as observed by Keck and HST, is the same. However its total photon flux is several times lower than that of the main ring and is more strongly concentrated near the ring plane than in the forward-scattered light.
The spectral properties of the halo ring are different from the main ring. The flux distribution in the range 0.5–2.5 μm is flatter than in the main ring; the halo is not red and may even be blue.
Origin of the halo ring.
The optical properties of the halo ring can be explained by the hypothesis that it comprises only dust with particle sizes less than 15 μm. Parts of the halo located far from the ring plane may consist of submicrometre dust. This dusty composition explains the much stronger forward-scattering, bluer colors and lack of visible structure in the halo. The dust probably originates in the main ring, a claim supported by the fact that the halo's optical depth formula_5 is comparable with that of the dust in the main ring. The large thickness of the halo can be attributed to the excitation of orbital inclinations and eccentricities of dust particles by the electromagnetic forces in the Jovian magnetosphere. The outer boundary of the halo ring coincides with location of a strong 3:2 Lorentz resonance. As Poynting–Robertson drag causes particles to slowly drift towards Jupiter, their orbital inclinations are excited while passing through it. The bloom of the main ring may be a beginning of the halo. The halo ring's inner boundary is not far from the strongest 2:1 Lorentz resonance. In this resonance the excitation is probably very significant, forcing particles to plunge into the Jovian atmosphere thus defining a sharp inner boundary. Being derived from the main ring, the halo has the same age.
Gossamer rings.
Amalthea gossamer ring.
The Amalthea gossamer ring is a very faint structure with a rectangular cross section, stretching from the orbit of Amalthea at 2.54 RJ to about 1.80 RJ. Its inner boundary is not clearly defined because of the presence of the much brighter main ring and halo. The thickness of the ring is approximately 2,300 km near the orbit of Amalthea and slightly decreases in the direction of Jupiter. The Amalthea gossamer ring is actually brightest near its top and bottom edges and becomes gradually brighter towards Jupiter; one of the edges is often brighter than the other. The outer boundary of the ring is relatively steep; the ring's brightness drops abruptly just inward of the orbit of Amalthea, although it may have a small extension beyond the orbit of the satellite, ending near the 4:3 resonance with Thebe. In forward-scattered light the ring appears to be about 30 times fainter than the main ring. In back-scattered light it has been detected only by the Keck telescope and the ACS (Advanced Camera for Surveys) on HST. Back-scattering images show additional structure in the ring: a peak in the brightness just inside the Amalthean orbit and confined to the top or bottom edge of the ring.
In 2002–2003 the "Galileo" spacecraft made two passes through the gossamer rings. During them, its dust counter detected dust particles in the size range 0.2–5 μm. In addition, the spacecraft's star scanner detected small, discrete bodies (< 1 km) near Amalthea. These may represent collisional debris generated by impacts with this satellite.
The detection of the Amalthea gossamer ring from the ground, in "Galileo" images, and the direct dust measurements have allowed the determination of the particle size distribution, which appears to follow the same power law as the dust in the main ring, with "q" = 2 ± 0.5. The optical depth of this ring is about 10^−7, which is an order of magnitude lower than that of the main ring, but the total mass of the dust (10^7–10^9 kg) is comparable.
Thebe gossamer ring.
The Thebe gossamer ring is the faintest Jovian ring. It appears as a very faint structure with a rectangular cross section, stretching from the Thebean orbit at 3.11 RJ to about 1.80 RJ. Its inner boundary is not clearly defined because of the presence of the much brighter main ring and halo. The thickness of the ring is approximately 8,400 km near the orbit of Thebe and slightly decreases in the direction of the planet. The Thebe gossamer ring is brightest near its top and bottom edges and gradually becomes brighter towards Jupiter—much like the Amalthea ring. The outer boundary of the ring is not especially steep. There is a barely visible continuation of the ring beyond the orbit of Thebe, extending up to about 3.75 RJ and called the Thebe Extension. In forward-scattered light the ring appears to be about 3 times fainter than the Amalthea gossamer ring. In back-scattered light it has been detected only by the Keck telescope. Back-scattering images show a peak of brightness just inside the orbit of Thebe. In 2002–2003 the dust counter of the "Galileo" spacecraft detected dust particles in the size range 0.2–5 μm—similar to those in the Amalthea ring—and confirmed the results obtained from imaging.
The optical depth of the Thebe gossamer ring is about 3 × 10^−8, which is three times lower than that of the Amalthea gossamer ring, but the total mass of the dust is the same—about 10^7–10^9 kg. However, the particle size distribution of the dust is somewhat shallower than in the Amalthea ring, following a power law with "q" < 2. In the Thebe Extension the parameter "q" may be even smaller.
Origin of the gossamer rings.
The dust in the gossamer rings originates in essentially the same way as that in the main ring and halo. Its sources are the inner Jovian moons Amalthea and Thebe respectively. High velocity impacts by projectiles coming from outside the Jovian system eject dust particles from their surfaces. These particles initially retain the same orbits as their moons but then gradually spiral inward by Poynting–Robertson drag. The thickness of the gossamer rings is determined by vertical excursions of the moons due to their nonzero orbital inclinations. This hypothesis naturally explains almost all observable properties of the rings: rectangular cross-section, decrease of thickness in the direction of Jupiter and brightening of the top and bottom edges of the rings.
However some properties have so far gone unexplained, like the Thebe Extension, which may be due to unseen bodies outside Thebe's orbit, and structures visible in the back-scattered light. One possible explanation of the Thebe Extension is influence of the electromagnetic forces from the Jovian magnetosphere. When the dust enters the shadow behind Jupiter, it loses its electrical charge fairly quickly. Since the small dust particles partially corotate with the planet, they will move outward during the shadow pass creating an outward extension of the Thebe gossamer ring. The same forces can explain a dip in the particle distribution and ring's brightness, which occurs between the orbits of Amalthea and Thebe.
The peak in brightness just inside Amalthea's orbit, and therefore the vertical asymmetry of the Amalthea gossamer ring, may be due to dust particles trapped at the leading (L4) and trailing (L5) Lagrange points of this moon. The particles may also follow horseshoe orbits between the Lagrangian points. The dust may be present at the leading and trailing Lagrange points of Thebe as well. This discovery implies that there are two particle populations in the gossamer rings: one slowly drifts in the direction of Jupiter as described above, while another remains near a source moon, trapped in a 1:1 resonance with it.
Himalia ring.
In September 2006, as NASA's "New Horizons" mission to Pluto approached Jupiter for a gravity assist, it photographed what appeared to be a faint, previously unknown planetary ring or ring arc, parallel with and slightly inside the orbit of the irregular satellite Himalia. The amount of material in the part of the ring or arc imaged by "New Horizons" was at least 0.04 km^3, assuming it had the same albedo as Himalia. If the ring (arc) is debris from Himalia, it must have formed quite recently, given the century-scale precession of the Himalian orbit. It is possible that the ring could be debris from the impact of a very small undiscovered moon into Himalia, suggesting that Jupiter might continue to gain and lose small moons through collisions.
Exploration.
The existence of the Jovian rings was inferred from observations of the planetary radiation belts by the "Pioneer 11" spacecraft in 1975. In 1979 the "Voyager 1" spacecraft obtained a single overexposed image of the ring system. More extensive imaging was conducted by "Voyager 2" in the same year, which allowed rough determination of the ring's structure. The superior quality of the images obtained by the "Galileo" orbiter between 1995 and 2003 greatly extended the existing knowledge about the Jovian rings. Ground-based observation of the rings by the Keck telescope in 1997 and 2002 and the HST in 1999 revealed the rich structure visible in back-scattered light. Images transmitted by the "New Horizons" spacecraft in February–March 2007 allowed observation of the fine structure in the main ring for the first time. In 2000, the "Cassini" spacecraft en route to Saturn conducted extensive observations of the Jovian ring system. Future missions to the Jovian system will provide additional information about the rings.
| [
{
"math_id": 0,
"text": "n(r)=A\\times r^{-q}"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "\\scriptstyle\\tau"
},
{
"math_id": 3,
"text": "\\scriptstyle\\tau_l\\,=\\,4.7\\times 10^{-6}"
},
{
"math_id": 4,
"text": "\\scriptstyle \\tau_s = 1.3\\times 10^{-6}"
},
{
"math_id": 5,
"text": "\\scriptstyle\\tau_s\\,\\sim\\,10^{-6}"
}
] | https://en.wikipedia.org/wiki?curid=979227 |
9792297 | Solder form | Mathematical construct of fiber bundles
In mathematics, more precisely in differential geometry, a soldering (or sometimes solder form) of a fiber bundle to a smooth manifold is a manner of attaching the fibers to the manifold in such a way that they can be regarded as tangent. Intuitively, soldering expresses in abstract terms the idea that a manifold may have a point of contact with a certain model Klein geometry at each point. In extrinsic differential geometry, the soldering is simply expressed by the tangency of the model space to the manifold. In intrinsic geometry, other techniques are needed to express it. Soldering was introduced in this general form by Charles Ehresmann in 1950.
Soldering of a fibre bundle.
Let "M" be a smooth manifold, and "G" a Lie group, and let "E" be a smooth fibre bundle over "M" with structure group "G". Suppose that "G" acts transitively on the typical fibre "F" of "E", and that dim "F" = dim "M". A soldering of "E" to "M" consists of the following data:
In particular, this latter condition can be interpreted as saying that θ determines a linear isomorphism
formula_0
from the tangent space of "M" at "x" to the (vertical) tangent space of the fibre at the point determined by the distinguished section. The form θ is called the solder form for the soldering.
Special cases.
By convention, whenever the choice of soldering is unique or canonically determined, the solder form is called the canonical form, or the tautological form.
Affine bundles and vector bundles.
Suppose that "E" is an affine vector bundle (a vector bundle without a choice of zero section). Then a soldering on "E" specifies first a "distinguished section": that is, a choice of zero section "o", so that "E" may be identified as a vector bundle. The solder form is then a linear isomorphism
formula_1
However, for a vector bundle there is a canonical isomorphism between the vertical space at the origin and the fibre Vo"E" ≈ "E". Making this identification, the solder form is specified by a linear isomorphism
formula_2
In other words, a soldering on an affine bundle "E" is a choice of isomorphism of "E" with the tangent bundle of "M".
Often one speaks of a "solder form on a vector bundle", where it is understood "a priori" that the distinguished section of the soldering is the zero section of the bundle. In this case, the structure group of the vector bundle is often implicitly enlarged by the semidirect product of "GL"("n") with the typical fibre of "E" (which is a representation of "GL"("n")).
Principal bundles.
In the language of principal bundles, a solder form on a smooth principal "G"-bundle "P" over a smooth manifold "M" is a horizontal and "G"-equivariant differential 1-form on "P" with values in a linear representation "V" of "G" such that the associated bundle map from the tangent bundle "TM" to the associated bundle "P"×"G" "V" is a bundle isomorphism. (In particular, "V" and "M" must have the same dimension.)
A motivating example of a solder form is the tautological or fundamental form on the frame bundle of a manifold.
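For concreteness, the tautological form just mentioned can be written down explicitly (a standard textbook formula, sketched here rather than taken from the text above): a frame "u" over "x" = π("u") is a linear isomorphism "u" : R"n" → "T""x""M", and

```latex
\theta_u(X) = u^{-1}\!\left(\mathrm{d}\pi_u(X)\right), \qquad X \in T_u F(M),
```

which is horizontal and GL("n")-equivariant, and induces the required bundle isomorphism "TM" ≅ F("M") ×GL("n") R"n".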
The reason for the name is that a solder form solders (or attaches) the abstract principal bundle to the manifold "M" by identifying an associated bundle with the tangent bundle. Solder forms provide a method for studying "G"-structures and are important in the theory of Cartan connections. The terminology and approach are particularly popular in the physics literature. | [
{
"math_id": 0,
"text": "\\theta_x : T_xM\\rightarrow V_{o(x)} E"
},
{
"math_id": 1,
"text": "\\theta\\colon TM \\to V_oE,"
},
{
"math_id": 2,
"text": "TM \\to E."
},
{
"math_id": 3,
"text": "g\\colon TM \\to T^*M"
},
{
"math_id": 4,
"text": "TM"
},
{
"math_id": 5,
"text": "VE"
}
] | https://en.wikipedia.org/wiki?curid=9792297 |
979251 | Orbital maneuver | A movement during spaceflight
In spaceflight, an orbital maneuver (otherwise known as a burn) is the use of propulsion systems to change the orbit of a spacecraft.
For spacecraft far from Earth (for example those in orbits around the Sun) an orbital maneuver is called a "deep-space maneuver (DSM)".
When a spacecraft is not conducting a maneuver, especially in a transfer orbit, it is said to be "coasting".
General.
Rocket equation.
The Tsiolkovsky rocket equation, or ideal rocket equation, can be useful for analysis of maneuvers by vehicles using rocket propulsion. A rocket applies acceleration to itself (a thrust) by expelling part of its mass at high speed. The rocket itself moves due to the conservation of momentum.
Delta-v.
The applied change in velocity of each maneuver is referred to as delta-v (formula_0).
The delta-v values for all expected maneuvers of a mission are estimated and summarized in a delta-v budget. With a good approximation of the delta-v budget, designers can estimate the propellant required for the planned maneuvers.
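As a minimal sketch of how a delta-v budget feeds the rocket equation (the mission figures below are assumed, purely for illustration):

```python
import math

def propellant_mass(m_dry, delta_v, isp, g0=9.80665):
    """Propellant (kg) needed to supply delta_v (m/s) to a dry mass m_dry (kg),
    from the Tsiolkovsky rocket equation: delta_v = Isp*g0*ln(m0/m_dry)."""
    m0 = m_dry * math.exp(delta_v / (isp * g0))   # required initial mass
    return m0 - m_dry

# Assumed delta-v budget for a hypothetical mission (m/s):
budget = {"injection": 3100.0, "corrections": 30.0, "orbit insertion": 1500.0}
print(propellant_mass(m_dry=1000.0, delta_v=sum(budget.values()), isp=320.0))
```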
Propulsion.
Impulsive maneuvers.
An impulsive maneuver is the mathematical model of a maneuver as an instantaneous change in the spacecraft's velocity (magnitude and/or direction) as illustrated in figure 1. It is the limit case of a burn to generate a particular amount of delta-v, as the burn time tends to zero.
In the physical world no truly instantaneous change in velocity is possible as this would require an "infinite force" applied during an "infinitely short time" but as a mathematical model it in most cases describes the effect of a maneuver on the orbit very well.
The offset of the velocity vector at the end of a real burn from the velocity vector at the same time resulting from the theoretical impulsive maneuver is caused only by the difference in gravitational force along the two paths (red and black in figure 1), which in general is small.
In the planning phase of space missions designers will first approximate their intended orbital changes using impulsive maneuvers that greatly reduces the complexity of finding the correct orbital transitions.
Low thrust propulsion.
Applying a low thrust over a longer period of time is referred to as a non-impulsive maneuver. 'Non-impulsive' refers to the momentum changing slowly over a long time, as in electrically powered spacecraft propulsion, rather than by a short impulse.
Another term is "finite burn", where the word "finite" is used to mean "non-zero", or practically, again: over a longer period.
For a few space missions, such as those including a space rendezvous, high fidelity models of the trajectories are required to meet the mission goals. Calculating a "finite" burn requires a detailed model of the spacecraft and its thrusters. The most important details include: mass, center of mass, moment of inertia, thruster positions, thrust vectors, thrust curves, specific impulse, thrust centroid offsets, and fuel consumption.
Assists.
Oberth effect.
In astronautics, the Oberth effect is the phenomenon whereby a rocket engine used when travelling at high speed generates much more useful energy than the same engine used at low speed. The Oberth effect occurs because the propellant has more usable energy (due to its kinetic energy on top of its chemical potential energy), and it turns out that the vehicle is able to employ this kinetic energy to generate more mechanical power. It is named after Hermann Oberth, the Austro-Hungarian-born German physicist and a founder of modern rocketry, who apparently first described the effect.
The Oberth effect is used in a powered flyby or Oberth maneuver where the application of an impulse, typically from the use of a rocket engine, close to a gravitational body (where the gravity potential is low, and the speed is high) can give much more change in kinetic energy and final speed (i.e. higher specific energy) than the same impulse applied further from the body for the same initial orbit.
Since the Oberth maneuver happens in a very limited time (while still at low altitude), to generate a high impulse the engine necessarily needs to achieve high thrust (impulse is by definition the time multiplied by thrust). Thus the Oberth effect is far less useful for low-thrust engines, such as ion thrusters.
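The effect can be illustrated by the kinetic energy gained per unit mass from the same impulsive burn at two different speeds (a sketch; the speeds below are assumed values):

```python
def specific_energy_gain(v, dv):
    """Kinetic energy per unit mass gained by an impulse dv at speed v:
    ((v + dv)**2 - v**2) / 2 = v*dv + dv**2/2."""
    return v * dv + 0.5 * dv**2

# The same 1 km/s burn buys ~14x more energy deep in a gravity well:
print(specific_energy_gain(v=50_000.0, dv=1_000.0))   # fast, near periapsis
print(specific_energy_gain(v=3_000.0, dv=1_000.0))    # slow, far from the body
```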
Historically, a lack of understanding of this effect led investigators to conclude that interplanetary travel would require completely impractical amounts of propellant, as without it, enormous amounts of energy are needed.
Gravity assist.
In astrodynamics a gravity assist maneuver, gravitational slingshot or swing-by is the use of the relative movement and gravity of a planet or other celestial body to alter the trajectory of a spacecraft, typically in order to save propellant, time, and expense. Gravity assistance can be used to accelerate, decelerate and/or re-direct the path of a spacecraft.
The "assist" is provided by the motion (orbital angular momentum) of the gravitating body as it pulls on the spacecraft. The technique was first proposed as a mid-course maneuver in 1961, and used by interplanetary probes from "Mariner 10" onwards, including the two "Voyager" probes' notable fly-bys of Jupiter and Saturn.
Transfer orbits.
Orbit insertion maneuvers leave a spacecraft in a destination orbit. In contrast, orbit injection maneuvers occur when a spacecraft enters a transfer orbit, e.g. trans-lunar injection (TLI), trans-Mars injection (TMI) and trans-Earth injection (TEI). These are generally larger than small trajectory correction maneuvers. Insertion, injection and sometimes initiation are used to describe entry into a "descent orbit", e.g. the Powered Descent Initiation maneuver used for Apollo lunar landings.
Hohmann transfer.
In orbital mechanics, the Hohmann transfer orbit is an elliptical orbit used to transfer between two circular orbits of different altitudes, in the same plane.
The orbital maneuver to perform the Hohmann transfer uses two engine impulses which move a spacecraft onto and off the transfer orbit. This maneuver was named after Walter Hohmann, the German scientist who published a description of it in his 1925 book "Die Erreichbarkeit der Himmelskörper" ("The Accessibility of Celestial Bodies"). Hohmann was influenced in part by the German science fiction author Kurd Laßwitz and his 1897 book "Two Planets".
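A sketch of the two impulses, based on the vis-viva equation (the LEO and geostationary radii below are assumed round figures):

```python
import math

MU_EARTH = 3.986004418e14   # gravitational parameter of Earth, m^3/s^2

def hohmann_burns(r1, r2, mu=MU_EARTH):
    """Delta-v of the two impulses of a Hohmann transfer between coplanar
    circular orbits of radii r1 < r2 (metres)."""
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)  # enter ellipse
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))  # circularize
    return dv1, dv2

print(hohmann_burns(6.678e6, 4.2164e7))   # LEO -> GEO: about 2.43 and 1.47 km/s
```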
Bi-elliptic transfer.
In astronautics and aerospace engineering, the bi-elliptic transfer is an orbital maneuver that moves a spacecraft from one orbit to another and may, in certain situations, require less delta-v than a Hohmann transfer maneuver.
The bi-elliptic transfer consists of two half elliptic orbits. From the initial orbit, a delta-v is applied boosting the spacecraft into the first transfer orbit with an apoapsis at some point formula_1 away from the central body. At this point, a second delta-v is applied sending the spacecraft into the second elliptical orbit with periapsis at the radius of the final desired orbit, where a third delta-v is performed, injecting the spacecraft into the desired orbit.
While it requires one more engine burn than a Hohmann transfer and generally involves a greater travel time, some bi-elliptic transfers require a lower amount of total delta-v than a Hohmann transfer when the ratio of final to initial semi-major axis is 11.94 or greater, depending on the intermediate semi-major axis chosen.
The idea of the bi-elliptical transfer trajectory was first published by Ary Sternfeld in 1934.
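A sketch comparing the three bi-elliptic burns with a Hohmann transfer (vis-viva based; the radii and intermediate apoapsis below are assumed, chosen so the ratio exceeds 11.94):

```python
import math

def bielliptic_dv(r1, r2, rb, mu):
    """Total delta-v of a bi-elliptic transfer from a circular orbit of radius
    r1 to radius r2 via an intermediate apoapsis rb (r1 < r2 <= rb)."""
    a1, a2 = (r1 + rb) / 2, (rb + r2) / 2            # the two transfer ellipses
    dv1 = math.sqrt(2*mu/r1 - mu/a1) - math.sqrt(mu/r1)
    dv2 = math.sqrt(2*mu/rb - mu/a2) - math.sqrt(2*mu/rb - mu/a1)
    dv3 = math.sqrt(2*mu/r2 - mu/a2) - math.sqrt(mu/r2)   # retro burn magnitude
    return dv1 + dv2 + dv3

mu, r1 = 3.986004418e14, 7.0e6
r2, rb = 14 * r1, 40 * r1                            # ratio 14 > 11.94
hohmann = (math.sqrt(mu/r1) * (math.sqrt(2*r2/(r1+r2)) - 1)
           + math.sqrt(mu/r2) * (1 - math.sqrt(2*r1/(r1+r2))))
print(bielliptic_dv(r1, r2, rb, mu), "<", hohmann)   # slightly cheaper here
```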
Low energy transfer.
A low energy transfer, or low energy trajectory, is a route in space which allows spacecraft to change orbits using very little fuel. These routes work in the Earth-Moon system and also in other systems, such as traveling between the satellites of Jupiter. The drawback of such trajectories is that they take much longer to complete than higher energy (more fuel) transfers such as Hohmann transfer orbits.
Low energy transfers are also known as weak stability boundary trajectories, or ballistic capture trajectories.
Low energy transfers follow special pathways in space, sometimes referred to as the Interplanetary Transport Network. Following these pathways allows for long distances to be traversed for little expenditure of delta-v.
Orbital inclination change.
Orbital inclination change is an orbital maneuver aimed at changing the inclination of an orbiting body's orbit. This maneuver is also known as an orbital plane change as the plane of the orbit is tipped. This maneuver requires a change in the orbital velocity vector (delta v) at the orbital nodes (i.e. the point where the initial and desired orbits intersect, the line of orbital nodes is defined by the intersection of the two orbital planes).
In general, inclination changes can require a great deal of delta-v to perform, and most mission planners try to avoid them whenever possible to conserve fuel. This is typically achieved by launching a spacecraft directly into the desired inclination, or as close to it as possible so as to minimize any inclination change required over the duration of the spacecraft life.
Maximum efficiency of inclination change is achieved at apoapsis (or apogee), where orbital velocity formula_2 is the lowest. In some cases, it may require less total delta-v to raise the spacecraft into a higher orbit, change the orbit plane at the higher apogee, and then lower the spacecraft to its original altitude.
Constant-thrust trajectory.
Constant-thrust and constant-acceleration trajectories involve the spacecraft firing its engine in a prolonged constant burn. In the limiting case where the vehicle acceleration is high compared to the local gravitational acceleration, the spacecraft points straight toward the target (accounting for target motion), and remains accelerating constantly under high thrust until it reaches its target. In this high-thrust case, the trajectory approaches a straight line. If it is required that the spacecraft rendezvous with the target, rather than performing a flyby, then the spacecraft must flip its orientation halfway through the journey, and decelerate the rest of the way.
In the constant-thrust trajectory, the vehicle's acceleration increases during the thrusting period, since fuel consumption means the vehicle mass decreases. If, instead of constant thrust, the vehicle has constant acceleration, the engine thrust must decrease during the trajectory.
This trajectory requires that the spacecraft maintain a high acceleration for long durations. For interplanetary transfers, days, weeks or months of constant thrusting may be required. As a result, there are no currently available spacecraft propulsion systems capable of using this trajectory. It has been suggested that some forms of nuclear (fission or fusion based) or antimatter powered rockets would be capable of this trajectory.
More practically, this type of maneuver is used in low thrust maneuvers, for example with ion engines, Hall-effect thrusters, and others. These types of engines have very high specific impulse (fuel efficiency) but currently are only available with fairly low absolute thrust.
Rendezvous and docking.
Orbit phasing.
In astrodynamics orbit phasing is the adjustment of the time-position of spacecraft along its orbit, usually described as adjusting the orbiting spacecraft's true anomaly.
Space rendezvous and docking.
A space rendezvous is a sequence of orbital maneuvers during which two spacecraft, one of which is often a space station, arrive at the same orbit and approach to a very close distance (e.g. within visual contact). Rendezvous requires a precise match of the orbital velocities of the two spacecraft, allowing them to remain at a constant distance through orbital station-keeping. Rendezvous is commonly followed by docking or berthing, procedures which bring the spacecraft into physical contact and create a link between them.
| [
{
"math_id": 0,
"text": "\\Delta\\mathbf{v}\\,"
},
{
"math_id": 1,
"text": "r_b"
},
{
"math_id": 2,
"text": "v\\,"
}
] | https://en.wikipedia.org/wiki?curid=979251 |
9793263 | Covariance function | Function in probability theory
In probability theory and statistics, the covariance function describes how much two random variables change together (their "covariance") with varying spatial or temporal separation. For a random field or stochastic process "Z"("x") on a domain "D", a covariance function "C"("x", "y") gives the covariance of the values of the random field at the two locations "x" and "y":
formula_0
The same "C"("x", "y") is called the autocovariance function in two instances: in time series (to denote exactly the same concept except that "x" and "y" refer to locations in time rather than in space), and in multivariate random fields (to refer to the covariance of a variable with itself, as opposed to the cross covariance between two different variables at different locations, Cov("Z"("x"1), "Y"("x"2))).
Admissibility.
For locations "x"1, "x"2, ..., "x""N" ∈ "D" the variance of every linear combination
formula_1
can be computed as
formula_2
A function is a valid covariance function if and only if this variance is non-negative for all possible choices of "N" and weights "w"1, ..., "w""N". A function with this property is called positive semidefinite.
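A quick numerical check of this condition (a sketch; the grid of locations is arbitrary): build the matrix ["C"("x""i", "x""j")] and confirm its eigenvalues are non-negative.

```python
import numpy as np

def is_valid_covariance(C, xs, tol=1e-10):
    """True if the matrix [C(x_i, x_j)] over the locations xs is positive
    semidefinite, i.e. every linear combination has non-negative variance."""
    K = np.array([[C(xi, xj) for xj in xs] for xi in xs])
    return bool(np.all(np.linalg.eigvalsh(K) >= -tol))   # tolerance for round-off

# The exponential covariance defined below passes the check:
print(is_valid_covariance(lambda x, y: np.exp(-abs(x - y)), np.linspace(0, 5, 40)))
```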
Simplifications with stationarity.
In case of a weakly stationary random field, where
formula_3
for any lag "h", the covariance function can be represented by a one-parameter function
formula_4
which is called a "covariogram" and also a "covariance function". Implicitly the "C"("x""i", "x""j") can be computed from "C""s"("h") by:
formula_5
The positive definiteness of this single-argument version of the covariance function can be checked by Bochner's theorem.
Parametric families of covariance functions.
For a given variance formula_6, a simple stationary parametric covariance function is the "exponential covariance function"
formula_7
where "V" is a scaling parameter (correlation length), and "d" = "d"("x","y") is the distance between two points. Sample paths of a Gaussian process with the exponential covariance function are not smooth. The "squared exponential" (or "Gaussian") covariance function:
formula_8
is a stationary covariance function with smooth sample paths.
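A sketch contrasting the two kernels by drawing Gaussian-process sample paths from each (the grid and random seed are arbitrary, and a small diagonal jitter is added for numerical stability):

```python
import numpy as np

def exponential_cov(d, sigma2=1.0, V=1.0):
    return sigma2 * np.exp(-d / V)            # rough sample paths

def squared_exponential_cov(d, sigma2=1.0, V=1.0):
    return sigma2 * np.exp(-(d / V) ** 2)     # smooth sample paths

x = np.linspace(0.0, 5.0, 200)
d = np.abs(x[:, None] - x[None, :])           # pairwise distances
rng = np.random.default_rng(0)
for cov in (exponential_cov, squared_exponential_cov):
    K = cov(d) + 1e-9 * np.eye(len(x))
    path = np.linalg.cholesky(K) @ rng.standard_normal(len(x))  # one sample path
```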
The Matérn covariance function and rational quadratic covariance function are two parametric families of stationary covariance functions. The Matérn family includes the exponential and squared exponential covariance functions as special cases. | [
{
"math_id": 0,
"text": "\nC(x,y) \n:= \\operatorname{cov}(Z(x),Z(y)) \n= \\mathbb{E}\\Big[\\big(Z(x)-\\mathbb{E}[Z(x)]\\big)\\big(Z(y)-\\mathbb{E}[Z(y)]\\big) \\Big].\\, \n"
},
{
"math_id": 1,
"text": "X=\\sum_{i=1}^N w_i Z(x_i)"
},
{
"math_id": 2,
"text": "\\operatorname{var}(X)=\\sum_{i=1}^N \\sum_{j=1}^N w_i C(x_i,x_j) w_j."
},
{
"math_id": 3,
"text": "C(x_i,x_j)=C(x_i+h,x_j+h)\\,"
},
{
"math_id": 4,
"text": "C_s(h)=C(0,h)=C(x,x+h)\\,"
},
{
"math_id": 5,
"text": "C(x,y)=C_s(y-x).\\,"
},
{
"math_id": 6,
"text": "\\sigma^2"
},
{
"math_id": 7,
"text": "\nC(d) = \\sigma^2 \\exp(-d/V)\n"
},
{
"math_id": 8,
"text": "\nC(d) = \\sigma^2 \\exp(-(d/V)^2)\n"
}
] | https://en.wikipedia.org/wiki?curid=9793263 |
979374 | Orbital inclination change | Orbital inclination change is an orbital maneuver aimed at changing the inclination of an orbiting body's orbit. This maneuver is also known as an orbital plane change as the plane of the orbit is tipped. This maneuver requires a change in the orbital velocity vector (delta-v) at the orbital nodes (i.e. the point where the initial and desired orbits intersect, the line of orbital nodes is defined by the intersection of the two orbital planes).
In general, inclination changes can take a very large amount of delta-v to perform, and most mission planners try to avoid them whenever possible to conserve fuel. This is typically achieved by launching a spacecraft directly into the desired inclination, or as close to it as possible so as to minimize any inclination change required over the duration of the spacecraft life. Planetary flybys are the most efficient way to achieve large inclination changes, but they are only effective for interplanetary missions.
Efficiency.
The simplest way to perform a plane change is to perform a burn around one of the two crossing points of the initial and final planes. The delta-v required is the vector change in velocity between the two planes at that point.
However, maximum efficiency of inclination changes is achieved at apoapsis (or apogee), where orbital velocity formula_0 is the lowest. In some cases, it can require less total delta-v to raise the satellite into a higher orbit, change the orbit plane at the higher apogee, and then lower the satellite to its original altitude.
For the most efficient example mentioned above, targeting an inclination at apoapsis also changes the argument of periapsis. However, targeting in this manner limits the mission designer to changing the plane only along the line of apsides.
For Hohmann transfer orbits, the initial orbit and the final orbit are 180 degrees apart. Because the transfer orbital plane has to include the central body, such as the Sun, and the initial and final nodes, this can require two 90 degree plane changes to reach and leave the transfer plane. In such cases it is often more efficient to use a "broken plane maneuver" where an additional burn is done so that plane change only occurs at the intersection of the initial and final orbital planes, rather than at the ends.
Inclination entangled with other orbital elements.
An important subtlety of performing an inclination change is that Keplerian orbital inclination is defined by the angle between ecliptic North and the vector normal to the orbit plane (i.e. the angular momentum vector). This means that inclination is always positive and is entangled with other orbital elements, primarily the argument of periapsis, which is in turn connected to the longitude of the ascending node. This can result in two very different orbits with precisely the same inclination.
Calculation.
In a pure inclination change, only the inclination of the orbit is changed while all other orbital characteristics (radius, shape, etc.) remain the same as before. Delta-v (formula_1) required for an inclination change (formula_2) can be calculated as follows:
formula_3
where:
formula_4 is the orbital eccentricity,
formula_5 is the argument of periapsis,
formula_6 is the true anomaly,
formula_7 is the mean motion, and
formula_8 is the semi-major axis.
For more complicated maneuvers which may involve a combination of change in inclination and orbital radius, the delta-v is the vector difference between the velocity vectors of the initial orbit and the desired orbit at the transfer point. These types of combined maneuvers are commonplace, as it is more efficient to perform multiple orbital maneuvers at the same time if these maneuvers have to be done at the same location.
According to the law of cosines, the minimum Delta-v (formula_9) required for any such combined maneuver can be calculated with the following equation
formula_10
Here formula_11 and formula_12 are the initial and target velocities.
Circular orbit inclination change.
Where both orbits are circular (i.e. formula_13) and have the same radius, the delta-v (formula_1) required for an inclination change (formula_2) can be calculated using:
formula_14
where formula_0 is the orbital velocity and has the same units as formula_1.
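A sketch of both formulas (the orbital speed below is an assumed LEO value):

```python
import math

def dv_inclination_circular(v, di):
    """Delta-v for a pure plane change of di radians on a circular orbit
    of speed v: 2*v*sin(di/2)."""
    return 2.0 * v * math.sin(di / 2.0)

def dv_combined(v1, v2, di):
    """Minimum delta-v for a combined speed-and-plane change (law of cosines)."""
    return math.sqrt(v1**2 + v2**2 - 2.0 * v1 * v2 * math.cos(di))

# A 30 degree plane change at ~7.73 km/s costs about 4 km/s, which is why
# mission planners avoid large plane changes:
print(dv_inclination_circular(7730.0, math.radians(30.0)))
print(dv_combined(7730.0, 7730.0, math.radians(30.0)))  # same value when v1 == v2
```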
Other ways to change inclination.
Some other ways to change inclination that do not require burning propellant (or help reduce the amount of propellant required) include
Transits of other bodies such as the Moon can also be done.
None of these methods will change the delta-v required; they are simply alternate means of achieving the same end result and, ideally, will reduce propellant usage.
| [
{
"math_id": 0,
"text": "v"
},
{
"math_id": 1,
"text": "\\Delta v_i"
},
{
"math_id": 2,
"text": "\\Delta i"
},
{
"math_id": 3,
"text": "\\Delta v_i = {2\\sin(\\frac{\\Delta{i}}{2})(1+e\\cos(f))na \\over {\\sqrt{1-e^2}\\cos(\\omega+f)}}"
},
{
"math_id": 4,
"text": "e\\,"
},
{
"math_id": 5,
"text": "\\omega\\,"
},
{
"math_id": 6,
"text": "f\\,"
},
{
"math_id": 7,
"text": "n\\,"
},
{
"math_id": 8,
"text": "a\\,"
},
{
"math_id": 9,
"text": "\\Delta{v}\\,"
},
{
"math_id": 10,
"text": "\\Delta v = \\sqrt{V_1^2 + V_2^2 - 2 V_1 V_2 cos(\\Delta i)}"
},
{
"math_id": 11,
"text": "V_1"
},
{
"math_id": 12,
"text": "V_2"
},
{
"math_id": 13,
"text": "e = 0"
},
{
"math_id": 14,
"text": "\\Delta v_i = {2v\\, \\sin \\left(\\frac{\\Delta{i}}{2} \\right)}"
}
] | https://en.wikipedia.org/wiki?curid=979374 |
979500 | Gravity loss | In astrodynamics and rocketry, gravity loss is a measure of the loss in the net performance of a rocket while it is thrusting in a gravitational field. In other words, it is the cost of having to hold the rocket up in a gravity field.
Gravity losses depend on the time over which thrust is applied as well as the direction in which the thrust is applied. Gravity losses as a proportion of delta-v are minimised if maximum thrust is applied for a short time, and by avoiding thrusting directly away from the local gravitational field. During the launch and ascent phase, however, thrust must be applied over a long period with a major component of thrust in the opposite direction to gravity, so gravity losses become significant. For example, to reach a speed of 7.8 km/s in low Earth orbit requires a delta-v of between 9 and 10 km/s. The additional 1.5 to 2 km/s delta-v is due to gravity losses, steering losses and atmospheric drag.
Example.
Consider the simplified case of a vehicle with constant mass accelerating vertically with a constant thrust per unit mass "a" in a gravitational field of strength "g". The actual acceleration of the craft is "a"-"g" and it is using delta-v at a rate of "a" per unit time.
Over a time "t" the change in speed of the spacecraft is ("a"-"g")"t", whereas the delta-v expended is "at". The gravity loss is the difference between these figures, which is "gt". As a proportion of delta-v, the gravity loss is "g"/"a".
A very large thrust over a very short time will achieve a desired speed increase with little gravity loss. On the other hand, if "a" is only slightly greater than "g", the gravity loss is a large proportion of delta-v. Gravity loss can be described as the extra delta-v needed because of not being able to spend all the needed delta-v instantaneously.
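The example reduces to two lines of arithmetic (a sketch; the thrust level and burn time below are assumed figures):

```python
def gravity_loss(a, t, g=9.81):
    """Constant-mass vertical thrust per unit mass a applied for time t:
    delta-v spent is a*t, speed gained is (a - g)*t, so the loss is g*t
    (equivalently, a fraction g/a of the delta-v)."""
    return g * t, g / a

print(gravity_loss(a=2 * 9.81, t=100.0))   # (981.0, 0.5): half the delta-v lost
```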
This effect can be explained in two equivalent ways:
These effects apply whenever climbing to an orbit with higher specific orbital energy, such as during launch to low Earth orbit (LEO) or from LEO to an escape orbit. This is a worst-case calculation; in practice, gravity loss during launch and ascent is less than the maximum value of "gt" because the launch trajectory does not remain vertical and the vehicle's mass is not constant, due to consumption of propellant and staging.
Vector considerations.
Thrust is a vector quantity, and the direction of the thrust has a large impact on the size of gravity losses. For instance, gravity loss on a rocket of mass "m" would reduce a 3"m""g" thrust directed upward to an acceleration of 2"g". However, the same 3"mg" thrust could be directed at such an angle that it had a 1"mg" upward component, completely canceled by gravity, and a horizontal component of mg×formula_0 = 2.8"mg" (by Pythagoras' theorem), achieving a 2.8"g" horizontal acceleration.
As orbital speeds are approached, vertical thrust can be reduced as centrifugal force (in the rotating frame of reference around the center of the Earth) counteracts a large proportion of the gravitational force on the rocket, and more of the thrust can be used to accelerate. Gravity losses can therefore also be described as the integral of gravity (irrespective of the vector of the rocket) minus the centrifugal force. Using this perspective, when a spacecraft reaches orbit, the gravity losses continue but are counteracted perfectly by the centrifugal force. Since a rocket has very little centrifugal force at launch, the net gravity losses per unit time are large at liftoff.
It is important to note that minimising gravity losses is not the only objective of a launching spacecraft. Rather, the objective is to achieve the position/velocity combination for the desired orbit. For instance, the way to maximize acceleration is to thrust straight downward; however, thrusting downward is clearly not a viable course of action for a rocket intending to reach orbit. | [
{
"math_id": 0,
"text": "\\sqrt{3^2-1^2}"
}
] | https://en.wikipedia.org/wiki?curid=979500 |
979503 | Graham number | Calculated number representing the hypothetical value of a stock
The Graham number or Benjamin Graham number is a figure used in securities investing that measures a stock's so-called fair value. Named after Benjamin Graham, the founder of value investing, the Graham number can be calculated as follows:
formula_0
The final number is, theoretically, the maximum price that a defensive investor should pay for the given stock. Put another way, a stock priced below the Graham Number would be considered a good value, if it also meets a number of other criteria.
The Number represents the geometric mean of the maximum that one would pay based on earnings and based on book value. Graham writes:
<templatestyles src="Template:Blockquote/styles.css" />Current price should not be more than 1<templatestyles src="Fraction/styles.css" />1⁄2 times the book value last reported. However a multiplier of earnings below 15 could justify a correspondingly higher multiplier of assets. As a rule of thumb we suggest that the "product" of the multiplier times the ratio of price to book value should not exceed 22.5. (This figure corresponds to 15 times earnings and 1<templatestyles src="Fraction/styles.css" />1⁄2 times book value. It would admit an issue selling at only 9 times earnings and 2.5 times asset value, etc.)
Alternative calculation.
Earnings per share is calculated by dividing "net income" by "shares outstanding". Book value is another way of saying "shareholders' equity". Therefore, book value per share is calculated by dividing "equity" by "shares outstanding". Consequently, the formula for the Graham number can also be written as follows:
formula_1
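A direct implementation (a sketch; the input figures in the example are assumed):

```python
def graham_number(eps, book_value_per_share):
    """sqrt(22.5 * EPS * BVPS); meaningful only for positive inputs."""
    if eps <= 0 or book_value_per_share <= 0:
        raise ValueError("EPS and book value per share must be positive")
    return (22.5 * eps * book_value_per_share) ** 0.5

print(graham_number(1.50, 10.0))   # ~18.37; prices below this pass the screen
```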
| [
{
"math_id": 0,
"text": "\\sqrt{22.5\\times(\\text{earnings per share})\\times(\\text{book value per share})}"
},
{
"math_id": 1,
"text": "\\sqrt{15 \\times 1.5 \\times \\left(\\frac{\\text{net income}}{\\text{shares outstanding}}\\right) \\times \\left(\\frac{\\mathrm{shareholders'\\ equity}}{\\text{shares outstanding}}\\right)}"
}
] | https://en.wikipedia.org/wiki?curid=979503 |
9795255 | Forward measure | In finance, a "T"-forward measure is a pricing measure absolutely continuous with respect to a risk-neutral measure, but rather than using the money market as numeraire, it uses a bond with maturity "T". The use of the forward measure was pioneered by Farshid Jamshidian (1987), and later used as a means of calculating the price of options on bonds.
Mathematical definition.
Let
formula_0
be the bank account or money market account numeraire and
formula_1
be the discount factor in the market at time 0 for maturity "T". If formula_2 is the risk neutral measure, then the forward measure formula_3 is defined via the Radon–Nikodym derivative given by
formula_4
Note that this implies that the forward measure and the risk neutral measure coincide when interest rates are deterministic. Also, this is a particular form of the change of numeraire formula by changing the numeraire from the money market or bank account "B"("t") to a "T"-maturity bond "P"("t","T"). Indeed, if in general
formula_5
is the price of a zero coupon bond at time "t" for maturity "T", where formula_6 is the filtration denoting market information at time "t", then we can write
formula_7
from which it is indeed clear that the forward "T" measure is associated to the "T"-maturity zero coupon bond as numeraire. For a more detailed discussion see Brigo and Mercurio (2001).
Consequences.
The name "forward measure" comes from the fact that under the forward measure, forward prices are martingales, a fact first observed by Geman (1989) (who is responsible for formally defining the measure). Compare with futures prices, which are martingales under the risk neutral measure. Note that when interest rates are deterministic, this implies that forward prices and futures prices are the same.
For example, the discounted stock price is a martingale under the risk-neutral measure:
formula_8
The forward price is given by formula_9. Thus, we have formula_10
formula_11
by using the Radon–Nikodym derivative formula_12 and the equality formula_10. The last term is equal to unity by the definition of the bond price, so that we get
formula_13
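A Monte Carlo sketch of this identity (the discount-factor and stock models below are assumed toy dynamics, chosen only so that the discounted stock is a Q*-martingale):

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, S0, r0, sig_r, sig_s = 500_000, 1.0, 100.0, 0.03, 0.01, 0.2
Z1, Z2 = rng.standard_normal((2, n))

D = np.exp(-(r0 * T + sig_r * np.sqrt(T) * Z1))      # stochastic D(T) under Q*
S = S0 / D * np.exp(sig_s * np.sqrt(T) * Z2 - 0.5 * sig_s**2 * T)  # E[D*S] = S0

P0 = D.mean()                                        # P(0,T) = E_{Q*}[D(T)]
w = D / P0                                           # Radon-Nikodym weights
print(S0 / P0, np.mean(w * S))   # forward price vs E_{Q_T}[S(T)]: they agree
```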
| [
{
"math_id": 0,
"text": "B(T) = \\exp\\left(\\int_0^T r(u)\\, du\\right)"
},
{
"math_id": 1,
"text": "D(T) = 1/B(T) = \\exp\\left(-\\int_0^T r(u)\\, du\\right)"
},
{
"math_id": 2,
"text": "Q_*"
},
{
"math_id": 3,
"text": "Q_T"
},
{
"math_id": 4,
"text": "\\frac{dQ_T}{dQ_*} = \\frac{1}{B(T) E_{Q_*}[1/B(T)]} = \\frac{D(T)}{E_{Q_*}[D(T)]}."
},
{
"math_id": 5,
"text": "P(t,T) = E_{Q_*}\\left[\\frac{B(t)}{B(T)}|\\mathcal{F}(t)\\right] = E_{Q_*}\\left[\\frac{D(T)}{D(t)}|\\mathcal{F}(t)\\right] "
},
{
"math_id": 6,
"text": "\\mathcal{F}(t)"
},
{
"math_id": 7,
"text": "\\frac{dQ_T}{dQ_*} = \\frac{B(0) P(T,T)}{B(T) P(0,T)} "
},
{
"math_id": 8,
"text": "S(t) D(t) = E_{Q_*}[D(T)S(T) | \\mathcal{F}(t)].\\,"
},
{
"math_id": 9,
"text": "F_S(t,T) = \\frac{S(t)}{P(t,T)}"
},
{
"math_id": 10,
"text": "F_S(T,T)=S(T)"
},
{
"math_id": 11,
"text": "F_S(t,T) = \\frac{E_{Q_*}[D(T)S(T) | \\mathcal{F}(t)]}{D(t) P(t,T)}= E_{Q_T}[F_S(T,T) | \\mathcal{F}(t)]\\frac{E_{Q_*}[D(T)|\\mathcal{F}(t)]}{D(t) P(t,T)}"
},
{
"math_id": 12,
"text": "\\frac{dQ_T}{dQ_*}"
},
{
"math_id": 13,
"text": "F_S(t,T) = E_{Q_T}[F_S(T,T)|\\mathcal{F}(t)].\\,"
}
] | https://en.wikipedia.org/wiki?curid=9795255 |
9795423 | No-wandering-domain theorem | Mathematical theorem
In mathematics, the no-wandering-domain theorem is a result on dynamical systems, proven by Dennis Sullivan in 1985.
The theorem states that a rational map "f" : Ĉ → Ĉ with deg("f") ≥ 2 does not have a wandering domain, where Ĉ denotes the Riemann sphere. More precisely, for every component "U" in the Fatou set of "f", the sequence
formula_0
will eventually become periodic. Here, "f" "n" denotes the "n"-fold iteration of "f", that is,
formula_1
The theorem does not hold for arbitrary maps; for example, the transcendental map formula_2 has wandering domains. However, the result can be generalized to many situations where the functions naturally belong to a finite-dimensional parameter space, most notably to transcendental entire and meromorphic functions with a finite number of singular values. | [
{
"math_id": 0,
"text": "U,f(U),f(f(U)),\\dots,f^n(U), \\dots"
},
{
"math_id": 1,
"text": "f^n = \\underbrace{f \\circ f\\circ \\cdots \\circ f}_n ."
},
{
"math_id": 2,
"text": "f(z)=z+2\\pi\\sin(z)"
}
] | https://en.wikipedia.org/wiki?curid=9795423 |
9795530 | Relative risk reduction | In epidemiology, the relative risk reduction (RRR) or efficacy is the relative decrease in the risk of an adverse event in the exposed group compared to an unexposed group. It is computed as formula_0, where formula_1 is the incidence in the exposed group, and formula_2 is the incidence in the unexposed group. If the risk of an adverse event is increased by the exposure rather than decreased, the term relative risk increase (RRI) is used, and it is computed as formula_3. If the direction of risk change is not assumed, the term relative effect is used, and it is computed in the same way as relative risk increase. | [
{
"math_id": 0,
"text": "(I_u - I_e) / I_u"
},
{
"math_id": 1,
"text": "I_e"
},
{
"math_id": 2,
"text": "I_u"
},
{
"math_id": 3,
"text": "(I_e - I_u)/I_u"
}
] | https://en.wikipedia.org/wiki?curid=9795530 |
979564 | Near-infrared spectroscopy | Analytical method
Near-infrared spectroscopy (NIRS) is a spectroscopic method that uses the near-infrared region of the electromagnetic spectrum (from 780 nm to 2500 nm). Typical applications include medical and physiological diagnostics and research including blood sugar, pulse oximetry, functional neuroimaging, sports medicine, elite sports training, ergonomics, rehabilitation, neonatal research, brain computer interface, urology (bladder contraction), and neurology (neurovascular coupling). There are also applications in other areas, such as pharmaceutical, food and agrochemical quality control, atmospheric chemistry, and combustion research.
Theory.
Near-infrared spectroscopy is based on molecular overtone and combination vibrations. Overtones and combinations exhibit lower intensity compared to the fundamental; as a result, the molar absorptivity in the near-IR region is typically quite small. (NIR absorption bands are typically 10–100 times weaker than the corresponding fundamental mid-IR absorption band.) The lower absorption allows NIR radiation to penetrate much further into a sample than mid-infrared radiation. Near-infrared spectroscopy is, therefore, not a particularly sensitive technique, but it can be very useful in probing bulk material with little to no sample preparation.
The molecular overtone and combination bands seen in the near-IR are typically very broad, leading to complex spectra; it can be difficult to assign specific features to specific chemical components. Multivariate (multiple variables) calibration techniques (e.g., principal components analysis, partial least squares, or artificial neural networks) are often employed to extract the desired chemical information. Careful development of a set of calibration samples and application of multivariate calibration techniques is essential for near-infrared analytical methods.
History.
The discovery of near-infrared energy is ascribed to William Herschel in the 19th century, but the first industrial application began in the 1950s. In the first applications, NIRS was used only as an add-on unit to other optical devices that used other wavelengths such as ultraviolet (UV), visible (Vis), or mid-infrared (MIR) spectrometers. In the 1980s, a single-unit, stand-alone NIRS system was made available.
In the 1980s, Karl Norris (while working at the USDA Instrumentation Research Laboratory, Beltsville, USA) pioneered the use of NIR spectroscopy for quality assessments of agricultural products. Since then, its use has expanded from food and agricultural products to the chemical, polymer, and petroleum industries; the pharmaceutical industry; biomedical sciences; and environmental analysis.
With the introduction of light-fiber optics in the mid-1980s and the monochromator-detector developments in the early 1990s, NIRS became a more powerful tool for scientific research. The method has been used in a number of fields of science including physics, physiology, and medicine. It is only in the last few decades that NIRS began to be used as a medical tool for monitoring patients, with the first clinical application of so-called fNIRS in 1994.
Instrumentation.
Instrumentation for near-IR (NIR) spectroscopy is similar to instruments for the UV-visible and mid-IR ranges. There is a source, a detector, and a dispersive element (such as a prism, or, more commonly, a diffraction grating) to allow the intensity at different wavelengths to be recorded. Fourier transform NIR instruments using an interferometer are also common, especially for wavelengths above ~1000 nm. Depending on the sample, the spectrum can be measured in either reflection or transmission.
Common incandescent or quartz halogen light bulbs are most often used as broadband sources of near-infrared radiation for analytical applications. Light-emitting diodes (LEDs) can also be used. For high precision spectroscopy, wavelength-scanned lasers and frequency combs have recently become powerful sources, albeit with sometimes longer acquisition timescales. When lasers are used, a single detector without any dispersive elements might be sufficient.
The type of detector used depends primarily on the range of wavelengths to be measured. Silicon-based CCDs are suitable for the shorter end of the NIR range, but are not sufficiently sensitive over most of the range (over 1000 nm). InGaAs and PbS devices are more suitable and have higher quantum efficiency for wavelengths above 1100 nm. It is possible to combine silicon-based and InGaAs detectors in the same instrument. Such instruments can record both UV-visible and NIR spectra 'simultaneously'.
Instruments intended for chemical imaging in the NIR may use a 2D array detector with an acousto-optic tunable filter. Multiple images may be recorded sequentially at different narrow wavelength bands.
Many commercial instruments for UV/vis spectroscopy are capable of recording spectra in the NIR range (to perhaps ~900 nm). In the same way, the range of some mid-IR instruments may extend into the NIR. In these instruments, the detector used for the NIR wavelengths is often the same detector used for the instrument's "main" range of interest.
NIRS as an analytical technique.
The use of NIR as an analytical technique did not come from extending the use of mid-IR into the near-IR range, but developed independently. A striking way this was exhibited is that, while mid-IR spectroscopists use wavenumbers ("cm"−1) when displaying spectra, NIR spectroscopists use wavelength ("nm"), as is used in ultraviolet–visible spectroscopy. The early practitioners of IR spectroscopy, who depended on assignment of absorption bands to specific bond types, were frustrated by the complexity of the region. However, as a quantitative tool, the lower molar absorption levels in the region tended to keep absorption maxima "on-scale", enabling quantitative work with little sample preparation. The techniques applied to extract the quantitative information from these complex spectra were unfamiliar to analytical chemists, and the technique was viewed with suspicion in academia.
Generally, a quantitative NIR analysis is accomplished by selecting a group of calibration samples, for which the concentration of the analyte of interest has been determined by a reference method, and finding a correlation between various spectral features and those concentrations using a chemometric tool. The calibration is then validated by using it to predict the analyte values for samples in a validation set, whose values have been determined by the reference method but have not been included in the calibration. A validated calibration is then used to predict the values of samples. The complexity of the spectra is overcome by the use of multivariate calibration. The two tools most often used are multi-wavelength linear regression and partial least squares.
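A hedged sketch of such a calibration workflow using partial least squares; the spectra and reference values below are synthetic stand-ins, not real NIR data:

```python
# Illustrative NIR-style calibration with partial least squares (PLS).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
wavelengths = np.linspace(1100, 2500, 200)           # nm, a typical NIR range
concentration = rng.uniform(0, 1, size=100)          # reference analyte values
pure_band = np.exp(-0.5 * ((wavelengths - 1700) / 40) ** 2)
spectra = (concentration[:, None] * pure_band        # Beer-Lambert-like signal
           + 0.01 * rng.standard_normal((100, 200))) # instrument noise

# Calibration set vs. validation set, as described in the text above.
X_cal, X_val, y_cal, y_val = train_test_split(spectra, concentration, random_state=0)
model = PLSRegression(n_components=3).fit(X_cal, y_cal)
print("validation R^2:", model.score(X_val, y_val))
```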
Applications.
Typical applications of NIR spectroscopy include the analysis of food products, pharmaceuticals, combustion products, and a major branch of astronomical spectroscopy.
Astronomical spectroscopy.
Near-infrared spectroscopy is used in astronomy for studying the atmospheres of cool stars where molecules can form. The vibrational and rotational signatures of molecules such as titanium oxide, cyanide, and carbon monoxide can be seen in this wavelength range and can give a clue towards the star's spectral type. It is also used for studying molecules in other astronomical contexts, such as in molecular clouds where new stars are formed. The astronomical phenomenon known as reddening means that near-infrared wavelengths are less affected by dust in the interstellar medium, such that regions inaccessible by optical spectroscopy can be studied in the near-infrared. Since dust and gas are strongly associated, these dusty regions are exactly those where infrared spectroscopy is most useful. The near-infrared spectra of very young stars provide important information about their ages and masses, which is important for understanding star formation in general. Astronomical spectrographs have also been developed for the detection of exoplanets using the Doppler shift of the parent star due to the radial velocity of the planet around the star.
Agriculture.
Near-infrared spectroscopy is widely applied in agriculture for determining the quality of forages, grains, and grain products, oilseeds, coffee, tea, spices, fruits, vegetables, sugarcane, beverages, fats, and oils, dairy products, eggs, meat, and other agricultural products. It is widely used to quantify the composition of agricultural products because it meets the criteria of being accurate, reliable, rapid, non-destructive, and inexpensive. Abeni and Bergoglio 2001 apply NIRS to chicken breeding as the assay method for characteristics of fat composition.
Remote monitoring.
Techniques have been developed for NIR spectroscopic imaging. Hyperspectral imaging has been applied for a wide range of uses, including the remote investigation of plants and soils. Data can be collected from instruments on airplanes, satellites or unmanned aerial systems to assess ground cover and soil chemistry.
Remote monitoring or remote sensing from the NIR spectroscopic region can also be used to study the atmosphere. For example, measurements of atmospheric gases are made from NIR spectra measured by the OCO-2, GOSAT, and the TCCON.
Materials science.
Techniques have been developed for NIR spectroscopy of microscopic sample areas for film thickness measurements, research into the optical characteristics of nanoparticles and optical coatings for the telecommunications industry.
Medical uses.
The application of NIRS in medicine centres on its ability to provide information about the oxygen saturation of haemoglobin within the microcirculation. Broadly speaking, it can be used to assess oxygenation and microvascular function in the brain (cerebral NIRS) or in the peripheral tissues (peripheral NIRS).
"Cerebral NIRS"
When a specific area of the brain is activated, the localized blood volume in that area changes quickly. Optical imaging can measure the location and activity of specific regions of the brain by continuously monitoring blood hemoglobin levels through the determination of optical absorption coefficients.
NIRS can be used as a quick screening tool for possible intracranial bleeding cases by placing the scanner on four locations on the head. In non-injured patients the brain absorbs the NIR light evenly. When there is an internal bleeding from an injury, the blood may be concentrated in one location causing the NIR light to be absorbed more than other locations, which the scanner detects.
So-called functional NIRS can be used for non-invasive assessment of brain function through the intact skull in human subjects by detecting changes in blood hemoglobin concentrations associated with neural activity, e.g., in branches of cognitive psychology as a partial replacement for fMRI techniques. NIRS can be used on infants, and NIRS is much more portable than fMRI machines; even wireless instrumentation is available, which enables investigations in freely moving subjects. However, NIRS cannot fully replace fMRI because it can only be used to scan cortical tissue, whereas fMRI can be used to measure activation throughout the brain. Special public domain statistical toolboxes for analysis of stand-alone and combined NIRS/MRI measurements have been developed (NIRS-SPM).
The application in functional mapping of the human cortex is called functional NIRS (fNIRS) or diffuse optical tomography (DOT). The term diffuse optical tomography is used for three-dimensional NIRS. The terms NIRS, NIRI, and DOT are often used interchangeably, but they have some distinctions. The most important difference between NIRS and DOT/NIRI is that DOT/NIRI is used mainly to detect changes in optical properties of tissue simultaneously from multiple measurement points and display the results in the form of a map or image over a specific area, whereas NIRS provides quantitative data in absolute terms on up to a few specific points. The latter is also used to investigate other tissues such as, e.g., muscle, breast and tumors. NIRS can be used to quantify blood flow, blood volume, oxygen consumption, reoxygenation rates and muscle recovery time in muscle.
By employing several wavelengths and time resolved (frequency or time domain) and/or spatially resolved methods blood flow, volume and absolute tissue saturation (formula_0 or Tissue Saturation Index (TSI)) can be quantified. Applications of oximetry by NIRS methods include neuroscience, ergonomics, rehabilitation, brain-computer interface, urology, the detection of illnesses that affect the blood circulation (e.g., peripheral vascular disease), the detection and assessment of breast tumors, and the optimization of training in sports medicine.
The use of NIRS in conjunction with a bolus injection of indocyanine green (ICG) has been used to measure cerebral blood flow and cerebral metabolic rate of oxygen consumption (CMRO2).
It has also been shown that CMRO2 can be calculated with combined NIRS/MRI measurements. Additionally, metabolism can be interrogated by resolving an additional mitochondrial chromophore, cytochrome-c-oxidase, using broadband NIRS.
NIRS is starting to be used in pediatric critical care, to help manage patients following cardiac surgery. Indeed, NIRS is able to measure venous oxygen saturation (SVO2), which is determined by the cardiac output, as well as other parameters (FiO2, hemoglobin, oxygen uptake). Therefore, NIRS measurements provide critical care physicians with an estimate of the cardiac output. NIRS is favoured by patients, because it is non-invasive, painless, and does not require ionizing radiation.
Optical coherence tomography (OCT) is another NIR medical imaging technique capable of 3D imaging with high resolution on par with low-power microscopy. Using optical coherence to measure photon pathlength allows OCT to build images of live tissue and clear examinations of tissue morphology. Due to technique differences OCT is limited to imaging 1–2 mm below tissue surfaces, but despite this limitation OCT has become an established medical imaging technique especially for imaging of the retina and anterior segments of the eye, as well as coronaries.
A type of neurofeedback, hemoencephalography or HEG, uses NIR technology to measure brain activation, primarily of the frontal lobes, for the purpose of training cerebral activation of that region.
The instrumentation for NIRS/NIRI/DOT/OCT has advanced tremendously in recent years, particularly in terms of quantification, imaging and miniaturization.
"Peripheral NIRS"
Peripheral microvascular function can be assessed using NIRS. The oxygen saturation of haemoglobin in the tissue (StO2) can provide information about tissue perfusion. A vascular occlusion test (VOT) can be employed to assess microvascular function. Common sites for peripheral NIRS monitoring include the thenar eminence, forearm and calf muscles.
Particle measurement.
NIR is often used in particle sizing in a range of different fields, including studying pharmaceutical and agricultural powders.
Industrial uses.
As opposed to NIRS used in optical topography, general NIRS used in chemical assays does not provide imaging by mapping. For example, a clinical carbon dioxide analyzer requires reference techniques and calibration routines to be able to get accurate CO2 content change. In this case, calibration is performed by adjusting the zero control of the sample being tested after purposefully supplying 0% CO2 or another known amount of CO2 in the sample. Normal compressed gas from distributors contains about 95% O2 and 5% CO2, which can also be used to adjust %CO2 meter reading to be exactly 5% at initial calibration.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "StO_2"
}
] | https://en.wikipedia.org/wiki?curid=979564 |
9795740 | Reaction engine | Type of engine
A reaction engine is an engine or motor that produces thrust by expelling reaction mass (reaction propulsion), in accordance with Newton's third law of motion. This law of motion is commonly paraphrased as: "For every action force there is an equal, but opposite, reaction force."
Examples include jet engines, rocket engines, pump-jets, and more uncommon variations such as Hall effect thrusters, ion drives, mass drivers, and nuclear pulse propulsion.
Discovery.
The discovery of the reaction engine has been attributed to the Romanian inventor Alexandru Ciurcu and to the French journalist Just Buisson.
Energy use.
Propulsive efficiency.
For all reaction engines that carry on-board propellant (such as rocket engines and electric propulsion drives) some energy must go into accelerating the reaction mass. Every engine wastes some energy, but even assuming 100% efficiency, the engine needs energy amounting to
formula_0
(where M is the mass of propellant expended and formula_1 is the exhaust velocity), which is simply the energy to accelerate the exhaust.
Comparing the rocket equation (which shows how much energy ends up in the final vehicle) and the above equation (which shows the total energy required) shows that even with 100% engine efficiency, certainly not all energy supplied ends up in the vehicle – some of it, indeed usually most of it, ends up as kinetic energy of the exhaust.
For a given mission delta-v, there is a particular specific impulse (formula_2) that minimises the overall energy used by the rocket. This comes to an exhaust velocity of about ⅔ of the mission delta-v (see the energy computed from the rocket equation). Drives with a specific impulse that is both high and fixed, such as ion thrusters, have exhaust velocities that can be enormously higher than this ideal, and thus end up power-source limited and give very low thrust. Where the vehicle performance is power limited, e.g. if solar power or nuclear power is used, then in the case of a large formula_3 the maximum acceleration is inversely proportional to it. Hence the time to reach a required delta-v is proportional to formula_3. Thus the latter should not be too large.
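A brief numerical check of this minimum, as a sketch with arbitrary payload mass and mission delta-v; it uses the energy expression formula_24 derived in the Energy section below:

```python
# Total propellant energy vs. exhaust velocity for a fixed mission delta-v,
# using E = (1/2) * m1 * (exp(dv/ve) - 1) * ve**2 from the Energy section.
import numpy as np

m1, dv = 1.0, 4000.0                       # payload mass (kg), mission delta-v (m/s)
ve = np.linspace(1000.0, 20000.0, 20001)   # candidate exhaust velocities (m/s)
E = 0.5 * m1 * (np.exp(dv / ve) - 1.0) * ve**2

print(ve[np.argmin(E)] / dv)               # ~0.63, i.e. about 2/3 of delta-v
```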
On the other hand, if the exhaust velocity can be made to vary so that at each instant it is equal and opposite to the vehicle velocity then the absolute minimum energy usage is achieved. When this is achieved, the exhaust stops in space and has no kinetic energy; and the propulsive efficiency is 100%: all the energy ends up in the vehicle (in principle such a drive would be 100% efficient; in practice there would be thermal losses from within the drive system and residual heat in the exhaust). However, in most cases this uses an impractical quantity of propellant, but is a useful theoretical consideration.
Some drives (such as VASIMR or electrodeless plasma thruster) actually can significantly vary their exhaust velocity. This can help reduce propellant usage and improve acceleration at different stages of the flight. However the best energetic performance and acceleration is still obtained when the exhaust velocity is close to the vehicle speed. Proposed ion and plasma drives usually have exhaust velocities enormously higher than that ideal (in the case of VASIMR the lowest quoted speed is around 15 km/s compared to a mission delta-v from high Earth orbit to Mars of about 4 km/s).
Cycle efficiency.
All reaction engines lose some energy, mostly as heat.
Different reaction engines have different efficiencies and losses. For example, rocket engines can be up to 60–70% energy efficient in terms of accelerating the propellant. The rest is lost as heat and thermal radiation, primarily in the exhaust.
Oberth effect.
Reaction engines are more energy efficient when they emit their reaction mass when the vehicle is travelling at high speed.
This is because the useful mechanical energy generated is simply force times distance, and when a thrust force is generated while the vehicle moves, then:
formula_4
where F is the force and d is the distance moved.
Dividing by length of time of motion we get:
formula_5
Hence:
formula_6
where P is the useful power and v is the speed.
Hence, v should be as high as possible, and a stationary engine does no useful work.
Delta-v and propellant.
Exhausting the entire usable propellant of a spacecraft through the engines in a straight line in free space would produce a net velocity change to the vehicle; this number is termed "delta-v" (formula_7).
If the exhaust velocity is constant then the total formula_7 of a vehicle can be calculated using the rocket equation, where "M" is the mass of propellant, "P" is the mass of the payload (including the rocket structure), and formula_8 is the velocity of the rocket exhaust. This is known as the Tsiolkovsky rocket equation:
formula_9
For historical reasons, as discussed above, formula_8 is sometimes written as
formula_10
where formula_11 is the specific impulse of the rocket, measured in seconds, and formula_12 is the gravitational acceleration at sea level.
For a high delta-v mission, the majority of the spacecraft's mass needs to be reaction mass. Because a rocket must carry all of its reaction mass, most of the initially-expended reaction mass goes towards accelerating reaction mass rather than payload. If the rocket has a payload of mass "P", the spacecraft needs to change its velocity by formula_7, and the rocket engine has exhaust velocity "ve", then the reaction mass "M" which is needed can be calculated using the rocket equation and the formula for formula_11:
formula_13
For formula_7 much smaller than "ve", this equation is roughly linear, and little reaction mass is needed. If formula_7 is comparable to "ve", then there needs to be about twice as much fuel as combined payload and structure (which includes engines, fuel tanks, and so on). Beyond this, the growth is exponential; speeds much higher than the exhaust velocity require very high ratios of fuel mass to payload and structural mass.
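A short sketch of this growth using the expression for "M" above; the payload mass and exhaust velocity are illustrative:

```python
# Reaction mass needed for a given delta-v: M = P * (exp(dv/ve) - 1).
from math import exp

def reaction_mass(payload_kg, dv, ve):
    return payload_kg * (exp(dv / ve) - 1.0)

# 1000 kg payload with a chemical-rocket-like ve of 4500 m/s:
for dv in (2000.0, 4500.0, 9000.0):
    print(dv, round(reaction_mass(1000.0, dv, 4500.0)))
# ~560 kg, ~1718 kg, ~6389 kg: roughly linear at first, then exponential.
```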
For a mission, for example, when launching from or landing on a planet, the effects of gravitational attraction and any atmospheric drag must be overcome by using fuel. It is typical to combine the effects of these and other effects into an effective mission delta-v. For example, a launch mission to low Earth orbit requires about 9.3–10 km/s delta-v. These mission delta-vs are typically numerically integrated on a computer.
Some effects such as Oberth effect can only be significantly utilised by high thrust engines such as rockets; i.e., engines that can produce a high g-force (thrust per unit mass, equal to delta-v per unit time).
Energy.
In the ideal case formula_14 is useful payload and formula_15 is reaction mass (this corresponds to empty tanks having no mass, etc.). The energy required can simply be computed as
formula_16
This corresponds to the kinetic energy the expelled reaction mass would have at a speed equal to the exhaust speed. If the reaction mass had to be accelerated from zero speed to the exhaust speed, all energy produced would go into the reaction mass and nothing would be left for kinetic energy gain by the rocket and payload. However, if the rocket already moves and accelerates (the reaction mass is expelled in the direction opposite to the direction in which the rocket moves) less kinetic energy is added to the reaction mass. To see this, if, for example, formula_8=10 km/s and the speed of the rocket is 3 km/s, then the speed of a small amount of expended reaction mass changes from 3 km/s forwards to 7 km/s rearwards. Thus, although the energy required is 50 MJ per kg reaction mass, only 20 MJ is used for the increase in speed of the reaction mass. The remaining 30 MJ is the increase of the kinetic energy of the rocket and payload.
In general:
formula_17
Thus the specific energy gain of the rocket in any small time interval is the energy gain of the rocket including the remaining fuel, divided by its mass, where the energy gain is equal to the energy produced by the fuel minus the energy gain of the reaction mass. The larger the speed of the rocket, the smaller the energy gain of the reaction mass; if the rocket speed is more than half of the exhaust speed the reaction mass even loses energy on being expelled, to the benefit of the energy gain of the rocket; the larger the speed of the rocket, the larger the energy loss of the reaction mass.
We have
formula_18
where formula_19 is the specific energy of the rocket (potential plus kinetic energy) and formula_7 is a separate variable, not just the change in formula_20. In the case of using the rocket for deceleration; i.e., expelling reaction mass in the direction of the velocity, formula_20 should be taken negative.
The formula is for the ideal case again, with no energy lost on heat, etc. The latter causes a reduction of thrust, so it is a disadvantage even when the objective is to lose energy (deceleration).
If the energy is produced by the mass itself, as in a chemical rocket, the fuel value has to be formula_21, where the mass of the oxidizer must also be taken into account in the fuel value. A typical value is formula_22 = 4.5 km/s, corresponding to a fuel value of 10.1 MJ/kg. The actual fuel value is higher, but much of the energy is lost as waste heat in the exhaust that the nozzle was unable to extract.
The required energy formula_23 is
formula_24
Conclusions:
For formula_25 we have formula_26.
For a given formula_7, the minimum energy is needed if formula_27, requiring an energy
formula_28.
In the case of acceleration in a fixed direction, and starting from zero speed, and in the absence of other forces, this is 54.4% more than just the final kinetic energy of the payload. In this optimal case the initial mass is 4.92 times the final mass.
These results apply for a fixed exhaust speed.
Due to the Oberth effect and starting from a nonzero speed, the required potential energy needed from the propellant may be "less" than the increase in energy in the vehicle and payload. This can be the case when the reaction mass has a lower speed after being expelled than before – rockets are able to liberate some or all of the initial kinetic energy of the propellant.
Also, for a given objective such as moving from one orbit to another, the required formula_7 may depend greatly on the rate at which the engine can produce formula_7 and maneuvers may even be impossible if that rate is too low. For example, a launch to Low Earth orbit (LEO) normally requires a formula_7 of ca. 9.5 km/s (mostly for the speed to be acquired), but if the engine could produce formula_7 at a rate of only slightly more than "g", it would be a slow launch requiring altogether a very large formula_7 (think of hovering without making any progress in speed or altitude; it would cost a formula_7 of 9.8 m/s each second). If the possible rate is only formula_29 or less, the maneuver cannot be carried out at all with this engine.
The power is given by
formula_30
where formula_31 is the thrust and formula_32 the acceleration due to it. Thus the theoretically possible thrust per unit power is 2 divided by the specific impulse in m/s. The thrust efficiency is the actual thrust as percentage of this.
If, e.g., solar power is used, this restricts formula_32; in the case of a large formula_22 the possible acceleration is inversely proportional to it, hence the time to reach a required delta-v is proportional to formula_22; with 100% efficiency, for formula_33:
formula_34
Examples (computed from formula_34): for a power of 1000 W, a mass of 100 kg, formula_7 = 5 km/s and formula_22 = 16 km/s, the time is about 1.5 months; with formula_22 = 50 km/s instead, it is about 5 months.
Thus formula_22 should not be too large.
Power to thrust ratio.
The power to thrust ratio is simply:
formula_35
Thus for any vehicle power P, the thrust that may be provided is:
formula_36
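A small illustration of these relations; the power level and exhaust velocities are arbitrary:

```python
# Ideal thrust available from jet power P at exhaust velocity ve: F = 2P/ve.
def ideal_thrust(power_w, ve_m_s):
    return 2.0 * power_w / ve_m_s

# 1 kW of jet power: a low-ve engine gives far more thrust than an ion drive.
print(ideal_thrust(1000.0, 4500.0))   # ~0.44 N (chemical-rocket-like ve)
print(ideal_thrust(1000.0, 30000.0))  # ~0.067 N (ion-drive-like ve)
```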
Example.
Suppose a 10,000 kg space probe will be sent to Mars. The required formula_7 from LEO is approximately 3000 m/s, using a Hohmann transfer orbit. For the sake of argument, assume the following thrusters are options to be used:
<templatestyles src="Reflist/styles.css" />
Observe that the more fuel-efficient engines can use far less fuel; their mass is almost negligible (relative to the mass of the payload and the engine itself) for some of the engines. However, these require a large total amount of energy. For Earth launch, engines require a thrust to weight ratio of more than one. To do this with the ion or more theoretical electrical drives, the engine would have to be supplied with one to several gigawatts of power, equivalent to a major metropolitan generating station. From the table it can be seen that this is clearly impractical with current power sources.
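The table of candidate thrusters did not survive in this copy, so the following sketch reconstructs the trade-off with purely illustrative exhaust velocities, not the article's original figures:

```python
# Propellant mass and propellant kinetic energy for the 10,000 kg Mars probe.
from math import exp

payload, dv = 10_000.0, 3_000.0  # kg and m/s, per the example above
for label, ve in [("solid-rocket-like", 2500.0),
                  ("bipropellant-like", 4400.0),
                  ("ion-thruster-like", 29000.0)]:
    M = payload * (exp(dv / ve) - 1.0)   # rocket equation
    E = 0.5 * M * ve**2                  # energy given to the propellant
    print(f"{label}: {M:,.0f} kg propellant, {E/1e9:,.1f} GJ")
# Fuel-efficient (high-ve) engines need far less propellant but far more energy.
```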
Alternative approaches include some forms of laser propulsion, where the reaction mass does not provide the energy required to accelerate it, with the energy instead being provided from an external laser or other beam-powered propulsion system. Small models of some of these concepts have flown, although the engineering problems are complex and the ground-based power systems are not a solved problem.
Instead, a much smaller, less powerful generator may be included which will take much longer to generate the total energy needed. This lower power is only sufficient to accelerate a tiny amount of fuel per second, and would be insufficient for launching from Earth. However, over long periods in orbit where there is no friction, the velocity will eventually be achieved. For example, it took the SMART-1 more than a year to reach the Moon, whereas with a chemical rocket it takes a few days. Because the ion drive needs much less fuel, the total launched mass is usually lower, which typically results in a lower overall cost, but the journey takes longer.
Mission planning therefore frequently involves adjusting and choosing the propulsion system so as to minimise the total cost of the project, and can involve trading off launch costs and mission duration against payload fraction.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{matrix} \\frac{1}{2} \\end{matrix} MV_e^2"
},
{
"math_id": 1,
"text": "V_e"
},
{
"math_id": 2,
"text": "I_{sp}"
},
{
"math_id": 3,
"text": "v_{e}"
},
{
"math_id": 4,
"text": "E = F \\times d \\;"
},
{
"math_id": 5,
"text": " \\frac E t = P = \\frac {F \\times d} t = F \\times v"
},
{
"math_id": 6,
"text": " P = F \\times v \\;"
},
{
"math_id": 7,
"text": "\\Delta v"
},
{
"math_id": 8,
"text": "v_e"
},
{
"math_id": 9,
"text": " \\Delta v = v_e \\ln \\left(\\frac{M+P}{P}\\right). "
},
{
"math_id": 10,
"text": " v_e = I_\\text{sp} g_0 "
},
{
"math_id": 11,
"text": "I_\\text{sp}"
},
{
"math_id": 12,
"text": "g_0"
},
{
"math_id": 13,
"text": " M = P \\left(e^\\frac{\\Delta v}{v_e} - 1\\right)."
},
{
"math_id": 14,
"text": "m_1"
},
{
"math_id": 15,
"text": "m_0-m_1"
},
{
"math_id": 16,
"text": "\\frac{1}{2}(m_0 - m_1)v_\\text{e}^2"
},
{
"math_id": 17,
"text": "\n d\\left(\\frac{1}{2}v^2\\right) =\n vdv = vv_\\text{e}\\frac{dm}{m} =\n \\frac{1}{2}\\left[v_\\text{e}^2 - \\left(v - v_\\text{e}\\right)^2 + v^2\\right]\\frac{dm}{m}\n"
},
{
"math_id": 18,
"text": "\\Delta \\epsilon = \\int v\\, d (\\Delta v)"
},
{
"math_id": 19,
"text": "\\epsilon"
},
{
"math_id": 20,
"text": "v"
},
{
"math_id": 21,
"text": "\\scriptstyle{v_\\text{e}^2/2}"
},
{
"math_id": 22,
"text": "v_\\text{e}"
},
{
"math_id": 23,
"text": "E"
},
{
"math_id": 24,
"text": "E = \\frac{1}{2}m_1\\left(e^\\frac{\\Delta v}{v_\\text{e}} - 1\\right)v_\\text{e}^2"
},
{
"math_id": 25,
"text": "\\Delta v \\ll v_e"
},
{
"math_id": 26,
"text": "E \\approx \\frac{1}{2}m_1 v_\\text{e} \\Delta v"
},
{
"math_id": 27,
"text": "v_\\text{e} = 0.6275 \\Delta v"
},
{
"math_id": 28,
"text": "E = 0.772 m_1(\\Delta v)^2"
},
{
"math_id": 29,
"text": "g"
},
{
"math_id": 30,
"text": "P = \\frac{1}{2} m a v_\\text{e} = \\frac{1}{2}F v_\\text{e}"
},
{
"math_id": 31,
"text": "F"
},
{
"math_id": 32,
"text": "a"
},
{
"math_id": 33,
"text": "\\Delta v \\ll v_\\text{e}"
},
{
"math_id": 34,
"text": "t\\approx \\frac{m v_\\text{e} \\Delta v}{2P}"
},
{
"math_id": 35,
"text": "\\frac{P}{F} = \\frac{\\frac{1}{2} {\\dot m v^2}}{\\dot m v} = \\frac{1}{2} v "
},
{
"math_id": 36,
"text": "F = \\frac{P}{\\frac{1}{2} v} = \\frac{2 P} v"
}
] | https://en.wikipedia.org/wiki?curid=9795740 |
9795777 | Power transform | Family of functions to transform data
In statistics, a power transform is a family of functions applied to create a monotonic transformation of data using power functions. It is a data transformation technique used to stabilize variance, make the data more normal distribution-like, improve the validity of measures of association (such as the Pearson correlation between variables), and for other data stabilization procedures.
Power transforms are used in multiple fields, including multi-resolution and wavelet analysis, statistical data analysis, medical research, modeling of physical processes, geochemical data analysis, epidemiology and many other clinical, environmental and social research areas.
Definition.
The power transformation is defined as a continuous function of power parameter "λ", typically given in piece-wise form that makes it continuous at the point of singularity ("λ" = 0). For data vectors ("y"1..., "y""n") in which each "y""i" > 0, the power transform is
formula_0
where
formula_1
is the geometric mean of the observations "y"1, ..., "y""n". The case for formula_2 is the limit as formula_3 approaches 0. To see this, note that formula_4 - using Taylor series. Then formula_5, and everything but formula_6 becomes negligible for formula_3 sufficiently small.
The inclusion of the ("λ" − 1)th power of the geometric mean in the denominator simplifies the scientific interpretation of any equation involving formula_7, because the units of measurement do not change as "λ" changes.
Box and Cox (1964) introduced the geometric mean into this transformation by first including the Jacobian of rescaled power transformation
formula_8
with the likelihood. This Jacobian is as follows:
formula_9
This allows the normal log likelihood at its maximum to be written as follows:
formula_10
From here, absorbing formula_11 into the expression for formula_12 produces an expression that establishes that minimizing the sum of squares of residuals from formula_7 is equivalent to maximizing the sum of the normal log likelihood of deviations from formula_13 and the log of the Jacobian of the transformation.
The value at "Y" = 1 for any "λ" is 0, and the derivative with respect to "Y" there is 1 for any "λ". Sometimes "Y" is a version of some other variable scaled to give "Y" = 1 at some sort of average value.
The transformation is a power transformation, but done in such a way as to make it continuous with the parameter "λ" at "λ" = 0. It has proved popular in regression analysis, including econometrics.
Box and Cox also proposed a more general form of the transformation that incorporates a shift parameter.
formula_14
which holds if "y""i" + α > 0 for all "i". If τ("Y", λ, α) follows a truncated normal distribution, then "Y" is said to follow a Box–Cox distribution.
Bickel and Doksum eliminated the need to use a truncated distribution by extending the range of the transformation to all "y", as follows:
formula_15
where sgn(.) is the sign function. This change in definition has little practical import as long as formula_16 is less than formula_17, which it usually is.
Bickel and Doksum also proved that the parameter estimates are consistent and asymptotically normal under appropriate regularity conditions, though the standard Cramér–Rao lower bound can substantially underestimate the variance when parameter values are small relative to the noise variance. However, this problem of underestimating the variance may not be a substantive problem in many applications.
Box–Cox transformation.
The one-parameter Box–Cox transformations are defined as
formula_18
and the two-parameter Box–Cox transformations as
formula_19
as described in the original article. Moreover, the first transformations hold for formula_20, and the second for formula_21.
The parameter formula_3 is estimated using the profile likelihood function and using goodness-of-fit tests.
Confidence interval.
Confidence interval for the Box–Cox transformation can be asymptotically constructed using the profile likelihood function to find all the possible values of formula_3 that fulfill the following restriction:
formula_22
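A sketch of this estimation using SciPy, whose boxcox routine maximizes the profile likelihood for λ and, when asked, returns the corresponding confidence interval; the data below are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.lognormal(mean=0.0, sigma=0.7, size=500)   # positive, right-skewed data

# lambda is chosen by maximizing the profile likelihood; alpha requests a CI.
y_transformed, lam, ci = stats.boxcox(y, alpha=0.05)
print("estimated lambda:", lam)    # should be near 0, i.e. a log transform
print("95% confidence interval:", ci)
```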
Example.
The BUPA liver data set contains data on liver enzymes ALT and γGT. Suppose we are interested in using log(γGT) to predict ALT. A plot of the data appears in panel (a) of the figure. There appears to be non-constant variance, and a Box–Cox transformation might help.
The log-likelihood of the power parameter appears in panel (b). The horizontal reference line is at a distance of χ₁²/2 from the maximum and can be used to read off an approximate 95% confidence interval for λ. It appears as though a value close to zero would be good, so we take logs.
Possibly, the transformation could be improved by adding a shift parameter to the log transformation. Panel (c) of the figure shows the log-likelihood. In this case, the maximum of the likelihood is close to zero suggesting that a shift parameter is not needed. The final panel shows the transformed data with a superimposed regression line.
Note that although Box–Cox transformations can make big improvements in model fit, there are some issues that the transformation cannot help with. In the current example, the data are rather heavy-tailed so that the assumption of normality is not realistic and a robust regression approach leads to a more precise model.
Econometric application.
Economists often characterize production relationships by some variant of the Box–Cox transformation.
Consider a common representation of production "Q" as dependent on services provided by a capital stock "K" and by labor hours "N":
formula_23
Solving for "Q" by inverting the Box–Cox transformation we find
formula_24
which is known as the "constant elasticity of substitution (CES)" production function.
The CES production function is a homogeneous function of degree one.
When "λ" = 1, this produces the linear production function:
formula_25
When "λ" → 0 this produces the famous Cobb–Douglas production function:
formula_26
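A quick numeric check of this limit; the parameter values are arbitrary:

```python
# CES output approaches the Cobb-Douglas form as lambda -> 0.
K, N, alpha = 4.0, 9.0, 0.3

def ces(lam):
    return (alpha * K**lam + (1 - alpha) * N**lam) ** (1.0 / lam)

for lam in (1.0, 0.1, 0.001):
    print(lam, ces(lam))
print("Cobb-Douglas limit:", K**alpha * N**(1 - alpha))  # ~7.056
```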
Activities and demonstrations.
The SOCR resource pages contain a number of hands-on interactive activities demonstrating the Box–Cox (power) transformation using Java applets and charts. These directly illustrate the effects of this transform on Q–Q plots, X–Y scatterplots, time-series plots and histograms.
Yeo–Johnson transformation.
The Yeo–Johnson transformation
also allows for zero and negative values of formula_27.
formula_3 can be any real number, where formula_28 produces the identity transformation.
The transformation law reads:
formula_29
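A sketch using SciPy's implementation, which estimates λ by maximum likelihood and, unlike Box–Cox, accepts zero and negative values; the data are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 4, 300)])  # mixed-sign data

y_transformed, lam = stats.yeojohnson(y)   # lambda estimated by maximum likelihood
print("estimated lambda:", lam)
```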
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "y_i^{(\\lambda)} =\n\\begin{cases}\n\\dfrac{y_i^\\lambda-1}{\\lambda(\\operatorname{GM}(y))^{\\lambda -1}} , &\\text{if } \\lambda \\neq 0 \\\\[12pt]\n\\operatorname{GM}(y)\\ln{y_i} , &\\text{if } \\lambda = 0\n\\end{cases}\n"
},
{
"math_id": 1,
"text": " \\operatorname{GM}(y) = \\left(\\prod_{i=1}^n y_i\\right)^\\frac{1}{n} = \\sqrt[n]{y_1 y_2 \\cdots y_n} \\, "
},
{
"math_id": 2,
"text": "\\lambda = 0"
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "y_i^\\lambda = \\exp({\\lambda \\ln(y_i)}) = 1 + \\lambda \\ln(y_i) + O((\\lambda \\ln(y_i))^2)"
},
{
"math_id": 5,
"text": "\\dfrac{y_i^\\lambda-1}\\lambda = \\ln(y_i) + O(\\lambda)"
},
{
"math_id": 6,
"text": "\\ln(y_i)"
},
{
"math_id": 7,
"text": "y_i^{(\\lambda)}"
},
{
"math_id": 8,
"text": " \\frac{y^\\lambda-1} \\lambda. "
},
{
"math_id": 9,
"text": " J(\\lambda; y_1, \\ldots, y_n) = \\prod_{i=1}^n |d y_i^{(\\lambda)} / dy|\n= \\prod_{i=1}^n y_i^{\\lambda-1}\n= \\operatorname{GM}(y)^{n(\\lambda-1)}\n"
},
{
"math_id": 10,
"text": "\n\\begin{align}\n \\log ( \\mathcal{L} (\\hat\\mu,\\hat\\sigma)) & = (-n/2)(\\log(2\\pi\\hat\\sigma^2) +1) +\nn(\\lambda-1) \\log(\\operatorname{GM}(y)) \\\\[5pt]\n& = (-n/2)(\\log(2\\pi\\hat\\sigma^2 / \\operatorname{GM}(y)^{2(\\lambda-1)}) + 1).\n\\end{align}\n"
},
{
"math_id": 11,
"text": "\\operatorname{GM}(y)^{2(\\lambda-1)}"
},
{
"math_id": 12,
"text": "\\hat\\sigma^2"
},
{
"math_id": 13,
"text": "(y^\\lambda-1)/\\lambda"
},
{
"math_id": 14,
"text": "\\tau(y_i;\\lambda, \\alpha) = \\begin{cases} \\dfrac{(y_i + \\alpha)^\\lambda - 1}{\\lambda (\\operatorname{GM}(y+\\alpha))^{\\lambda - 1}} & \\text{if } \\lambda\\neq 0, \\\\ \\\\\n\\operatorname{GM}(y+\\alpha)\\ln(y_i + \\alpha)& \\text{if } \\lambda=0,\\end{cases}"
},
{
"math_id": 15,
"text": "\\tau(y_i;\\lambda, \\alpha) = \\begin{cases}\n\\dfrac{\\operatorname{sgn}(y_i + \\alpha)|y_i + \\alpha|^\\lambda - 1}{\\lambda (\\operatorname{GM}(y+\\alpha))^{\\lambda - 1}} & \\text{if } \\lambda\\neq 0, \\\\ \\\\\n\\operatorname{GM}(y+\\alpha)\\operatorname{sgn}(y+\\alpha)\\ln(y_i + \\alpha)& \\text{if } \\lambda=0,\\end{cases}"
},
{
"math_id": 16,
"text": "\\alpha"
},
{
"math_id": 17,
"text": "\\operatorname{min}(y_i)"
},
{
"math_id": 18,
"text": "\ny_i^{(\\lambda)} =\n\\begin{cases}\n \\dfrac{y_i^\\lambda - 1}{\\lambda} & \\text{if } \\lambda \\neq 0, \\\\\n \\ln y_i & \\text{if } \\lambda = 0,\n\\end{cases}\n"
},
{
"math_id": 19,
"text": "\ny_i^{(\\boldsymbol{\\lambda})} =\n\\begin{cases}\n \\dfrac{(y_i + \\lambda_2)^{\\lambda_1} - 1}{\\lambda_1} & \\text{if } \\lambda_1 \\neq 0, \\\\\n \\ln (y_i + \\lambda_2) & \\text{if } \\lambda_1 = 0,\n\\end{cases}\n"
},
{
"math_id": 20,
"text": "y_i > 0"
},
{
"math_id": 21,
"text": "y_i > -\\lambda_2"
},
{
"math_id": 22,
"text": "\\ln \\big(L(\\lambda)\\big) \\ge \\ln \\big(L(\\hat\\lambda)\\big) - \\frac{1}{2} {\\chi^2}_{1,1 - \\alpha}."
},
{
"math_id": 23,
"text": "\\tau(Q)=\\alpha \\tau(K)+ (1-\\alpha)\\tau(N).\\,"
},
{
"math_id": 24,
"text": "Q=\\big(\\alpha K^\\lambda + (1-\\alpha) N^\\lambda\\big)^{1/\\lambda},\\,"
},
{
"math_id": 25,
"text": "Q=\\alpha K + (1-\\alpha)N.\\,"
},
{
"math_id": 26,
"text": "Q=K^\\alpha N^{1-\\alpha}.\\,"
},
{
"math_id": 27,
"text": "y"
},
{
"math_id": 28,
"text": "\\lambda = 1"
},
{
"math_id": 29,
"text": "\ny_i^{(\\lambda)} = \\begin{cases} ((y_i+1)^\\lambda-1)/\\lambda & \\text{if }\\lambda \\neq 0, y \\geq 0 \\\\[4pt] \n \\ln(y_i + 1) & \\text{if }\\lambda = 0, y \\geq 0 \\\\[4pt]\n -((-y_i + 1)^{(2-\\lambda)} - 1) / (2 - \\lambda) & \\text{if }\\lambda \\neq 2, y < 0 \\\\[4pt]\n -\\ln(-y_i + 1) & \\text{if }\\lambda = 2, y < 0\n \\end{cases}\n"
}
] | https://en.wikipedia.org/wiki?curid=9795777 |
9797479 | Rotor (electric) | Non-stationary part of a rotary electric motor
The rotor is a moving component of an electromagnetic system in the electric motor, electric generator, or alternator. Its rotation is due to the interaction between the windings and magnetic fields which produces a torque around the rotor's axis.
Early development.
An early example of electromagnetic rotation was the first rotary machine built by Ányos Jedlik with electromagnets and a commutator, in 1826-27. Other pioneers in the field of electricity include Hippolyte Pixii who built an alternating current generator in 1832, and William Ritchie's construction of an electromagnetic generator with four rotor coils, a commutator and brushes, also in 1832. Development quickly included more useful applications such as Moritz Hermann Jacobi's motor that could lift 10 to 12 pounds with a speed of one foot per second, about 15 watts of mechanical power in 1834. In 1835, Francis Watkins describes an electrical "toy" he created; he is generally regarded as one of the first to understand the interchangeability of motor and generator.
Type and construction of rotors.
Induction (asynchronous) motors, generators and alternators (synchronous) have an electromagnetic system consisting of a stator and rotor. There are two designs for the rotor in an induction motor: squirrel cage and wound. In generators and alternators, the rotor designs are salient pole or cylindrical.
Squirrel-cage rotor.
The squirrel-cage rotor consists of laminated steel in the core with evenly spaced bars of copper or aluminum placed axially around the periphery, permanently shorted at the ends by the end rings. This simple and rugged construction makes it the favorite for most applications. The assembly has a twist: the bars are slanted, or skewed, to reduce magnetic hum and slot harmonics and to reduce the tendency of locking. Housed in the stator, the rotor and stator teeth can lock when they are in equal number and the magnets position themselves equally apart, opposing rotation in both directions. Bearings at each end mount the rotor in its housing, with one end of the shaft protruding to allow the attachment of the load. In some motors, there is an extension at the non-driving end for speed sensors or other electronic controls. The generated torque forces motion through the rotor to the load.
Wound rotor.
The wound rotor is a cylindrical core made of steel lamination with slots to hold the wires for its 3-phase windings which are evenly spaced at 120 electrical degrees apart and connected in a 'Y' configuration. The rotor winding terminals are brought out and attached to three slip rings, with brushes, on the shaft of the rotor. Brushes on the slip rings allow for external three-phase resistors to be connected in series to the rotor windings for providing speed control. The external resistances become a part of the rotor circuit to produce a large torque when starting the motor. As the motor speeds up, the resistances can be reduced to zero.
Salient pole rotor.
A salient pole rotor is built upon a stack of "star shaped" steel laminations, typically with 2, 3, 4, or 6 (sometimes 18 or more) "radial prongs" sticking out from the middle, each of which is wound with copper wire to form a discrete outward facing electromagnet pole. The inward facing ends of each prong are magnetically grounded into the common central body of the rotor. The poles are supplied by direct current or magnetized by permanent magnets. The armature with a three-phase winding is on the stator where voltage is induced. Direct current (DC), from an external exciter or from a diode bridge mounted on the rotor shaft, produces a magnetic field and energizes the rotating field windings and alternating current energizes the armature windings simultaneously.
A salient pole ends in a pole shoe, a high-permeability part with an outer surface shaped as a segment of a cylinder to homogenize the distribution of the magnetic flux to the stator.
Non-salient rotor.
The cylindrical shaped rotor is made of a solid steel shaft with slots running along the outside length of the cylinder for holding the field windings of the rotor which are laminated copper bars inserted into the slots and is secured by wedges. The slots are insulated from the windings and are held at the end of the rotor by slip rings. An external direct current (DC) source is connected to the concentrically mounted slip rings with brushes running along the rings. The brushes make electrical contact with the rotating slip rings. DC current is also supplied through brushless excitation from a rectifier mounted on the machine shaft that converts alternating current to direct current.
Operating principle.
In a three-phase induction machine, alternating current supplied to the stator windings energizes it to create a rotating magnetic flux. The flux generates a magnetic field in the air gap between the stator and the rotor and induces a voltage which produces current through the rotor bars. The rotor circuit is shorted and current flows in the rotor conductors. The action of the rotating flux and the current produces a force that generates a torque to start the motor.
An alternator rotor is made up of a wire coil enveloped around an iron core. The magnetic component of the rotor is made from steel laminations to aid stamping conductor slots to specific shapes and sizes. As currents travel through the wire coil a magnetic field is created around the core, which is referred to as field current. The field current strength controls the power level of the magnetic field. Direct current (DC) drives the field current in one direction, and is delivered to the wire coil by a set of brushes and slip rings. Like any magnet, the magnetic field produced has a north and a south pole. The normal clockwise direction of the motor that the rotor is powering can be manipulated by using the magnets and magnetic fields installed in the design of the rotor, allowing the motor to run in reverse or counterclockwise.
Characteristics of a squirrel-cage rotor:
The rotor rotates at a speed less than that of the stator's rotating magnetic field, i.e. below synchronous speed.
Rotor slip provides the necessary induction of rotor currents for motor torque, which is in proportion to slip.
When rotor speed increases, the slip decreases.
Increasing the slip increases the induced rotor voltage, which in turn increases rotor current, resulting in a higher torque for increased load demands.
Characteristics of a wound rotor:
The rotor operates at constant speed and has a lower starting current.
External resistance added to the rotor circuit increases the starting torque.
Motor running efficiency improves as the external resistance is reduced when the motor speeds up.
It offers higher torque and speed control.
Characteristics of a salient pole rotor:
The rotor operates at speeds below 1,500 rpm (revolutions per minute) and at 40% of its rated torque without excitation.
It has a large diameter and a short axial length.
The air gap is non-uniform.
The rotor has low mechanical strength.
Characteristics of a non-salient (cylindrical) rotor:
The rotor operates at speeds between 1,500 and 3,600 rpm.
It has strong mechanical strength.
The air gap is uniform.
Its diameter is small, its axial length is large, and it requires a higher torque than a salient pole rotor.
Rotor equations.
Rotor bar voltage.
The rotating magnetic field induces a voltage in the rotor bars as it passes over them. This equation applies to induced voltage in the rotor bars.
formula_0
where:
formula_1 = induced voltage
formula_2 = magnetic field
formula_3 = conductor length
formula_4 = synchronous speed
formula_5= conductor speed
Torque in rotor.
A torque is produced by the force generated through the interaction of the magnetic field and the current, as expressed by:
formula_6
formula_7
where:
formula_8 = force
formula_9 = torque
formula_10 = radius of rotor rings
formula_11 = rotor bar current
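Illustrative numbers for these relations; all values below are arbitrary:

```python
# Worked numbers for E = B*L*(v_syn - v_m), F = B*I*L and T = F*r.
B, L, r = 0.8, 0.25, 0.10        # field (T), bar length (m), rotor radius (m)
v_syn, v_m = 20.0, 19.0          # field and conductor speeds (m/s)
I = 150.0                        # rotor bar current (A)

E = B * L * (v_syn - v_m)        # induced voltage per bar: 0.2 V
F = B * I * L                    # force on one bar: 30 N
T = F * r                        # torque contribution of that bar: 3 N*m
print(E, F, T)
```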
Induction motor slip.
A stator magnetic field rotates at synchronous speed, formula_12
formula_13
where:
formula_14 = frequency
formula_15 = number of poles
If formula_16 is the rotor speed, the slip S for an induction motor is expressed as:
formula_17
The mechanical speed of the rotor, in terms of slip and synchronous speed:
formula_18
formula_19
Relative speed of slip:
formula_20
The rotor electrical frequency:
formula_21
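A worked example of these relations; the supply frequency, pole count and rotor speed are illustrative:

```python
# Slip calculations for a 4-pole, 60 Hz induction motor.
f, p = 60.0, 4            # supply frequency (Hz) and number of poles
n_s = 120.0 * f / p       # synchronous speed: 1800 rpm
n_m = 1750.0              # illustrative measured rotor speed (rpm)

s = (n_s - n_m) / n_s     # slip (fractional)
f_r = s * f               # rotor electrical frequency
print(n_s, s * 100, f_r)  # 1800 rpm, ~2.78 % slip, ~1.67 Hz
```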
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " E=BL(V_{syn}-V_m) "
},
{
"math_id": 1,
"text": "E"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "L"
},
{
"math_id": 4,
"text": "V_{syn}"
},
{
"math_id": 5,
"text": "V_m"
},
{
"math_id": 6,
"text": "F=(BxI)L "
},
{
"math_id": 7,
"text": "T=Fxr "
},
{
"math_id": 8,
"text": "F"
},
{
"math_id": 9,
"text": "T"
},
{
"math_id": 10,
"text": "r"
},
{
"math_id": 11,
"text": "I"
},
{
"math_id": 12,
"text": "n_s "
},
{
"math_id": 13,
"text": " n_s=\\frac{120f}{p} "
},
{
"math_id": 14,
"text": "f"
},
{
"math_id": 15,
"text": "p"
},
{
"math_id": 16,
"text": " n_m "
},
{
"math_id": 17,
"text": " s=\\frac{n_s - n_m}{n_s} \\times 100\\% "
},
{
"math_id": 18,
"text": " n_m = (1-s)n_s "
},
{
"math_id": 19,
"text": " \\omega_m=(1-s)\\omega_s "
},
{
"math_id": 20,
"text": " n_{slip}=n_s-n_m "
},
{
"math_id": 21,
"text": " f_r= sf_e "
}
] | https://en.wikipedia.org/wiki?curid=9797479 |
979771 | Marginal likelihood | In Bayesian probability theory
A marginal likelihood is a likelihood function that has been integrated over the parameter space. In Bayesian statistics, it represents the probability of generating the observed sample for all possible values of the parameters; it can be understood as the probability of the model itself and is therefore often referred to as model evidence or simply evidence.
Due to the integration over the parameter space, the marginal likelihood does not directly depend upon the parameters. If the focus is not on model comparison, the marginal likelihood is simply the normalizing constant that ensures that the posterior is a proper probability. It is related to the partition function in statistical mechanics.
Concept.
Given a set of independent identically distributed data points formula_0 where formula_1 according to some probability distribution parameterized by formula_2, where formula_2 itself is a random variable described by a distribution, i.e. formula_3 the marginal likelihood in general asks what the probability formula_4 is, where formula_2 has been marginalized out (integrated out):
formula_5
The above definition is phrased in the context of Bayesian statistics in which case formula_6 is called prior density and formula_7 is the likelihood. The marginal likelihood quantifies the agreement between data and prior in a geometric sense made precise in de Carvalho et al. (2019). In classical (frequentist) statistics, the concept of marginal likelihood occurs instead in the context of a joint parameter formula_8, where formula_9 is the actual parameter of interest, and formula_10 is a non-interesting nuisance parameter. If there exists a probability distribution for formula_10, it is often desirable to consider the likelihood function only in terms of formula_9, by marginalizing out formula_10:
formula_11
Unfortunately, marginal likelihoods are generally difficult to compute. Exact solutions are known for a small class of distributions, particularly when the marginalized-out parameter is the conjugate prior of the distribution of the data. In other cases, some kind of numerical integration method is needed, either a general method such as Gaussian integration or a Monte Carlo method, or a method specialized to statistical problems such as the Laplace approximation, Gibbs/Metropolis sampling, or the EM algorithm.
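As a hedged illustration, the Monte Carlo approach can be checked against a conjugate case where the answer is exact; the beta-Bernoulli numbers below are illustrative:

```python
# Monte Carlo estimate of a marginal likelihood vs. its exact conjugate value.
import numpy as np
from scipy.special import betaln

rng = np.random.default_rng(3)
a, b = 2.0, 2.0                          # Beta prior on theta
x = rng.binomial(1, 0.7, size=20)        # observed Bernoulli data
k, n = x.sum(), x.size

# Exact evidence from conjugacy: B(a + k, b + n - k) / B(a, b).
exact = np.exp(betaln(a + k, b + n - k) - betaln(a, b))

# Monte Carlo: average the likelihood over draws from the prior.
theta = rng.beta(a, b, size=200_000)
mc = np.mean(theta**k * (1 - theta)**(n - k))
print(exact, mc)   # the two values should agree closely
```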
It is also possible to apply the above considerations to a single random variable (data point) formula_12, rather than a set of observations. In a Bayesian context, this is equivalent to the prior predictive distribution of a data point.
Applications.
Bayesian model comparison.
In Bayesian model comparison, the marginalized variables formula_2 are parameters for a particular type of model, and the remaining variable formula_13 is the identity of the model itself. In this case, the marginalized likelihood is the probability of the data given the model type, not assuming any particular model parameters. Writing formula_2 for the model parameters, the marginal likelihood for the model "M" is
formula_14
It is in this context that the term "model evidence" is normally used. This quantity is important because the posterior odds ratio for a model "M"1 against another model "M"2 involves a ratio of marginal likelihoods, called the Bayes factor:
formula_15
which can be stated schematically as
posterior odds = prior odds × Bayes factor
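Continuing the beta-Bernoulli sketch above, a hedged illustration of this relation, with two illustrative priors playing the role of the models:

```python
# Bayes factor as a ratio of marginal likelihoods (model evidences).
import numpy as np
from scipy.special import betaln

def evidence(a, b, k, n):
    return np.exp(betaln(a + k, b + n - k) - betaln(a, b))

k, n = 14, 20                                      # e.g. 14 successes in 20 trials
bf = evidence(8, 2, k, n) / evidence(2, 8, k, n)   # M1: theta high, M2: theta low
posterior_odds = 1.0 * bf                          # equal prior odds assumed
print(bf, posterior_odds)
```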
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{X}=(x_1,\\ldots,x_n),"
},
{
"math_id": 1,
"text": "x_i \\sim p(x|\\theta)"
},
{
"math_id": 2,
"text": "\\theta"
},
{
"math_id": 3,
"text": "\\theta \\sim p(\\theta\\mid\\alpha),"
},
{
"math_id": 4,
"text": "p(\\mathbf{X}\\mid\\alpha)"
},
{
"math_id": 5,
"text": "p(\\mathbf{X}\\mid\\alpha) = \\int_\\theta p(\\mathbf{X}\\mid\\theta) \\, p(\\theta\\mid\\alpha)\\ \\operatorname{d}\\!\\theta "
},
{
"math_id": 6,
"text": "p(\\theta\\mid\\alpha)"
},
{
"math_id": 7,
"text": "p(\\mathbf{X}\\mid\\theta)"
},
{
"math_id": 8,
"text": "\\theta = (\\psi,\\lambda)"
},
{
"math_id": 9,
"text": "\\psi"
},
{
"math_id": 10,
"text": "\\lambda"
},
{
"math_id": 11,
"text": "\\mathcal{L}(\\psi;\\mathbf{X}) = p(\\mathbf{X}\\mid\\psi) = \\int_\\lambda p(\\mathbf{X}\\mid\\lambda,\\psi) \\, p(\\lambda\\mid\\psi) \\ \\operatorname{d}\\!\\lambda "
},
{
"math_id": 12,
"text": "x"
},
{
"math_id": 13,
"text": "M"
},
{
"math_id": 14,
"text": " p(\\mathbf{X}\\mid M) = \\int p(\\mathbf{X}\\mid\\theta, M) \\, p(\\theta\\mid M) \\, \\operatorname{d}\\!\\theta "
},
{
"math_id": 15,
"text": " \\frac{p(M_1\\mid \\mathbf{X})}{p(M_2\\mid \\mathbf{X})} = \\frac{p(M_1)}{p(M_2)} \\, \\frac{p(\\mathbf{X}\\mid M_1)}{p(\\mathbf{X}\\mid M_2)} "
}
] | https://en.wikipedia.org/wiki?curid=979771 |
9798796 | Heavy Rydberg system | A heavy Rydberg system consists of a weakly bound positive and negative ion orbiting their common centre of mass. Such systems share many properties with the conventional Rydberg atom and consequently are sometimes referred to as heavy Rydberg atoms. While such a system is a type of ionically bound molecule, it should not be confused with a molecular Rydberg state, which is simply a molecule with one or more highly excited electrons.
The peculiar properties of the Rydberg atom come from the large charge separation and the resulting hydrogenic potential. The extremely large separation between the two components of a heavy Rydberg system results in an almost perfect "1/r" hydrogenic potential seen by each ion. The positive ion can be viewed as analogous to the nucleus of a hydrogen atom, with the negative ion playing the role of the electron.
Species.
The most commonly studied system to date is the H+H− system, consisting of a proton bound with an H− ion. The system was first observed in 2000 by a group at the University of Waterloo in Canada.
The formation of the H− ion can be understood classically; as the single electron in a hydrogen atom cannot fully shield the positively charged nucleus, another electron brought into close proximity will feel an attractive force. While this classical description is nice for getting a feel for the interactions involved, it is an oversimplification; many other atoms have a greater electron affinity than hydrogen. In general the process of forming a negative ion is driven by the filling of atomic electron shells to form a lower energy configuration.
Only a small number of molecules have been used to produce heavy Rydberg systems although in principle any atom with a positive electron affinity can bind with a positive ion. Species used include , and . Fluorine and oxygen are particularly favoured due to their high electron affinity, high ionisation energy and consequently high electronegativity.
Production.
The difficulty in the production of heavy Rydberg systems arises in finding an energetic pathway by which a molecule can be excited with just the right energy to form an ion pair, without sufficient internal energy to cause autodissociation (a process analogous to autoionization in atoms) or rapid dissociation due to collisions or local fields.
Currently production of heavy Rydberg systems relies on complex vacuum ultra-violet (so called because it is strongly absorbed in air and requires the entire system to be enclosed within a vacuum chamber) or multi-photon transitions (relying on absorption of multiple photons almost simultaneously), both of which are rather inefficient and result in systems with high internal energy.
Features.
The bond length in a heavy Rydberg system is 10,000 times larger than in a typical diatomic molecule. As well as producing the characteristic hydrogen-like behaviour, this also makes them extremely sensitive to perturbation by external electric and magnetic fields.
Heavy Rydberg systems have a relatively large reduced mass, given by:
formula_0
This leads to a very slow time evolution, which makes them easy to manipulate both spatially and energetically, while their low binding energy makes them relatively simple to detect through field dissociation and detection of the resulting ions, in a process known as "threshold ion-pair production spectroscopy".
Kepler's third law states that the square of the period of an orbit is proportional to the cube of the semi-major axis; this can be applied to the Coulomb force:
formula_1
where formula_2 is the time-period, formula_3 is the reduced mass, formula_4 is the semi-major axis and formula_5.
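As a rough numerical illustration (ours, not from the source): taking an H+/H− pair with a semi-major axis of about 1 μm — the "10,000 times larger than a typical diatomic bond" scale mentioned above — the Kepler-type formula gives an orbital period of order 10 ns, enormously slower than the femtosecond timescales of ordinary molecular dynamics. The masses and the 1 μm separation are illustrative assumptions.

```python
import math

# SI constants
E = 1.602176634e-19          # elementary charge, C
K = 8.9875517873681764e9     # Coulomb constant 1/(4*pi*eps0), N m^2 / C^2
M_P = 1.67262192369e-27      # proton mass, kg
M_E = 9.1093837015e-31       # electron mass, kg

# Assumed system: H+ orbiting H- (one proton plus two electrons)
m1, m2 = M_P, M_P + 2 * M_E
mu = m1 * m2 / (m1 + m2)     # reduced mass

a = 1e-6                     # assumed semi-major axis: ~1 micrometre

# tau^2 = 4*pi^2*mu*a^3 / (k*Z*e^2), with Z = 1
tau = math.sqrt(4 * math.pi**2 * mu * a**3 / (K * E**2))
print(f"reduced mass   = {mu:.3e} kg")
print(f"orbital period = {tau:.3e} s")   # ~1.2e-8 s
```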
Classically we can say that a system with a large reduced mass has a long orbital period. Quantum mechanically, a large reduced mass in a system leads to narrow spacing of the energy levels and the rate of time-evolution of the wavefunction depends on this energy spacing. This slow time-evolution makes heavy Rydberg systems ideal for experimentally probing the dynamics of quantum systems. | [
{
"math_id": 0,
"text": " \\mu = {m_1m_2 \\over m_1+m_2} "
},
{
"math_id": 1,
"text": " \\tau^2 = {4\\pi^2\\mu \\over kZe^2}a^3 "
},
{
"math_id": 2,
"text": "\\tau"
},
{
"math_id": 3,
"text": "\\mu"
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "k = 1/(4\\pi\\epsilon_0)"
}
] | https://en.wikipedia.org/wiki?curid=9798796 |
980328 | Cauchy space | Concept in general topology and analysis
In general topology and analysis, a Cauchy space is a generalization of metric spaces and uniform spaces for which the notion of Cauchy convergence still makes sense. Cauchy spaces were introduced by H. H. Keller in 1968, as an axiomatic tool derived from the idea of a Cauchy filter, in order to study completeness in topological spaces. The category of Cauchy spaces and "Cauchy continuous maps" is Cartesian closed, and contains the category of proximity spaces.
Definition.
Throughout, formula_0 is a set, formula_1 denotes the power set of formula_2 and all filters are assumed to be proper/non-degenerate (i.e. a filter may not contain the empty set).
A Cauchy space is a pair formula_3 consisting of a set formula_0 together with a family formula_4 of (proper) filters on formula_0 having all of the following properties: (1) for each formula_5 the discrete ultrafilter at formula_6 denoted formula_7 is in formula_8 (2) if formula_9 formula_10 is a proper filter, and formula_11 is a subset of formula_12 then formula_13 (3) if formula_14 and if each member of formula_11 intersects each member of formula_12 then formula_15
An element of formula_16 is called a Cauchy filter, and a map formula_17 between Cauchy spaces formula_3 and formula_18 is Cauchy continuous if formula_19; that is, the image of each Cauchy filter in formula_0 is a Cauchy filter base in formula_20
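The axioms can be checked mechanically when formula_0 is finite, since every proper filter on a finite set is principal and can be encoded by its (nonempty) generating set. The following Python sketch is an editorial illustration under that encoding; all names are ours.

```python
from itertools import combinations

def nonempty_subsets(s):
    """All nonempty subsets of a finite set, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(1, len(s) + 1)
            for c in combinations(s, r)]

def is_cauchy_structure(X, C):
    """Check the three axioms for principal filters on a finite set X.

    The filter F_A = {S : S is a superset of A} is encoded by its
    generating set A, so C is a family of nonempty subsets of X.
    """
    C = {frozenset(A) for A in C}
    # Axiom 1: the ultrafilter at each point x, generated by {x}, is Cauchy.
    if not all(frozenset({x}) in C for x in X):
        return False
    # Axiom 2: any filter finer than a Cauchy filter is Cauchy.
    # F_A is contained in F_B exactly when B is a subset of A.
    for A in C:
        if any(B not in C for B in nonempty_subsets(A)):
            return False
    # Axiom 3: if every member of F_A meets every member of F_B
    # (equivalently A and B intersect), then F_A n F_B = F_(A u B) is Cauchy.
    for A in C:
        for B in C:
            if A & B and (A | B) not in C:
                return False
    return True

X = {0, 1, 2}
print(is_cauchy_structure(X, nonempty_subsets(X)))  # True: every filter Cauchy
```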
Properties and definitions.
Any Cauchy space is also a convergence space, where a filter formula_11 converges to formula_21 if formula_22 is Cauchy. In particular, a Cauchy space carries a natural topology.
Examples.
Any directed set formula_23 may be made into a Cauchy space by declaring a filter formula_11 on formula_23 to be Cauchy if, given any element formula_24 there is an element formula_25 such that formula_26 is either a singleton or a subset of the tail formula_27 Then, given any other Cauchy space formula_2 the Cauchy-continuous functions from formula_23 to that space correspond to the Cauchy nets in it indexed by formula_28 If the target space is complete, such a function may be extended to the completion of formula_29 which may be written formula_30 the value of the extension at formula_31 will be the limit of the net. In the case where formula_23 is the set formula_32 of natural numbers (so that a Cauchy net indexed by formula_23 is the same thing as a Cauchy sequence), formula_23 receives the same Cauchy structure as the metric space formula_33
Category of Cauchy spaces.
The natural notion of morphism between Cauchy spaces is that of a Cauchy-continuous function, a concept that had earlier been studied for uniform spaces.
References.
<templatestyles src="Reflist/styles.css" />
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "\\wp(X)"
},
{
"math_id": 2,
"text": "X,"
},
{
"math_id": 3,
"text": "(X, C)"
},
{
"math_id": 4,
"text": "C \\subseteq \\wp(\\wp(X))"
},
{
"math_id": 5,
"text": "x \\in X,"
},
{
"math_id": 6,
"text": "x,"
},
{
"math_id": 7,
"text": "U(x),"
},
{
"math_id": 8,
"text": "C."
},
{
"math_id": 9,
"text": "F \\in C,"
},
{
"math_id": 10,
"text": "G"
},
{
"math_id": 11,
"text": "F"
},
{
"math_id": 12,
"text": "G,"
},
{
"math_id": 13,
"text": "G \\in C."
},
{
"math_id": 14,
"text": "F, G \\in C"
},
{
"math_id": 15,
"text": "F \\cap G \\in C."
},
{
"math_id": 16,
"text": "C"
},
{
"math_id": 17,
"text": "f"
},
{
"math_id": 18,
"text": "(Y, D)"
},
{
"math_id": 19,
"text": "\\uparrow f(C) \\subseteq D"
},
{
"math_id": 20,
"text": "Y."
},
{
"math_id": 21,
"text": "x"
},
{
"math_id": 22,
"text": "F \\cap U(x)"
},
{
"math_id": 23,
"text": "A"
},
{
"math_id": 24,
"text": "n \\in A,"
},
{
"math_id": 25,
"text": "U \\in F"
},
{
"math_id": 26,
"text": "U"
},
{
"math_id": 27,
"text": "\\{m : m \\geq n\\}."
},
{
"math_id": 28,
"text": "A."
},
{
"math_id": 29,
"text": "A,"
},
{
"math_id": 30,
"text": "A \\cup \\{\\infty\\};"
},
{
"math_id": 31,
"text": "\\infty"
},
{
"math_id": 32,
"text": "\\{1, 2, 3, \\ldots\\}"
},
{
"math_id": 33,
"text": "\\{1, 1/2, 1/3, \\ldots\\}."
}
] | https://en.wikipedia.org/wiki?curid=980328 |
980388 | Ideal quotient | In abstract algebra, if "I" and "J" are ideals of a commutative ring "R", their ideal quotient ("I" : "J") is the set
formula_0
Then ("I" : "J") is itself an ideal in "R". The ideal quotient is viewed as a quotient because formula_1 if and only if formula_2. The ideal quotient is useful for calculating primary decompositions. It also arises in the description of the set difference in algebraic geometry (see below).
("I" : "J") is sometimes referred to as a colon ideal because of the notation. In the context of fractional ideals, there is a related notion of the inverse of a fractional ideal.
Properties.
The ideal quotient satisfies the following properties: formula_3 as formula_4-modules, where formula_5 denotes the annihilator of formula_6 as an formula_4-module; formula_7; formula_8; formula_9; formula_10; formula_11; formula_12; and, when formula_4 is an integral domain, formula_13.
Calculating the quotient.
The above properties can be used to calculate the quotient of ideals in a polynomial ring given their generators. For example, if "I" = ("f"1, "f"2, "f"3) and "J" = ("g"1, "g"2) are ideals in "k"["x"1, ..., "x""n"], then
formula_14
Then elimination theory can be used to calculate the intersection of "I" with ("g"1) and ("g"2):
formula_15
Calculate a Gröbner basis for formula_16 with respect to a lexicographic order in which "t" is greater than the other variables. Then the basis elements that do not involve "t" generate formula_17.
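For instance, the elimination step can be carried out with SymPy's general Gröbner-basis routine; the sketch below (an illustration, not part of the source) computes the quotient of the principal ideals "I" = ("x"⁴"y"³) and "J" = ("x"²"y"²), for which the answer is ("x"²"y"). For a "J" with several generators, repeat the computation for each generator and intersect the results as described above.

```python
from sympy import symbols, groebner, div

t, x, y = symbols('t x y')
f = x**4 * y**3              # generator of I
g = x**2 * y**2              # single generator of J

# Groebner basis of t*I + (1 - t)*(g) in lex order with t largest
G = groebner([t * f, (1 - t) * g], t, x, y, order='lex')

# Elements free of t generate the intersection I ∩ (g)
intersection = [p for p in G.exprs if t not in p.free_symbols]

# Dividing each by g gives generators of (I : (g))
quotient = [div(p, g, x, y)[0] for p in intersection]
print(quotient)              # [x**2*y]
```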
Geometric interpretation.
The ideal quotient corresponds to set difference in algebraic geometry. More precisely,
formula_18
where formula_19 denotes the taking of the ideal associated to a subset.
formula_20
where formula_21 denotes the Zariski closure, and formula_22 denotes the taking of the variety defined by an ideal. If "I" is not radical, then the same property holds if we saturate the ideal "J":
formula_23
where formula_24.
Examples.
In formula_25, formula_26. In algebraic number theory, the ideal quotient is useful while studying fractional ideals: in particular, if formula_27 is an invertible fractional ideal, its inverse is given by the ideal quotient formula_28.
One geometric application of the ideal quotient is removing an irreducible component of an affine scheme. For example, let formula_29 be ideals in formula_30, corresponding respectively to the union of the three coordinate planes and to the union of the planes "x" = 0 and "y" = 0 in formula_31. Then formula_32; this is the ideal of the remaining plane "z" = 0, showing how the ideal quotient removes the components of "Z"("I") lying in "Z"("J"). A simple computational example of this kind is formula_33.
Ideal quotients also arise in the computation of saturations. Given a homogeneous ideal formula_34, its saturation is defined as formula_35 where formula_36 is the irrelevant maximal ideal of formula_37. Since the vanishing locus of formula_38 in the projective space formula_39 is empty, the ideals formula_40 and formula_41 define the same projective variety in formula_42.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(I : J) = \\{r \\in R \\mid rJ \\subseteq I\\}"
},
{
"math_id": 1,
"text": "KJ \\subseteq I"
},
{
"math_id": 2,
"text": "K \\subseteq (I : J)"
},
{
"math_id": 3,
"text": "(I :J)=\\mathrm{Ann}_R((J+I)/I)"
},
{
"math_id": 4,
"text": "R"
},
{
"math_id": 5,
"text": "\\mathrm{Ann}_R(M)"
},
{
"math_id": 6,
"text": "M"
},
{
"math_id": 7,
"text": "J \\subseteq I \\Leftrightarrow (I : J) = R"
},
{
"math_id": 8,
"text": "(I : I) = (R : I) = (I : 0) = R"
},
{
"math_id": 9,
"text": "(I : R) = I"
},
{
"math_id": 10,
"text": "(I : (JK)) = ((I : J) : K)"
},
{
"math_id": 11,
"text": "(I : (J + K)) = (I : J) \\cap (I : K)"
},
{
"math_id": 12,
"text": "((I \\cap J) : K) = (I : K) \\cap (J : K)"
},
{
"math_id": 13,
"text": "(I : (r)) = \\frac{1}{r}(I \\cap (r))"
},
{
"math_id": 14,
"text": "I : J = (I : (g_1)) \\cap (I : (g_2)) = \\left(\\frac{1}{g_1}(I \\cap (g_1))\\right) \\cap \\left(\\frac{1}{g_2}(I \\cap (g_2))\\right)"
},
{
"math_id": 15,
"text": "I \\cap (g_1) = \\left(tI + (1-t) (g_1)\\right) \\cap k[x_1, \\dots, x_n], \\quad I \\cap (g_2) = \\left(tI + (1-t) (g_2)\\right) \\cap k[x_1, \\dots, x_n]"
},
{
"math_id": 16,
"text": "tI+(1-t)(g_1)"
},
{
"math_id": 17,
"text": "I \\cap (g_1)"
},
{
"math_id": 18,
"text": "I(V) : I(W) = I(V \\setminus W)"
},
{
"math_id": 19,
"text": "I(\\bullet)"
},
{
"math_id": 20,
"text": "Z(I : J) = \\mathrm{cl}(Z(I) \\setminus Z(J))"
},
{
"math_id": 21,
"text": "\\mathrm{cl}(\\bullet)"
},
{
"math_id": 22,
"text": "Z(\\bullet)"
},
{
"math_id": 23,
"text": "Z(I : J^{\\infty}) = \\mathrm{cl}(Z(I) \\setminus Z(J))"
},
{
"math_id": 24,
"text": "(I : J^\\infty )= \\cup_{n \\geq 1} (I:J^n)"
},
{
"math_id": 25,
"text": "\\mathbb{Z}"
},
{
"math_id": 26,
"text": "((6):(2)) = (3)"
},
{
"math_id": 27,
"text": "I"
},
{
"math_id": 28,
"text": "((1):I) = I^{-1}"
},
{
"math_id": 29,
"text": "I = (xyz), J = (xy)"
},
{
"math_id": 30,
"text": "\\mathbb{C}[x,y,z]"
},
{
"math_id": 31,
"text": "\\mathbb{A}^3_\\mathbb{C}"
},
{
"math_id": 32,
"text": "(I:J) = (z)"
},
{
"math_id": 33,
"text": "((x^4y^3):(x^2y^2)) = (x^2y)"
},
{
"math_id": 34,
"text": "I \\subset R[x_0,\\ldots,x_n]"
},
{
"math_id": 35,
"text": "(I: \\mathfrak{m}^\\infty) = \\cup_{i \\geq 1} (I:\\mathfrak{m}^i)"
},
{
"math_id": 36,
"text": "\\mathfrak{m} = (x_0,\\ldots,x_n) \\subset R[x_0,\\ldots, x_n]"
},
{
"math_id": 37,
"text": "R[x_0,\\ldots, x_n]"
},
{
"math_id": 38,
"text": "\\mathfrak{m}"
},
{
"math_id": 39,
"text": "\\mathbb{P}^n_R"
},
{
"math_id": 40,
"text": "(x^4 + y^4 + z^4)\\mathfrak{m}^k"
},
{
"math_id": 41,
"text": "(x^4 + y^4 + z^4)"
},
{
"math_id": 42,
"text": "\\mathbb{P}^2_\\mathbb{C}"
}
] | https://en.wikipedia.org/wiki?curid=980388 |
9804 | Electric charge | Electromagnetic property of matter
Electric charge (symbol "q", sometimes "Q") is the physical property of matter that causes it to experience a force when placed in an electromagnetic field. Electric charge can be "positive" or "negative". Like charges repel each other and unlike charges attract each other. An object with no net charge is referred to as electrically neutral. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that do not require consideration of quantum effects.
Electric charge is a conserved property: the net charge of an isolated system, the amount of positive charge minus the amount of negative charge, cannot change. Electric charge is carried by subatomic particles. In ordinary matter, negative charge is carried by electrons, and positive charge is carried by the protons in the nuclei of atoms. If there are more electrons than protons in a piece of matter, it will have a negative charge; if there are fewer, it will have a positive charge; and if there are equal numbers, it will be neutral. Charge is "quantized": it comes in integer multiples of individual small units called the elementary charge, "e", about 1.602×10⁻¹⁹ C, which is the smallest charge that can exist freely. Particles called quarks have smaller charges, multiples of "e"/3, but they are found only combined in particles that have a charge that is an integer multiple of "e". In the Standard Model, charge is an absolutely conserved quantum number. The proton has a charge of +"e", and the electron has a charge of −"e".
Today, a negative charge is defined as the charge carried by an electron and a positive charge is that carried by a proton. Before these particles were discovered, a positive charge was defined by Benjamin Franklin as the charge acquired by a glass rod when it is rubbed with a silk cloth.
Electric charges produce electric fields. A moving charge also produces a magnetic field. The interaction of electric charges with an electromagnetic field (a combination of an electric and a magnetic field) is the source of the electromagnetic (or Lorentz) force, which is one of the four fundamental interactions in physics. The study of photon-mediated interactions among charged particles is called quantum electrodynamics.
The SI derived unit of electric charge is the coulomb (C) named after French physicist Charles-Augustin de Coulomb. In electrical engineering it is also common to use the ampere-hour (A⋅h). In physics and chemistry it is common to use the elementary charge ("e") as a unit. Chemistry also uses the Faraday constant, which is the charge of one mole of elementary charges.
Overview.
Charge is the fundamental property of matter that exhibits electrostatic attraction or repulsion in the presence of other matter with charge. Electric charge is a characteristic property of many subatomic particles. The charges of free-standing particles are integer multiples of the elementary charge "e"; we say that electric charge is "quantized". Michael Faraday, in his electrolysis experiments, was the first to note the discrete nature of electric charge. Robert Millikan's oil drop experiment demonstrated this fact directly, and measured the elementary charge. It has been discovered that one type of particle, quarks, have fractional charges of either −"e"/3 or +2"e"/3, but it is believed they always occur in combinations whose total charge is an integral multiple of "e"; free-standing quarks have never been observed.
By convention, the charge of an electron is negative, "−e", while that of a proton is positive, "+e". Charged particles whose charges have the same sign repel one another, and particles whose charges have different signs attract. Coulomb's law quantifies the electrostatic force between two particles by asserting that the force is proportional to the product of their charges, and inversely proportional to the square of the distance between them. The charge of an antiparticle equals that of the corresponding particle, but with opposite sign.
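As a small worked illustration of Coulomb's law (our numbers, using standard constants): the magnitude of the electrostatic force between a proton and an electron separated by one Bohr radius is about 8.2×10⁻⁸ N.

```python
K = 8.9875517873681764e9     # Coulomb constant, N m^2 / C^2
E = 1.602176634e-19          # elementary charge, C
R = 5.29177210903e-11        # Bohr radius, m

force = K * E**2 / R**2      # magnitude; the signs differ, so it is attractive
print(f"{force:.3e} N")      # ~8.24e-8 N
```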
The electric charge of a macroscopic object is the sum of the electric charges of the particles that it is made up of. This charge is often small, because matter is made of atoms, and atoms typically have equal numbers of protons and electrons, in which case their charges cancel out, yielding a net charge of zero, thus making the atom neutral.
An "ion" is an atom (or group of atoms) that has lost one or more electrons, giving it a net positive charge (cation), or that has gained one or more electrons, giving it a net negative charge (anion). "Monatomic ions" are formed from single atoms, while "polyatomic ions" are formed from two or more atoms that have been bonded together, in each case yielding an ion with a positive or negative net charge.
During the formation of macroscopic objects, constituent atoms and ions usually combine to form structures composed of neutral "ionic compounds" electrically bound to neutral atoms. Thus macroscopic objects tend toward being neutral overall, but macroscopic objects are rarely perfectly net neutral.
Sometimes macroscopic objects contain ions distributed throughout the material, rigidly bound in place, giving an overall net positive or negative charge to the object. Also, macroscopic objects made of conductive elements can more or less easily (depending on the element) take on or give off electrons, and then maintain a net negative or positive charge indefinitely. When the net electric charge of an object is non-zero and motionless, the phenomenon is known as static electricity. This can easily be produced by rubbing two dissimilar materials together, such as rubbing amber with fur or glass with silk. In this way, non-conductive materials can be charged to a significant degree, either positively or negatively. Charge taken from one material is moved to the other material, leaving an opposite charge of the same magnitude behind. The law of "conservation of charge" always applies, giving the object from which a negative charge is taken a positive charge of the same magnitude, and vice versa.
Even when an object's net charge is zero, the charge can be distributed non-uniformly in the object (e.g., due to an external electromagnetic field, or bound polar molecules). In such cases, the object is said to be polarized. The charge due to polarization is known as bound charge, while the charge on an object produced by electrons gained or lost from outside the object is called "free charge". The motion of electrons in conductive metals in a specific direction is known as electric current.
Unit.
The SI unit of quantity of electric charge is the coulomb (symbol: C). The coulomb is defined as the quantity of charge that passes through the cross section of an electrical conductor carrying one ampere for one second. This unit was proposed in 1946 and ratified in 1948. The lowercase symbol "q" is often used to denote a quantity of electric charge. The quantity of electric charge can be directly measured with an electrometer, or indirectly measured with a ballistic galvanometer.
The elementary charge (the electric charge of the proton) is defined as a fundamental constant in the SI. The value for elementary charge, when expressed in SI units, is exactly 1.602176634×10⁻¹⁹ C.
After the quantized character of charge was discovered, George Stoney proposed in 1891 the unit 'electron' for this fundamental unit of electrical charge. J. J. Thomson subsequently discovered the particle that we now call the electron in 1897. The unit is today referred to as elementary charge, fundamental unit of charge, or simply denoted "e", with the charge of an electron being −"e". The charge of an isolated system should be a multiple of the elementary charge "e", even if at large scales charge seems to behave as a continuous quantity. In some contexts it is meaningful to speak of fractions of an elementary charge; for example, in the fractional quantum Hall effect.
The unit faraday is sometimes used in electrochemistry. One faraday is the magnitude of the charge of one mole of elementary charges, i.e. approximately 96485.332 C.
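Since both the Avogadro constant and the elementary charge are fixed exactly in the 2019 SI, the faraday can be computed directly (a simple illustrative calculation):

```python
N_A = 6.02214076e23          # Avogadro constant, 1/mol (exact)
E = 1.602176634e-19          # elementary charge, C (exact)

faraday = N_A * E            # charge of one mole of elementary charges
print(f"{faraday:.5f} C")    # 96485.33212 C
```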
History.
From ancient times, people were familiar with four types of phenomena that today would all be explained using the concept of electric charge: (a) lightning, (b) the torpedo fish (or electric ray), (c) St Elmo's Fire, and (d) that amber rubbed with fur would attract small, light objects. The first account of the amber effect is often attributed to the ancient Greek mathematician Thales of Miletus, who lived from c. 624 to c. 546 BC, but there are doubts about whether Thales left any writings; his account of amber is known only from a report written in the early 200s AD. This account can be taken as evidence that the phenomenon was known since at least c. 600 BC, but Thales explained it as evidence for inanimate objects having a soul. In other words, there was no indication of any conception of electric charge. More generally, the ancient Greeks did not understand the connections among these four kinds of phenomena. The Greeks observed that charged amber buttons could attract light objects such as hair. They also found that if they rubbed the amber for long enough, they could even get an electric spark to jump, though there is also a claim that no mention of electric sparks appeared until the late 17th century. This property derives from the triboelectric effect.
In the late 1100s, the substance jet, a compacted form of coal, was noted to have an amber effect, and in the middle of the 1500s Girolamo Fracastoro discovered that diamond also showed this effect. Some efforts were made by Fracastoro and others, especially Gerolamo Cardano, to develop explanations for this phenomenon.
In contrast to astronomy, mechanics, and optics, which had been studied quantitatively since antiquity, the start of ongoing qualitative and quantitative research into electrical phenomena can be marked with the publication of "De Magnete" by the English scientist William Gilbert in 1600. In this book, there was a small section where Gilbert returned to the amber effect (as he called it) in addressing many of the earlier theories, and coined the Neo-Latin word "electrica" (from ἤλεκτρον ("ēlektron"), the Greek word for "amber"). The Latin word was translated into English as electrics. Gilbert is also credited with the term "electrical", while the term "electricity" came later, first attributed to Sir Thomas Browne in his Pseudodoxia Epidemica from 1646. (For more linguistic details see Etymology of electricity.) Gilbert hypothesized that this amber effect could be explained by an effluvium (a small stream of particles that flows from the electric object, without diminishing its bulk or weight) that acts on other objects. This idea of a material electrical effluvium was influential in the 17th and 18th centuries. It was a precursor to ideas developed in the 18th century about "electric fluid" (Dufay, Nollet, Franklin) and "electric charge".
Around 1663 Otto von Guericke invented what was probably the first electrostatic generator, but he did not recognize it primarily as an electrical device and only conducted minimal electrical experiments with it. Other European pioneers were Robert Boyle, who in 1675 published the first book in English that was devoted solely to electrical phenomena. His work was largely a repetition of Gilbert's studies, but he also identified several more "electrics", and noted mutual attraction between two bodies.
In 1729 Stephen Gray was experimenting with static electricity, which he generated using a glass tube. He noticed that a cork, used to protect the tube from dust and moisture, also became electrified (charged). Further experiments (e.g., extending the cork by putting thin sticks into it) showed—for the first time—that electrical effluvia (as Gray called it) could be transmitted (conducted) over a distance. Gray managed to transmit charge with twine (765 feet) and wire (865 feet). Through these experiments, Gray discovered the importance of different materials, which facilitated or hindered the conduction of electrical effluvia. John Theophilus Desaguliers, who repeated many of Gray's experiments, is credited with coining the terms conductors and insulators to refer to the effects of different materials in these experiments. Gray also discovered electrical induction (i.e., where charge could be transmitted from one object to another without any direct physical contact). For example, he showed that by bringing a charged glass tube close to, but not touching, a lump of lead that was sustained by a thread, it was possible to make the lead become electrified (e.g., to attract and repel brass filings). He attempted to explain this phenomenon with the idea of electrical effluvia.
Gray's discoveries introduced an important shift in the historical development of knowledge about electric charge. The fact that electrical effluvia could be transferred from one object to another, opened the theoretical possibility that this property was not inseparably connected to the bodies that were electrified by rubbing. In 1733 Charles François de Cisternay du Fay, inspired by Gray's work, made a series of experiments (reported in "Mémoires de l'Académie Royale des Sciences"), showing that more or less all substances could be 'electrified' by rubbing, except for metals and fluids and proposed that electricity comes in two varieties that cancel each other, which he expressed in terms of a two-fluid theory. When glass was rubbed with silk, du Fay said that the glass was charged with "vitreous electricity", and, when amber was rubbed with fur, the amber was charged with "resinous electricity". In contemporary understanding, positive charge is now defined as the charge of a glass rod after being rubbed with a silk cloth, but it is arbitrary which type of charge is called positive and which is called negative. Another important two-fluid theory from this time was proposed by Jean-Antoine Nollet (1745).
Up until about 1745, the main explanation for electrical attraction and repulsion was the idea that electrified bodies gave off an effluvium.
Benjamin Franklin started electrical experiments in late 1746, and by 1750 had developed a one-fluid theory of electricity, based on an experiment that showed that a rubbed glass received the same, but opposite, charge strength as the cloth used to rub the glass. Franklin imagined electricity as being a type of invisible fluid present in all matter and coined the term charge itself (as well as battery and some others); for example, he believed that it was the glass in a Leyden jar that held the accumulated charge. He posited that rubbing insulating surfaces together caused this fluid to change location, and that a flow of this fluid constitutes an electric current. He also posited that when matter contained an excess of the fluid it was positively charged and when it had a deficit it was negatively charged. He identified the term positive with vitreous electricity and negative with resinous electricity after performing an experiment with a glass tube he had received from his overseas colleague Peter Collinson. The experiment had participant A charge the glass tube and participant B receive a shock to the knuckle from the charged tube. Franklin identified participant B to be positively charged after having been shocked by the tube. There is some ambiguity about whether William Watson independently arrived at the same one-fluid explanation around the same time (1747). Watson, after seeing Franklin's letter to Collinson, claims that he had presented the same explanation as Franklin in spring 1747. Franklin had studied some of Watson's works prior to making his own experiments and analysis, which was probably significant for Franklin's own theorizing. One physicist suggests that Watson first proposed a one-fluid theory, which Franklin then elaborated further and more influentially. A historian of science argues that Watson missed a subtle difference between his ideas and Franklin's, so that Watson misinterpreted his ideas as being similar to Franklin's. In any case, there was no animosity between Watson and Franklin, and the Franklin model of electrical action, formulated in early 1747, eventually became widely accepted at that time. After Franklin's work, effluvia-based explanations were rarely put forward.
It is now known that the Franklin model was fundamentally correct. There is only one kind of electrical charge, and only one variable is required to keep track of the amount of charge.
Until 1800 it was only possible to study conduction of electric charge by using an electrostatic discharge. In 1800 Alessandro Volta was the first to show that charge could be maintained in continuous motion through a closed path.
In 1833, Michael Faraday sought to remove any doubt that electricity is identical, regardless of the source by which it is produced. He discussed a variety of known forms, which he characterized as common electricity (e.g., static electricity, piezoelectricity, magnetic induction), voltaic electricity (e.g., electric current from a voltaic pile), and animal electricity (e.g., bioelectricity).
In 1838, Faraday raised a question about whether electricity was a fluid or fluids or a property of matter, like gravity. He investigated whether matter could be charged with one kind of charge independently of the other. He came to the conclusion that electric charge was a relation between two or more bodies, because he could not charge one body without having an opposite charge in another body.
In 1838, Faraday also put forth a theoretical explanation of electric force, while expressing neutrality about whether it originates from one, two, or no fluids. He focused on the idea that the normal state of particles is to be nonpolarized, and that when polarized, they seek to return to their natural, nonpolarized state.
In developing a field theory approach to electrodynamics (starting in the mid-1850s), James Clerk Maxwell stops considering electric charge as a special substance that accumulates in objects, and starts to understand electric charge as a consequence of the transformation of energy in the field. This pre-quantum understanding considered magnitude of electric charge to be a continuous quantity, even at the microscopic level.
The role of charge in static electricity.
Static electricity refers to the electric charge of an object and the related electrostatic discharge when two objects are brought together that are not at equilibrium. An electrostatic discharge creates a change in the charge of each of the two objects.
Electrification by sliding.
When a piece of glass and a piece of resin—neither of which exhibits any electrical properties—are rubbed together and left with the rubbed surfaces in contact, they still exhibit no electrical properties. When separated, they attract each other.
A second piece of glass rubbed with a second piece of resin, then separated and suspended near the former pieces of glass and resin causes these phenomena: the two pieces of glass repel each other; each piece of glass attracts each piece of resin; and the two pieces of resin repel each other.
This attraction and repulsion is an "electrical phenomenon", and the bodies that exhibit them are said to be "electrified", or "electrically charged". Bodies may be electrified in many other ways, as well as by sliding. The electrical properties of the two pieces of glass are similar to each other but opposite to those of the two pieces of resin: The glass attracts what the resin repels and repels what the resin attracts.
If a body electrified in any manner whatsoever behaves as the glass does, that is, if it repels the glass and attracts the resin, the body is said to be "vitreously" electrified, and if it attracts the glass and repels the resin it is said to be "resinously" electrified. All electrified bodies are either vitreously or resinously electrified.
An established convention in the scientific community defines vitreous electrification as positive, and resinous electrification as negative. The exactly opposite properties of the two kinds of electrification justify our indicating them by opposite signs, but the application of the positive sign to one rather than to the other kind must be considered as a matter of arbitrary convention—just as it is a matter of convention in a mathematical diagram to reckon positive distances towards the right hand.
The role of charge in electric current.
Electric current is the flow of electric charge through an object. The most common charge carriers are the positively charged proton and the negatively charged electron. The movement of any of these charged particles constitutes an electric current. In many situations, it suffices to speak of the "conventional current" without regard to whether it is carried by positive charges moving in the direction of the conventional current or by negative charges moving in the opposite direction. This macroscopic viewpoint is an approximation that simplifies electromagnetic concepts and calculations.
At the opposite extreme, if one looks at the microscopic situation, one sees there are many ways of carrying an electric current, including: a flow of electrons; a flow of electron holes that act like positive particles; and both negative and positive particles (ions or other charged particles) flowing in opposite directions in an electrolytic solution or a plasma.
Beware that, in the common and important case of metallic wires, the direction of the conventional current is opposite to the drift velocity of the actual charge carriers; i.e., the electrons. This is a source of confusion for beginners.
Conservation of electric charge.
The total electric charge of an isolated system remains constant regardless of changes within the system itself. This law is inherent to all processes known to physics and can be derived in a local form from gauge invariance of the wave function. The conservation of charge results in the charge-current continuity equation. More generally, the rate of decrease of the charge density "ρ" integrated over a volume "V" is equal to the surface integral of the current density J over the closed surface "S" = ∂"V", which is in turn equal to the net current "I":
formula_0 formula_1 formula_2
Thus, the conservation of electric charge, as expressed by the continuity equation, gives the result:
formula_3
The charge transferred between times formula_4 and formula_5 is obtained by integrating both sides:
formula_6
where "I" is the net outward current through a closed surface and "q" is the electric charge contained within the volume defined by the surface.
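Numerically, the charge transferred follows by integrating the current over time; the sketch below uses a made-up decaying current purely for illustration.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 2001)           # seconds
I = 1.5 * np.exp(-t)                      # amperes (illustrative profile)

# trapezoidal rule for q = ∫ I dt between t_i and t_f
q = float(np.sum(0.5 * (I[1:] + I[:-1]) * np.diff(t)))
print(f"{q:.4f} C")                       # analytic: 1.5*(1 - e^-2) ≈ 1.2970
```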
Relativistic invariance.
Aside from the properties described in articles about electromagnetism, charge is a relativistic invariant. This means that any particle that has charge "q" has the same charge regardless of how fast it is travelling. This property has been experimentally verified by showing that the charge of one helium nucleus (two protons and two neutrons bound together in a nucleus and moving around at high speeds) is the same as two deuterium nuclei (one proton and one neutron bound together, but moving much more slowly than they would if they were in a helium nucleus).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "- \\frac{d}{dt} \\int_V \\rho \\, \\mathrm{d}V = "
},
{
"math_id": 1,
"text": "\\scriptstyle \\partial V"
},
{
"math_id": 2,
"text": "\\mathbf J \\cdot\\mathrm{d}\\mathbf S = \\int J \\mathrm{d}S \\cos\\theta = I."
},
{
"math_id": 3,
"text": "I = -\\frac{\\mathrm{d}q}{\\mathrm{d}t}."
},
{
"math_id": 4,
"text": "t_\\mathrm{i}"
},
{
"math_id": 5,
"text": "t_\\mathrm{f}"
},
{
"math_id": 6,
"text": "q = \\int_{t_{\\mathrm{i}}}^{t_{\\mathrm{f}}} I\\, \\mathrm{d}t "
}
] | https://en.wikipedia.org/wiki?curid=9804 |
980508 | Cubic graph | Graph with all vertices of degree 3
In the mathematical field of graph theory, a cubic graph is a graph in which all vertices have degree three. In other words, a cubic graph is a 3-regular graph. Cubic graphs are also called trivalent graphs.
A bicubic graph is a cubic bipartite graph.
Symmetry.
In 1932, Ronald M. Foster began collecting examples of cubic symmetric graphs, forming the start of the Foster census. Many well-known individual graphs are cubic and symmetric, including the utility graph, the Petersen graph, the Heawood graph, the Möbius–Kantor graph, the Pappus graph, the Desargues graph, the Nauru graph, the Coxeter graph, the Tutte–Coxeter graph, the Dyck graph, the Foster graph and the Biggs–Smith graph. W. T. Tutte classified the symmetric cubic graphs by the smallest integer "s" such that each two oriented paths of length "s" can be mapped to each other by exactly one symmetry of the graph. He showed that "s" is at most 5, and provided examples of graphs with each possible value of "s" from 1 to 5.
Semi-symmetric cubic graphs include the Gray graph (the smallest semi-symmetric cubic graph), the Ljubljana graph, and the Tutte 12-cage.
The Frucht graph is one of the five smallest cubic graphs without any symmetries: it possesses only a single graph automorphism, the identity automorphism.
Coloring and independent sets.
According to Brooks' theorem every connected cubic graph other than the complete graph "K"4 has a vertex coloring with at most three colors. Therefore, every connected cubic graph other than "K"4 has an independent set of at least "n"/3 vertices, where "n" is the number of vertices in the graph: for instance, the largest color class in a 3-coloring has at least this many vertices.
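The bound is easy to see experimentally; the following sketch (ours) colors the Petersen graph with NetworkX's DSATUR heuristic and extracts the largest color class. Note that greedy heuristics do not in general guarantee Brooks' three-color bound.

```python
import networkx as nx

G = nx.petersen_graph()                        # a connected cubic graph
assert all(d == 3 for _, d in G.degree())

coloring = nx.coloring.greedy_color(G, strategy="DSATUR")
classes = {}
for v, c in coloring.items():
    classes.setdefault(c, set()).add(v)

largest = max(classes.values(), key=len)       # a color class = independent set
print(len(classes), "colors; independent set of size", len(largest))
# With 3 colors on 10 vertices, some class has >= 10/3, i.e. >= 4 vertices.
```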
According to Vizing's theorem every cubic graph needs either three or four colors for an edge coloring. A 3-edge-coloring is known as a Tait coloring, and forms a partition of the edges of the graph into three perfect matchings. By Kőnig's line coloring theorem every bicubic graph has a Tait coloring.
The bridgeless cubic graphs that do not have a Tait coloring are known as snarks. They include the Petersen graph, Tietze's graph, the Blanuša snarks, the flower snark, the double-star snark, the Szekeres snark and the Watkins snark. There is an infinite number of distinct snarks.
Topology and geometry.
Cubic graphs arise naturally in topology in several ways. For example, the cubic graphs with "2g-2" vertices describe the different ways of cutting a surface of genus "g ≥ 2" into pairs of pants. If one considers a graph to be a 1-dimensional CW complex, cubic graphs are "generic" in that most 1-cell attaching maps are disjoint from the 0-skeleton of the graph. Cubic graphs are also formed as the graphs of simple polyhedra in three dimensions, polyhedra such as the regular dodecahedron with the property that three faces meet at every vertex.
An arbitrary graph embedding on a two-dimensional surface may be represented as a cubic graph structure known as a graph-encoded map. In this structure, each vertex of a cubic graph represents a flag of the embedding, a mutually incident triple of a vertex, edge, and face of the surface. The three neighbors of each flag are the three flags that may be obtained from it by changing one of the members of this mutually incident triple and leaving the other two members unchanged.
Hamiltonicity.
There has been much research on Hamiltonicity of cubic graphs. In 1880, P.G. Tait conjectured that every cubic polyhedral graph has a Hamiltonian circuit. William Thomas Tutte provided a counter-example to Tait's conjecture, the 46-vertex Tutte graph, in 1946. In 1971, Tutte conjectured that all bicubic graphs are Hamiltonian. However, Joseph Horton provided a counterexample on 96 vertices, the Horton graph. Later, Mark Ellingham constructed two more counterexamples: the Ellingham–Horton graphs. Barnette's conjecture, a still-open combination of Tait's and Tutte's conjecture, states that every bicubic polyhedral graph is Hamiltonian. When a cubic graph is Hamiltonian, LCF notation allows it to be represented concisely.
If a cubic graph is chosen uniformly at random among all "n"-vertex cubic graphs, then it is very likely to be Hamiltonian: the proportion of the "n"-vertex cubic graphs that are Hamiltonian tends to one in the limit as "n" goes to infinity.
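This is easy to probe empirically with a brute-force search on small instances (an illustration, ours; practical only for small "n"). The same routine confirms that the Petersen graph is not Hamiltonian.

```python
import networkx as nx

def has_hamiltonian_cycle(G):
    """Backtracking search; exponential time, small graphs only."""
    n, start = len(G), next(iter(G))
    def extend(path, visited):
        if len(path) == n:
            return G.has_edge(path[-1], start)
        return any(extend(path + [w], visited | {w})
                   for w in G[path[-1]] if w not in visited)
    return extend([start], {start})

print(has_hamiltonian_cycle(nx.petersen_graph()))        # False
hits = sum(has_hamiltonian_cycle(nx.random_regular_graph(3, 12, seed=s))
           for s in range(20))
print(f"{hits}/20 random cubic graphs on 12 vertices are Hamiltonian")
```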
David Eppstein conjectured that every "n"-vertex cubic graph has at most 2^("n"/3) (approximately 1.260^"n") distinct Hamiltonian cycles, and provided examples of cubic graphs with that many cycles. The best proven estimate for the number of distinct Hamiltonian cycles is formula_0.
Other properties.
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
What is the largest possible pathwidth of an formula_1-vertex cubic graph?
The pathwidth of any "n"-vertex cubic graph is at most "n"/6. The best known lower bound on the pathwidth of cubic graphs is 0.082"n". It is not known how to reduce this gap between this lower bound and the "n"/6 upper bound.
It follows from the handshaking lemma, proven by Leonhard Euler in 1736 as part of the first paper on graph theory, that every cubic graph has an even number of vertices.
Petersen's theorem states that every cubic bridgeless graph has a perfect matching.
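A perfect matching can be exhibited with NetworkX's general maximum-matching routine (our illustration); for the bridgeless cubic Petersen graph, the matching found has five edges covering all ten vertices, as Petersen's theorem guarantees.

```python
import networkx as nx

G = nx.petersen_graph()
assert all(d == 3 for _, d in G.degree())      # cubic
assert not list(nx.bridges(G))                 # bridgeless

M = nx.max_weight_matching(G, maxcardinality=True)
print(len(M) * 2 == len(G))                    # True: the matching is perfect
```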
Lovász and Plummer conjectured that every cubic bridgeless graph has an exponential number of perfect matchings. The conjecture was recently proved, showing that every cubic bridgeless graph with "n" vertices has at least 2^("n"/3656) perfect matchings.
Algorithms and complexity.
Several researchers have studied the complexity of exponential time algorithms restricted to cubic graphs. For instance, by applying dynamic programming to a path decomposition of the graph, Fomin and Høie showed how to find their maximum independent sets in time 2^("n"/6 + o("n")). The travelling salesman problem in cubic graphs can be solved in time O(1.2312^"n") and polynomial space.
Several important graph optimization problems are APX hard, meaning that, although they have approximation algorithms whose approximation ratio is bounded by a constant, they do not have polynomial time approximation schemes whose approximation ratio tends to 1 unless P=NP. These include the problems of finding a minimum vertex cover, maximum independent set, minimum dominating set, and maximum cut.
The crossing number (the minimum number of edge crossings in any drawing of the graph) is also NP-hard to compute for cubic graphs, but may be approximated.
The Travelling Salesman Problem on cubic graphs has been proven to be NP-hard to approximate to within any factor less than 1153/1152.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " O({1.276}^n)"
},
{
"math_id": 1,
"text": "n"
}
] | https://en.wikipedia.org/wiki?curid=980508 |
980666 | Pioneer anomaly | Deviation in spacecraft deceleration
The Pioneer anomaly, or Pioneer effect, was the observed deviation from predicted accelerations of the "Pioneer 10" and "Pioneer 11" spacecraft after they passed about 20 astronomical units (roughly 3 billion kilometres) on their trajectories out of the Solar System. The apparent anomaly was a matter of much interest for many years but has been subsequently explained by anisotropic radiation pressure caused by the spacecraft's heat loss.
Both "Pioneer" spacecraft are escaping the Solar System but are slowing under the influence of the Sun's gravity. Upon very close examination of navigational data, the spacecraft were found to be slowing slightly more than expected. The effect is an extremely small acceleration towards the Sun, of about (8.74 ± 1.33)×10⁻¹⁰ m/s², which is equivalent to a reduction of the outbound velocity by roughly 1 km/h over a period of ten years. The two spacecraft were launched in 1972 and 1973. The anomalous acceleration was first noticed as early as 1980 but not seriously investigated until 1994. The last communication with either spacecraft was in 2003, but analysis of recorded data continues.
Various explanations, both of spacecraft behavior and of gravitation itself, were proposed to explain the anomaly. Over the period from 1998 to 2012, one particular explanation became accepted. The spacecraft, which are surrounded by an ultra-high vacuum and are each powered by a radioisotope thermoelectric generator (RTG), can shed heat only via thermal radiation. If, due to the design of the spacecraft, more heat is emitted in a particular direction by what is known as a radiative anisotropy, then the spacecraft would accelerate slightly in the direction opposite of the excess emitted radiation due to the recoil of thermal photons. If the excess radiation and attendant radiation pressure were pointed in a general direction opposite the Sun, the spacecraft's velocity away from the Sun would be decreasing at a rate greater than could be explained by previously recognized forces, such as gravity and trace friction due to the interplanetary medium (imperfect vacuum).
By 2012, several papers by different groups, all reanalyzing the thermal radiation pressure forces inherent in the spacecraft, showed that a careful accounting of this explains the entire anomaly; thus the cause is mundane and does not point to any new phenomenon or need to update the laws of physics. The most detailed analysis to date, by some of the original investigators, explicitly looks at two methods of estimating thermal forces, concluding that there is "no statistically significant difference between the two estimates and [...] that once the thermal recoil force is properly accounted for, no anomalous acceleration remains."
Description.
"Pioneer 10" and "11" were sent on missions to Jupiter and Jupiter/Saturn respectively. Both spacecraft were spin-stabilised in order to keep their high-gain antennas pointed towards Earth using gyroscopic forces. Although the spacecraft included thrusters, after the planetary encounters they were used only for semiannual conical scanning maneuvers to track Earth in its orbit, leaving them on a long "cruise" phase through the outer Solar System. During this period, both spacecraft were repeatedly contacted to obtain various measurements on their physical environment, providing valuable information long after their initial missions were complete.
Because the spacecraft were flying with almost no additional stabilization thrusts during their "cruise", it is possible to characterize the density of the solar medium by its effect on the spacecraft's motion. In the outer Solar System this effect would be easily calculable, based on ground-based measurements of the deep space environment. When these effects were taken into account, along with all other known effects, the calculated position of the Pioneers did not agree with measurements based on timing the return of the radio signals being sent back from the spacecraft. These consistently showed that both spacecraft were closer to the inner Solar System than they should be, by thousands of kilometres—small compared to their distance from the Sun, but still statistically significant. This apparent discrepancy grew over time as the measurements were repeated, suggesting that whatever was causing the anomaly was still acting on the spacecraft.
As the anomaly was growing, it appeared that the spacecraft were moving more slowly than expected. Measurements of the spacecraft's speed using the Doppler effect demonstrated the same thing: the observed redshift was less than expected, which meant that the Pioneers had slowed down more than expected.
When all known forces acting on the spacecraft were taken into consideration, a very small but unexplained force remained. It appeared to cause an approximately constant sunward acceleration of (8.74 ± 1.33)×10⁻¹⁰ m/s² for both spacecraft. If the positions of the spacecraft were predicted one year in advance based on measured velocity and known forces (mostly gravity), they were actually found to be some 400 km closer to the sun at the end of the year. This anomaly is now believed to be accounted for by thermal recoil forces.
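The size of the effect is easy to reproduce with back-of-the-envelope arithmetic from the figures quoted above (an illustrative calculation):

```python
a_p = 8.74e-10               # anomalous sunward acceleration, m/s^2
year = 3.156e7               # seconds per year

dv = a_p * 10 * year         # velocity change accumulated over ten years
dx = 0.5 * a_p * year**2     # position offset accumulated over one year
print(f"dv ~ {dv:.3f} m/s ~ {dv * 3.6:.2f} km/h over ten years")
print(f"dx ~ {dx / 1e3:.0f} km after one year")
```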
Explanation: thermal recoil force.
Starting in 1998, there were suggestions that the thermal recoil force was underestimated, and perhaps could account for the entire anomaly. However, accurately accounting for thermal forces was hard, because it needed telemetry records of the spacecraft temperatures and a detailed thermal model, neither of which was available at the time. Furthermore, all thermal models predicted a decrease in the effect with time, which did not appear in the initial analysis.
One by one these objections were addressed. Many of the old telemetry records were found, and converted to modern formats. This gave power consumption figures and some temperatures for parts of the spacecraft. Several groups built detailed thermal models, which could be checked against the known temperatures and powers, and allowed a quantitative calculation of the recoil force. The longer span of navigational records showed the acceleration was in fact decreasing.
In July 2012, Slava Turyshev "et al." published a paper in "Physical Review Letters" that explained the anomaly. The work explored the effect of the thermal recoil force on Pioneer 10, and concluded that "once the thermal recoil force is properly accounted for, no anomalous acceleration remains." Although the paper by Turyshev "et al." has the most detailed analysis to date, the explanation based on thermal recoil force has the support of other independent research groups, using a variety of computational techniques. Examples include "thermal recoil pressure is not the cause of the Rosetta flyby anomaly but likely resolves the anomalous acceleration observed for "Pioneer 10"." and "It is shown that the whole anomalous acceleration can be explained by thermal effects".
Indications from other missions.
The Pioneers were uniquely suited to discover the effect because they have been flying for long periods of time without additional course corrections. Most deep-space probes launched after the Pioneers either stopped at one of the planets, or used thrusting throughout their mission.
The Voyagers flew a mission profile similar to the Pioneers, but were not spin stabilized. Instead, they required frequent firings of their thrusters for attitude control to stay aligned with Earth. Spacecraft like the Voyagers acquire small and unpredictable changes in speed as a side effect of the frequent attitude control firings. This 'noise' makes it impractical to measure small accelerations such as the Pioneer effect; accelerations as large as 10⁻⁹ m/s² would be undetectable.
Newer spacecraft have used spin stabilization for some or all of their mission, including both "Galileo" and "Ulysses". These spacecraft indicate a similar effect, although for various reasons (such as their relative proximity to the Sun) firm conclusions cannot be drawn from these sources. The "Cassini" mission has reaction wheels as well as thrusters for attitude control, and during cruise could rely for long periods on the reaction wheels alone, thus enabling precision measurements. It also had radioisotope thermoelectric generators (RTGs) mounted close to the spacecraft body, radiating kilowatts of heat in hard-to-predict directions.
After "Cassini" arrived at Saturn, it shed a large fraction of its mass from the fuel used in the insertion burn and the release of the "Huygens" probe. This increases the acceleration caused by the radiation forces because they are acting on less mass. This change in acceleration allows the radiation forces to be measured independently of any gravitational acceleration. Comparing cruise and Saturn-orbit results shows that for "Cassini", almost all the unmodelled acceleration was due to radiation forces, with only a small residual acceleration, much smaller than the Pioneer acceleration, and with opposite sign.
The non-gravitational acceleration of the deep space probe "New Horizons" has been measured at about sunward, somewhat larger than the effect on Pioneer. Modelling of thermal effects indicates an expected sunward acceleration of , and given the uncertainties, the acceleration appears consistent with thermal radiation as the source of the non-gravitational forces measured. The measured acceleration is slowly decreasing as would be expected from the decreasing thermal output of the RTG.
Potential issues with the thermal solution.
There are two features of the anomaly, as originally reported, that are not addressed by the thermal solution: periodic variations in the anomaly, and the onset of the anomaly near the orbit of Saturn.
First, the anomaly has an apparent annual periodicity and an apparent Earth sidereal daily periodicity with amplitudes that are formally greater than the error budget. However, the same paper also states this problem is most likely not related to the anomaly: "The annual and diurnal terms are very likely different manifestations of the same modeling problem. [...] Such a modeling problem arises when there are errors in any of the parameters of the spacecraft orientation with respect to the chosen reference frame."
Second, the value of the anomaly measured over a period during and after the Pioneer 11 Saturn encounter had a relatively high uncertainty and a significantly lower value. The Turyshev, et al. 2012 paper compared the thermal analysis to the "Pioneer 10" only. The Pioneer anomaly was unnoticed until after "Pioneer 10" passed its Saturn encounter. However, the most recent analysis states: "Figure 2 is strongly suggestive that the previously reported "onset" of the Pioneer anomaly may in fact be a simple result of mis-modeling of the solar thermal contribution; this question may be resolved with further analysis of early trajectory data".
Previously proposed explanations.
Before the thermal recoil explanation became accepted, other proposed explanations fell into two classes—"mundane causes" or "new physics". Mundane causes include conventional effects that were overlooked or mis-modeled in the initial analysis, such as measurement error, thrust from gas leakage, or uneven heat radiation. The "new physics" explanations proposed revision of our understanding of gravitational physics.
If the Pioneer anomaly had been a gravitational effect due to some long-range modification of the known laws of gravity, it nevertheless did not affect the orbital motions of the major natural bodies in the same way (in particular those moving in the regions in which the Pioneer anomaly manifested itself in its presently known form). Hence a gravitational explanation would need to violate the equivalence principle, which states that all objects are affected the same way by gravity. It was therefore argued that increasingly accurate measurements and modelling of the motions of the outer planets and their satellites undermined the possibility that the Pioneer anomaly is a phenomenon of gravitational origin. However, others believed that our knowledge of the motions of the outer planets and dwarf planet Pluto was still insufficient to disprove the gravitational nature of the Pioneer anomaly. The same authors ruled out the existence of a gravitational Pioneer-type extra-acceleration in the outskirts of the Solar System by using a sample of Trans-Neptunian objects.
The magnitude of the Pioneer effect formula_0 (about 8.74×10⁻¹⁰ m/s²) is numerically quite close to the product (about 7×10⁻¹⁰ m/s²) of the speed of light formula_1 and the Hubble constant formula_2, hinting at a cosmological connection, but this is now believed to be of no particular significance.
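The coincidence is a one-line calculation (illustrative; a round value "H"0 ≈ 70 (km/s)/Mpc is assumed):

```python
c = 2.998e8                  # speed of light, m/s
H0 = 70e3 / 3.086e22         # ~70 (km/s)/Mpc expressed in 1/s

print(f"c * H0 = {c * H0:.2e} m/s^2")   # ~6.8e-10, close to a_p ~ 8.74e-10
```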
In fact the latest Jet Propulsion Laboratory review (2010) undertaken by Turyshev and Toth claims to rule out the cosmological connection by considering rather conventional sources whereas other scientists provided a disproof based on the physical implications of cosmological models themselves.
Gravitationally bound objects such as the Solar System, or even the Milky Way, are not supposed to partake of the expansion of the universe—this is known both from conventional theory and by direct measurement. This does not necessarily interfere with paths new physics can take with drag effects from planetary secular accelerations of possible cosmological origin.
Deceleration model.
It has been viewed as possible that a real deceleration is not accounted for in the current model for several reasons.
Gravity.
It is possible that deceleration is caused by gravitational forces from unidentified sources such as the Kuiper belt or dark matter. However, this acceleration does not show up in the orbits of the outer planets, so any generic gravitational answer would need to violate the equivalence principle (see modified inertia below). Likewise, the anomaly does not appear in the orbits of Neptune's moons, challenging the possibility that the Pioneer anomaly may be an unconventional gravitational phenomenon based on range from the Sun.
Drag.
The cause could be drag from the interplanetary medium, including dust, solar wind and cosmic rays. However, the measured densities are too small to cause the effect.
Gas leaks.
Gas leaks, including helium from the spacecraft's radioisotope thermoelectric generators (RTGs), have been considered as a possible cause.
Observational or recording errors.
The possibility of observational errors, which include measurement and computational errors, has been advanced as an explanation for the anomaly; on this view the apparent anomaly would be an artifact of approximation and statistical errors. However, further analysis has determined that significant errors are not likely, because seven independent analyses had shown the existence of the Pioneer anomaly as of March 2010.
The effect is so small that it could be a statistical anomaly caused by differences in the way data were collected over the lifetime of the probes. Numerous changes were made over this period, including changes in the receiving instruments, reception sites, data recording systems and recording formats.
New physics.
Because the "Pioneer anomaly" does not show up as an effect on the planets, Anderson "et al." speculated that it would be interesting if it were "new physics". Later, with the Doppler-shifted signal confirmed, the team again speculated that one explanation might lie with new physics, if not some unknown systematic explanation.
Clock acceleration.
Clock acceleration was an alternate explanation to anomalous acceleration of the spacecraft towards the Sun. This theory took notice of an expanding universe, which was thought to create an "increasing" background 'gravitational potential'. The increased gravitational potential would then accelerate cosmological time. It was proposed that this particular effect causes the observed deviation from predicted trajectories and velocities of "Pioneer 10" and "Pioneer 11".
From their data, Anderson's team deduced a steady frequency drift of about 1.5 Hz over eight years. This could be mapped on to a clock acceleration theory, which meant all clocks would be changing in relation to a constant acceleration: in other words, that there would be a non-uniformity of time. Moreover, for such a distortion related to time, Anderson's team reviewed several models in which time distortion as a phenomenon is considered. They arrived at the "clock acceleration" model after completing the review. Although the best model adds a quadratic term to the defined International Atomic Time, the team encountered problems with this theory. This then led to "non-uniform time in relation to a constant acceleration" as the most likely theory.
Definition of gravity modified.
The Modified Newtonian dynamics or MOND hypothesis proposed that the force of gravity deviates from the traditional Newtonian value to a very different force law at very low accelerations on the order of 10⁻¹⁰ m/s². Given the low accelerations placed on the spacecraft while in the outer Solar System, MOND may be in effect, modifying the normal gravitational equations. The Lunar Laser Ranging experiment combined with data of LAGEOS satellites refutes that simple gravity modification is the cause of the Pioneer anomaly. The precession of the longitudes of perihelia of the solar planets or the trajectories of long-period comets have not been reported to experience an anomalous gravitational field toward the Sun of the magnitude capable of describing the Pioneer anomaly.
Definition of inertia modified.
MOND can also be interpreted as a modification of inertia, perhaps due to an interaction with vacuum energy, and such a trajectory-dependent theory could account for the different accelerations apparently acting on the orbiting planets and the Pioneer craft on their escape trajectories. A possible terrestrial test for evidence of a different model of modified inertia has also been proposed.
Parametric time.
Another theoretical explanation was based on a possible non-equivalence of the atomic time and the astronomical time, which could give the same observational fingerprint as the anomaly.
Celestial ephemerides in an expanding universe.
Another proposed explanation of the Pioneer anomaly is that the background spacetime is described by a cosmological Friedmann–Lemaître–Robertson–Walker metric that is not Minkowski flat. In this model of the spacetime manifold, light moves uniformly with respect to the conformal cosmological time, whereas physical measurements are performed with the help of atomic clocks that count the proper time of an observer coinciding with the cosmic time. This difference yields exactly the same numerical value and signature of the Doppler shift measured in the Pioneer experiment. However, this explanation requires that thermal effects be a small percentage of the total, contradicting the many studies that estimate them to be the bulk of the effect.
Further research avenues.
It is possible, but not proven, that this anomaly is linked to the flyby anomaly, which has been observed in other spacecraft. Although the circumstances are very different (planet flyby vs. deep space cruise), the overall effect is similar—a small but unexplained velocity change is observed on top of a much larger conventional gravitational acceleration.
The Pioneer spacecraft are no longer providing new data (the last contact was on 23 January 2003) and other deep-space missions that might be studied ("Galileo" and "Cassini") were deliberately disposed of in the atmospheres of Jupiter and Saturn respectively at the ends of their missions. This leaves several remaining options for further research:
Meetings and conferences about the anomaly.
A meeting was held at the University of Bremen in 2004 to discuss the Pioneer anomaly.
The Pioneer Explorer Collaboration was formed to study the Pioneer anomaly and has hosted three meetings (2005, 2007 and 2008) at the International Space Science Institute in Bern, Switzerland, to discuss the anomaly and possible means of identifying its source.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
The original paper describing the anomaly
A lengthy survey of several years of debate by the authors of the original 1998 paper documenting the anomaly. The authors conclude, "Until more is known, we must admit that the most likely cause of this effect is an unknown systematic. (We ourselves are divided as to whether 'gas leaks' or 'heat' is this 'most likely cause.')"
Further reading.
The ISSI meeting above has an excellent reference list divided into sections such as primary references, attempts at explanation, proposals for new physics, possible new missions, popular press, and so on. A sampling of these is shown here:
Further elaboration on a dedicated mission plan (restricted access) | [
{
"math_id": 0,
"text": "a_p"
},
{
"math_id": 1,
"text": "c"
},
{
"math_id": 2,
"text": "H_0"
}
] | https://en.wikipedia.org/wiki?curid=980666 |
9807828 | Rotor (mathematics) | Object in geometric algebra
A rotor is an object in the geometric algebra (also called Clifford algebra) of a vector space that represents a rotation about the origin. The term originated with William Kingdon Clifford, in showing that the quaternion algebra is just a special case of Hermann Grassmann's "theory of extension" (Ausdehnungslehre). Hestenes defined a rotor to be any element formula_0 of a geometric algebra that can be written as the product of an even number of unit vectors and satisfies formula_1, where formula_2 is the "reverse" of formula_0—that is, the product of the same vectors, but in reverse order.
Definition.
In mathematics, a rotor in the geometric algebra of a vector space "V" is the same thing as an element of the spin group Spin("V"). We define this group below.
Let "V" be a vector space equipped with a positive definite quadratic form "q", and let Cl("V") be the geometric algebra associated to "V". The algebra Cl("V") is the quotient of the tensor algebra of "V" by the relations formula_3 for all formula_4. (The tensor product in Cl("V") is what is called the geometric product in geometric algebra and in this article is denoted by formula_5.) The Z-grading on the tensor algebra of "V" descends to a Z/2Z-grading on Cl("V"), which we denote by formula_6 Here, Cleven("V") is generated by even-degree blades and Clodd("V") is generated by odd-degree blades.
There is a unique antiautomorphism of Cl("V") which restricts to the identity on "V": this is called the transpose, and the transpose of any multivector "a" is denoted by formula_7. On a blade (i.e., a simple tensor), it simply reverses the order of the factors. The spin group Spin("V") is defined to be the subgroup of Cleven("V") consisting of multivectors "R" such that formula_8 That is, it consists of multivectors that can be written as a product of an even number of unit vectors.
Action as rotation on the vector space.
Reflections along a vector in geometric algebra may be represented as (minus) sandwiching a multivector "M" between a non-null vector "v" perpendicular to the hyperplane of reflection and that vector's inverse "v"−1:
formula_9
Rotors are obtained by composing an even number of such reflections, and are therefore of even grade. Under a rotation generated by the rotor "R", a general multivector "M" will transform double-sidedly as
formula_10
This action gives a surjective homomorphism formula_11 presenting Spin("V") as a double cover of SO("V"). (See Spin group for more details.)
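This double-sided action can be checked numerically. The following minimal sketch (Python with NumPy; the helper name reflect is illustrative, not from any rotor library) realises the rotor "R" = "ba" as the composition of two reflections of the form −"vMv"−1 and compares the result with an ordinary rotation matrix:

```python
import numpy as np

def reflect(v, n):
    # GA reflection -n v n^{-1} of a vector v in the hyperplane
    # orthogonal to the unit vector n.
    return v - 2.0 * np.dot(v, n) * n

theta = np.pi / 3                                          # rotation angle
a = np.array([1.0, 0.0, 0.0])                              # first unit vector
b = np.array([np.cos(theta / 2), np.sin(theta / 2), 0.0])  # second unit vector

v = np.array([1.0, 2.0, 3.0])

# R v R^{-1} with R = ba, realised as reflection along a, then along b.
rotated = reflect(reflect(v, a), b)

# Compare with the rotation matrix about the z-axis by theta.
c, s = np.cos(theta), np.sin(theta)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
assert np.allclose(rotated, Rz @ v)
```

The composed rotation is by twice the angle between "a" and "b", in the plane they span, which is one way of seeing the double cover of SO("V") by Spin("V").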
Restricted alternative formulation.
For a Euclidean space, it may be convenient to consider an alternative formulation, and some authors define the operation of reflection as (minus) the sandwiching of a "unit" (i.e. normalized) multivector:
formula_12
forming rotors that are automatically normalised:
formula_13
The derived rotor action is then expressed as a sandwich product with the reverse:
formula_14
For a reflection for which the associated vector squares to a negative scalar, as may be the case with a pseudo-Euclidean space, such a vector can only be normalized up to the sign of its square, and additional bookkeeping of the sign of applying the rotor becomes necessary. The formulation in terms of the sandwich product with the inverse as above suffers no such shortcoming.
Rotations of multivectors and spinors.
However, although multivectors transform double-sidedly, rotors can be combined and form a group, and so multiple rotors compose single-sidedly. The alternative formulation above is not self-normalising, and motivates the definition of a spinor in geometric algebra as an object that transforms single-sidedly; that is, spinors may be regarded as non-normalised rotors in which the reverse, rather than the inverse, is used in the sandwich product.
Homogeneous representation algebras.
In homogeneous representation algebras such as conformal geometric algebra, a rotor in the representation space corresponds to a rotation about an arbitrary point, a translation or possibly another transformation in the base space.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R"
},
{
"math_id": 1,
"text": "R\\tilde R = 1"
},
{
"math_id": 2,
"text": "\\tilde R"
},
{
"math_id": 3,
"text": "v\\cdot v=q(v)"
},
{
"math_id": 4,
"text": "v\\in V"
},
{
"math_id": 5,
"text": "\\cdot"
},
{
"math_id": 6,
"text": " \\operatorname{Cl}(V) = \\operatorname{Cl}^\\text{even}(V) \\oplus \\operatorname{Cl}^\\text{odd}(V)."
},
{
"math_id": 7,
"text": "\\tilde a"
},
{
"math_id": 8,
"text": "R\\tilde R = 1."
},
{
"math_id": 9,
"text": "-vMv^{-1}"
},
{
"math_id": 10,
"text": "RMR^{-1}."
},
{
"math_id": 11,
"text": "\\operatorname{Spin}(V)\\to \\operatorname{SO}(V)"
},
{
"math_id": 12,
"text": "-vMv, \\quad v^2=1 ,"
},
{
"math_id": 13,
"text": "R\\tilde R = \\tilde RR = 1 ."
},
{
"math_id": 14,
"text": "RM\\tilde R"
}
] | https://en.wikipedia.org/wiki?curid=9807828 |
9808147 | Niven's constant |
In number theory, Niven's constant, named after Ivan Niven, is the largest exponent appearing in the prime factorization of any natural number "n" "on average". More precisely, if we define "H"(1) = 1 and "H"("n") = the largest exponent appearing in the unique prime factorization of a natural number "n" > 1, then Niven's constant is given by
formula_0
where ζ is the Riemann zeta function.
In the same paper Niven also proved that
formula_1
where "h"(1) = 1, "h"("n") = the smallest exponent appearing in the unique prime factorization of each natural number "n" > 1, "o" is little o notation, and the constant "c" is given by
formula_2
and consequently that
formula_3
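Both results are easy to check numerically. A minimal sketch follows (Python with mpmath and SymPy; the truncation limits are arbitrary choices, since the series terms decay roughly like 2^−k):

```python
from mpmath import mp, zeta

mp.dps = 30
# Niven's constant: 1 + sum_{k >= 2} (1 - 1/zeta(k)).
C = 1 + sum(1 - 1 / zeta(k) for k in range(2, 120))
print(C)  # 1.70521114...

# Empirical check: average of H(j), the largest exponent in the
# prime factorization of j, for j <= N (with H(1) = 1).
from sympy import factorint
N = 10**4
total = 1 + sum(max(factorint(j).values()) for j in range(2, N + 1))
print(total / N)  # slowly approaches C as N grows
```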
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\lim_{n \\to \\infty} \\frac{1}{n} \\sum_{j=1}^n H(j) = 1+\\sum_{k=2}^\\infty \\left(1-\\frac{1}{\\zeta(k)}\\right) \n= 1.705211\\dots\n"
},
{
"math_id": 1,
"text": "\n\\sum_{j=1}^n h(j) = n + c\\sqrt{n} + o (\\sqrt{n})\n"
},
{
"math_id": 2,
"text": "\nc = \\frac{\\zeta(\\frac{3}{2})}{\\zeta(3)},\n"
},
{
"math_id": 3,
"text": " \\lim_{n\\to\\infty} \\frac{1}{n}\\sum_{j=1}^n h(j) = 1. "
}
] | https://en.wikipedia.org/wiki?curid=9808147 |
9808551 | Mass diffusivity | Proportionality constant in some physical laws
Diffusivity, mass diffusivity or diffusion coefficient is usually written as the proportionality constant between the molar flux due to molecular diffusion and the negative value of the gradient in the concentration of the species. More accurately, the diffusion coefficient times the local concentration is the proportionality constant between the negative value of the mole fraction gradient and the molar flux. This distinction is especially significant in gaseous systems with strong temperature gradients. Diffusivity derives its definition from Fick's law and plays a role in numerous other equations of physical chemistry.
The diffusivity is generally prescribed for a given pair of species and pairwise for a multi-species system. The higher the diffusivity (of one substance with respect to another), the faster they diffuse into each other. Typically, a compound's diffusion coefficient is ~10,000× as great in air as in water. Carbon dioxide in air has a diffusion coefficient of 16 mm2/s, and in water its diffusion coefficient is 0.0016 mm2/s.
Diffusivity has dimensions of length2 / time, or m2/s in SI units and cm2/s in CGS units.
Temperature dependence of the diffusion coefficient.
Solids.
The diffusion coefficient in solids at different temperatures is generally found to be well predicted by the Arrhenius equation:
formula_0
where "D" is the diffusion coefficient (in m²/s), "D"0 is the maximal diffusion coefficient at infinite temperature (in m²/s), "E"A is the activation energy for diffusion (in J/mol), "T" is the absolute temperature (in K), and "R" is the universal gas constant.
Diffusion in crystalline solids, termed lattice diffusion, is commonly regarded to occur by two distinct mechanisms: interstitial and substitutional (or vacancy) diffusion. The former describes diffusion as the motion of the diffusing atoms between interstitial sites of the lattice of the solid they diffuse into; the latter describes diffusion through a mechanism more analogous to that in liquids or gases. Any crystal at nonzero temperature will have a certain number of vacancy defects (i.e. empty sites on the lattice) due to the random vibrations of atoms on the lattice; an atom neighbouring a vacancy can spontaneously "jump" into the vacancy, so that the vacancy appears to move. By this process the atoms in the solid can move and diffuse into each other. Of the two mechanisms, interstitial diffusion is typically the more rapid.
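As a rough illustration, the Arrhenius dependence can be evaluated directly; in this sketch the values of "D"0 and "E"A are hypothetical, chosen only to show the strong exponential sensitivity to temperature:

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_diffusivity(D0, Ea, T):
    # D = D0 * exp(-Ea / (R T)); D0 in m^2/s, Ea in J/mol, T in K.
    return D0 * np.exp(-Ea / (R * T))

# Hypothetical solute in a solid: D0 = 1e-6 m^2/s, Ea = 80 kJ/mol.
for T in (300.0, 600.0, 900.0):
    print(T, arrhenius_diffusivity(1e-6, 80e3, T))
```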
Liquids.
An approximate dependence of the diffusion coefficient on temperature in liquids can often be found using Stokes–Einstein equation, which predicts that
formula_1
where "T"1 and "T"2 denote the two temperatures, and "μ""T"1 and "μ""T"2 are the dynamic viscosities of the solvent at temperatures "T"1 and "T"2, respectively.
Gases.
The dependence of the diffusion coefficient on temperature for gases can be expressed using Chapman–Enskog theory (predictions accurate on average to about 8%):
formula_2
formula_3
where "D" is the diffusion coefficient (cm²/s); "A" is a constant whose value is approximately formula_4 (in the expression for "A" above, formula_5 is the Boltzmann constant and formula_6 is the Avogadro constant); 1 and 2 index the two kinds of molecules present in the gaseous mixture; "T" is the absolute temperature (K); "M" is the molar mass (g/mol); "p" is the pressure (atm); formula_7 is the average collision diameter (the values are tabulated, in Å); and Ω is a temperature-dependent collision integral (dimensionless).
The relation
formula_8
is obtained when inserting the ideal gas law into the expression obtained directly from Chapman-Enskog theory, which may be written as
formula_9
where formula_10 is the molar density (mol / mformula_11) of the gas, and
formula_12,
with formula_13 the universal gas constant. At moderate densities (i.e. densities at which the gas has a non-negligible co-volume, but is still sufficiently dilute to be considered as gas-like rather than liquid-like) this simple relation no longer holds, and one must resort to Revised Enskog Theory. Revised Enskog Theory predicts a diffusion coefficient that decreases somewhat more rapidly with density, and which to a first approximation may be written as
formula_14
where formula_15 is the radial distribution function evaluated at the contact diameter of the particles. For molecules behaving like hard, elastic spheres, this value can be computed from the Carnahan-Starling Equation, while for more realistic intermolecular potentials such as the Mie potential or Lennard-Jones potential, its computation is more complex, and may involve invoking a thermodynamic perturbation theory, such as SAFT.
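A minimal sketch of the dilute-gas (Chapman–Enskog) estimate follows; the N2–O2 inputs and the choice Ω ≈ 1 are rough illustrative assumptions, not tabulated values:

```python
import numpy as np

def chapman_enskog_D(T, p, M1, M2, sigma12, omega):
    # Binary diffusion coefficient in cm^2/s:
    # T in K, p in atm, molar masses in g/mol, sigma12 in angstroms,
    # omega the dimensionless, temperature-dependent collision integral.
    A = 1.859e-3  # the constant quoted in the text above
    return A * T**1.5 * np.sqrt(1.0 / M1 + 1.0 / M2) / (p * sigma12**2 * omega)

# Rough N2-O2 estimate near ambient conditions (omega ~ 1 assumed):
print(chapman_enskog_D(T=298.0, p=1.0, M1=28.0, M2=32.0,
                       sigma12=3.6, omega=1.0))
# ~0.19 cm^2/s, the right order of magnitude for gases at 1 atm
```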
Pressure dependence of the diffusion coefficient.
For self-diffusion in gases at two different pressures (but the same temperature), the following empirical equation has been suggested:
formula_16
where "D""P"1 and "D""P"2 are the diffusion coefficients at pressures "P"1 and "P"2, respectively, and "ρ""P"1 and "ρ""P"2 are the corresponding densities of the gas.
Population dynamics: dependence of the diffusion coefficient on fitness.
In population dynamics, kinesis is the change of the diffusion coefficient in response to the change of conditions. In models of purposeful kinesis, diffusion coefficient depends on fitness (or reproduction coefficient) "r":
formula_17
where formula_18 is constant and "r" depends on population densities and abiotic characteristics of the living conditions. This dependence is a formalisation of a simple rule: animals stay longer in good conditions and leave bad conditions more quickly (the "Let well enough alone" model).
Effective diffusivity in porous media.
The effective diffusion coefficient describes diffusion through the pore space of porous media. It is macroscopic in nature, because it is not individual pores but the entire pore space that needs to be considered. The effective diffusion coefficient for transport through the pores, "D"e, is estimated as follows:
formula_19
where "D" is the diffusion coefficient in the gas or liquid filling the pores, "ε"t is the porosity available for the transport (dimensionless), "δ" is the constrictivity (dimensionless), and "τ" is the tortuosity (dimensionless).
The transport-available porosity equals the total porosity less the pores which, due to their size, are not accessible to the diffusing particles, and less dead-end and blind pores (i.e., pores without being connected to the rest of the pore system). The constrictivity describes the slowing down of diffusion by increasing the viscosity in narrow pores as a result of greater proximity to the average pore wall. It is a function of pore diameter and the size of the diffusing particles.
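In code the estimate is a one-liner; in this sketch the CO2-in-water diffusivity is the value quoted earlier in the article, while the pore-geometry numbers are hypothetical:

```python
def effective_diffusivity(D, eps_t, delta, tau):
    # D_e = D * eps_t * delta / tau
    return D * eps_t * delta / tau

# CO2 in water-filled pores: D ~ 1.6e-9 m^2/s, with assumed
# porosity 0.3, constrictivity 0.8 and tortuosity 3.
print(effective_diffusivity(1.6e-9, 0.3, 0.8, 3.0))  # ~1.3e-10 m^2/s
```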
Example values.
Gases at 1 atm., solutes in liquid at infinite dilution. Legend: (s) – solid; (l) – liquid; (g) – gas; (dis) – dissolved.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D = D_0 \\exp\\left(-\\frac{E_\\text{A}}{RT}\\right)"
},
{
"math_id": 1,
"text": "\\frac {D_{T_1}} {D_{T_2}} = \\frac {T_1} {T_2} \\frac {\\mu_{T_2}} {\\mu_{T_1}},"
},
{
"math_id": 2,
"text": "D = \\frac{A T^\\frac{3}{2}}{p\\sigma_{12}^{2}\\Omega}\\sqrt{\\frac{1}{M_1} + \\frac{1}{M_2}},"
},
{
"math_id": 3,
"text": "A = \\frac{3}{8} k_b^\\frac{3}{2} \\sqrt{\\frac{N_A}{2\\pi}}"
},
{
"math_id": 4,
"text": "1.859 \\times 10^{-3} \\mathrm{\\frac{atm \\cdot \\AA^2 \\cdot {cm}^2}{K^{3/2} \\cdot s} \\sqrt{\\frac{g}{mol}}}"
},
{
"math_id": 5,
"text": "k_b"
},
{
"math_id": 6,
"text": "N_A"
},
{
"math_id": 7,
"text": "\\sigma_{12} = \\frac{1}{2}(\\sigma_1 + \\sigma_2)"
},
{
"math_id": 8,
"text": "D \\sim \\frac{T^{3/2}}{p \\Omega(T)}"
},
{
"math_id": 9,
"text": "D = D_0 \\frac{T^{1/2}}{n \\Omega(T)}"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "^3"
},
{
"math_id": 12,
"text": "D_0 = \\frac{3 }{8 \\sigma_{12} ^2 } \\sqrt{\\left(\\frac{R}{2 \\pi}\\right)\\left(\\frac{1}{M_1} + \\frac{1}{M_2}\\right)} "
},
{
"math_id": 13,
"text": "R = k_B N_A"
},
{
"math_id": 14,
"text": "D = D_0 \\frac{T^{1/2}}{n g(\\sigma) \\Omega(T)}"
},
{
"math_id": 15,
"text": "g(\\sigma)"
},
{
"math_id": 16,
"text": "\\frac {D_{P1}} {D_{P2}} = \\frac {\\rho_{P2}} {\\rho_{P1}}, "
},
{
"math_id": 17,
"text": "D = D_0 e ^{-\\alpha r},"
},
{
"math_id": 18,
"text": "D_0"
},
{
"math_id": 19,
"text": " D_\\text{e} = \\frac{D\\varepsilon_t \\delta}{\\tau},"
}
] | https://en.wikipedia.org/wiki?curid=9808551 |
980911 | Rating system of the Royal Navy | Historic category for ships
The rating system of the Royal Navy and its predecessors was used by the Royal Navy between the beginning of the 17th century and the middle of the 19th century to categorise sailing warships, initially classing them according to their assigned complement of men, and later according to the number of their carriage-mounted guns. The rating system of the Royal Navy formally came to an end in the late 19th century by declaration of the Admiralty. The main cause behind this declaration focused on new types of gun, the introduction of steam propulsion and the use of iron and steel armour which made rating ships by the number of guns obsolete.
Origins and description.
The first movement towards a rating system may be seen in the 15th century and the first half of the 16th century, when the largest carracks in the Navy, such as the "Mary Rose", the "Peter Pomegranate" and the "Henri Grâce à Dieu", were denoted "great ships". This was only on the basis of their roughly-estimated size and not on their weight, crew or number of guns. When these carracks were superseded by the new-style galleons later in the 16th century, the term "great ship" was used to formally delineate the Navy's largest ships from all the rest.
The Stuart era.
The earliest categorisation of Royal Navy ships dates to the reign of King Henry VIII. Henry's Navy consisted of 58 ships, and in 1546 the Anthony Roll divided them into four groups: 'ships, galliasses, pinnaces, and row barges.'
The formal system of dividing up the Navy's combatant warships into a number of groups or "rates", however, only originated in the very early part of the Stuart era, with the first lists of such categorisation appearing around 1604. At this time the combatant ships of the "Navy Royal" were divided up according to the number of men required to man them at sea (i.e. the size of the crew) into four groups:
A 1612 list referred to four groups: royal, middling, small and pinnaces; but defined them by tonnage instead of by guns, starting from 800 to 1200 tons for the ships royal, down to below 250 tons for the pinnaces.
By the early years of King Charles I's reign, these four groups had been renamed to a numerical sequence. The royal ships were now graded as "first rank", the great ships as "second rank", the middling ships as "third rank", and the small ships as "fourth rank". Soon afterwards, the structure was again modified, with the term "rank" now being replaced by "rate", and the former small ships now being sub-divided into fourth, fifth and sixth rates.
The earliest rating was based not on the number of guns, but on the established complement (number of men). In 1626, a table drawn up by Charles I used the term "rates" for the first time in a classification scheme connected with the Navy. The table specified the amount of monthly wages a seaman or officer would earn, in an ordered scheme of six rates, from "first-rate" to "sixth-rate", with each rate divided into two classes, with differing numbers of men assigned to each class. No specific connection with the size of the ship or number of armaments aboard was given in this 1626 table, and as far as is known, this was related exclusively to seaman pay grades.
This classification scheme was substantially altered in late 1653 as the complements of individual ships were raised. From about 1660 the classification moved from one based on the number of men to one based on the number of carriage guns a ship carried.
Samuel Pepys, then Secretary to the Admiralty, revised the structure in 1677 and laid it down as a "solemn, universal and unalterable" classification. The rating of a ship was of administrative and military use. The number and weight of guns determined the size of crew needed, and hence the amount of pay and rations needed. It also indicated whether a ship was powerful enough to stand in the line of battle. Pepys's original classification was updated by further definitions in 1714, 1721, 1760, 1782, 1801 and 1817, the last being the most severe, as it provided for including in the count of guns the carronades that had previously been excluded. On the whole the trend was for each rate to have a greater number of guns. For instance, Pepys allowed a first rate 90–100 guns, but on the 1801 scheme a first rate had 100–120. A sixth rate's range went from 4–18 to 20–28 (after 1714 any ship with fewer than 20 guns was unrated).
First, second and third rates (ships of the line).
A first-, second- or third-rate ship was regarded as a "ship-of-the-line". The first and second rates were three-deckers; that is, they had three continuous decks of guns (on the lower deck, middle deck and upper deck), usually as well as smaller weapons on the quarterdeck, forecastle and poop.
The largest third rates, those of 80 guns, were likewise three-deckers from the 1690s until the early 1750s, but both before this period and subsequent to it, 80-gun ships were built as two-deckers. All the other third rates, with 74 guns or less, were likewise two-deckers, with just two continuous decks of guns (on the lower deck and upper deck), as well as smaller weapons on the quarterdeck, forecastle and (if they had one) poop. A series of major changes to the rating system took effect from the start of January 1817, when the carronades carried by each ship were included in the count of guns (previously these had usually been omitted); the first rate from that date included all of the three-deckers (the adding in of their carronades had meant that all three-deckers now had over 100 guns), the new second rate included all two-deckers of 80 guns or more, with the third rate reduced to two-deckers of fewer than 80 guns.
A special case were the Royal Yachts, which, for reasons of protocol, had to be commanded by a senior captain. These vessels, despite their small size and minimal armament, were often classed as second or third rate ships, appropriate for the seniority of the captain.
Fourth, fifth and sixth rates.
The smaller fourth rates, of about 50 or 60 guns on two decks, were ships-of-the-line until 1756, when it was felt that such 50-gun ships were now too small for pitched battles. The larger fourth rates of 60 guns continued to be counted as ships-of-the-line, but few new ships of this rate were added, the 60-gun fourth rate being superseded over the next few decades by the 64-gun third rate. The Navy did retain some fourth rates for convoy escort, or as flagships on far-flung stations; it also converted some East Indiamen to that role.
The smaller two deckers originally blurred the distinction between a fourth rate and a fifth rate. At the low end of the fourth rate one might find the two-decker 50-gun ships from about 1756. The high end of the fifth rate would include two-deckers of 40- or 44-guns (from 1690) or even the demi-batterie 32-gun and 36-gun ships of the 1690–1730 period. The fifth rates at the start of the 18th century were generally "demi-batterie" ships, carrying a few heavy guns on their lower deck (which often used the rest of the lower deck for row ports) and a full battery of lesser guns on the upper deck. However, these were gradually phased out, as the low freeboard (i.e., the height of the lower deck gunport sills above the waterline) meant that in rough weather it was often impossible to open the lower deck gunports.
Fifth and sixth rates were never included among ships-of-the-line. The middle of the 18th century saw the introduction of a new fifth-rate type—the classic frigate, with no ports on the lower deck, and the main battery disposed solely on the upper deck, where it could be fought in all weathers.
Sixth-rate ships were generally useful as convoy escorts, for blockade duties and the carrying of dispatches; their small size made them less suited for the general cruising tasks the fifth-rate frigates did so well. Essentially there were two groups of sixth rates. The larger category comprised the sixth-rate frigates of 28 guns, carrying a main battery of twenty-four 9-pounder guns, as well as four smaller guns on their superstructures. The second comprised the "post ships" of between 20 and 24 guns. These were too small to be formally counted as frigates (although colloquially often grouped with them), but still required a post-captain (i.e. an officer holding the substantive rank of captain) as their commander.
Unrated vessels.
The rating system did not handle vessels smaller than the sixth rate. The remainder were simply "unrated". The larger of the unrated vessels were generally all called sloops, but that nomenclature is quite confusing for unrated vessels, especially when dealing with the finer points of "ship-sloop", "brig-sloop", "sloop-of-war" (which really just meant the same in naval parlance as "sloop") or even "corvette" (the last a French term that the British Navy did not use until the 1840s). Technically the category of "sloop-of-war" included any unrated combatant vessel—in theory, the term even extended to bomb vessels and fire ships. During the Napoleonic Wars, the Royal Navy increased the number of sloops in service by some 400% as it found that it needed vast numbers of these small vessels for escorting convoys (as in any war, the introduction of convoys created a huge need for escort vessels), combating privateers, and themselves taking prizes.
The number of guns and the rate.
The rated number of guns often differed from the number a vessel actually carried. The guns that determined a ship's rating were the carriage-mounted cannon, long-barreled, muzzle-loading guns that moved on 'trucks'—wooden wheels. The count did not include smaller (and basically anti-personnel) weapons such as swivel-mounted guns ("swivels"), which fired half-pound projectiles, or small arms. For instance, was rated for 18 guns but during construction her rating was reduced to 16 guns (6-pounders), and she also carried 14 half-pound swivels.
Vessels might also carry other guns that did not contribute to the rating. Examples of such weapons would include mortars, howitzers or boat guns, the boat guns being small guns intended for mounting on the bow of a vessel's boats to provide fire support during landings, cutting out expeditions, and the like. From 1778, however, the most important exception was the carronade.
Introduced in the late 1770s, the carronade was a short-barreled and relatively short-range gun, half the weight of equivalent long guns, and was generally mounted on a slide rather than on trucks. The new carronades were generally housed on a vessel's upperworks, the quarterdeck and forecastle, some as additions to the existing ordnance and some as replacements. When carronades replaced, or were carried in lieu of, carriage-mounted cannon, they generally counted in arriving at the rating, but not all were, and so they may or may not have been included in the count of guns; rated vessels might carry up to twelve 18-, 24- or 32-pounder carronades.
For instance, was rated as a third rate of 74 guns. She carried twenty-eight 32-pounder guns on her gundeck, twenty-eight 18-pounder guns on her upperdeck, four 12-pounder guns and ten 32-pounder carronades on her quarterdeck, two 12-pounder guns and two 32-pounder carronades on her forecastle, and six 18-pounder carronades on her poop deck. In all, this 74-gun vessel carried 80 cannon: 62 guns and 18 carronades.
When carronades formed a ship's principal armament, they were included in the count of guns. For instance, was a 20-gun corvette of the French Navy that was captured and recommissioned in the Royal Navy as a sloop and post ship. She carried two 9-pounder cannon and eighteen 32-pounder carronades.
By the Napoleonic Wars there was no exact correlation between formal gun rating and the actual number of cannons any individual vessel might carry. One therefore must distinguish between the "established" armament of a vessel (which rarely altered) and the "actual" guns carried, which might change quite frequently for a variety of reasons: guns might be lost overboard during a storm, be jettisoned to speed the ship during a chase, or explode in service and become useless; they might also be stowed in the hold to allow the carriage of troops, or, for a small vessel such as , to lower the centre of gravity and thus improve stability in bad weather. Some guns would also be removed from ships during peacetime service, to reduce the stress on the ships' structure, creating a distinction between a ship's wartime complement of guns (the figure normally quoted) and her lower peacetime complement.
Royal Navy rating system in force during the Napoleonic Wars.
Notes.
<templatestyles src="Citation/styles.css"/>^* The smaller fourth-rates, primarily the 50-gun ships, were, from 1756 on, no longer classified as ships of the line. Since not big enough to stand in the line of battle, were often called frigates, though not classed as frigates by the Royal Navy. They were generally classified, like all smaller warships used primarily in the role of escort and patrol, as "cruisers", a term that covered everything from the smaller two-deckers down to the small gun-brigs and cutters.
<templatestyles src="Citation/styles.css"/>^* The larger fifth-rates were generally two-decked ships of 40 or 44 guns, and thus "not" "frigates", although the 40-gun frigates built during the Napoleonic War also fell into this category.
<templatestyles src="Citation/styles.css"/>^* The smaller sixth-rates were often popularly called frigates, though "not" classed as "frigates" by the Admiralty officially. Only the larger sixth-rates (those mounting 28 carriage guns or more) were technically frigates.
<templatestyles src="Citation/styles.css"/>^* The ton in this instance is the burthen tonnage (bm). From c.1650 the burthen of a vessel was calculated using the formula formula_0, where formula_1 was the length, in feet, from the stem to the sternpost, and formula_2 the maximum breadth of the vessel. It was a rough measurement of cargo-carrying capacity by volume, not displacement. Therefore, one should not change a measurement in "tons burthen" into a displacement in "tons" or "tonnes".
<templatestyles src="Citation/styles.css"/>^* Vessels of less than ten guns were commanded by lieutenants, while those with upwards of ten guns were commanded by commanders.
1817 changes.
In February 1817 the rating system changed. The recommendation from the Board of Admiralty to the Prince Regent was dated 25 November 1816, but the Order in Council establishing the new ratings was issued in February 1817. From February 1817 all carronades were included in the established number of guns. Until that date, carronades only "counted" if they were in place of long guns; when the carronades replaced "long" guns (e.g. on the upper deck of a sloop or post ship, thus providing its main battery), such carronades were counted.
1856 changes.
There was a further major change in the rating system in 1856. From that date, the first rate comprised all ships carrying 110 guns and upwards, or the complement of which consisted of 1,000 men or more. The second rate included one of HM's royal yachts, and otherwise comprised all ships carrying under 110 guns but more than 80 guns, or the complements of which were under 1,000 but not less than 800 men.
The third rate included all the rest of HM's royal yachts and "all such vessels as may bear the flag of pendant of any Admiral Superintendent or Captain Superintendent of one of HM's Dockyards", and otherwise comprised all ships carrying at most 80 guns but not less than 60 guns, or the complements of which were under 800 but not less than 600 men. The fourth rate comprised all frigate-built ships of which the complement was not more than 600 and not less than 410 men.
The fifth rate comprised all ships of which the complement was not more than 400 and not less than 300 men. The sixth rate consisted of all other ships bearing a captain. Of unrated vessels, the category of sloops comprised all vessels commanded by commanders. Next followed all other ships commanded by lieutenants, and having complements of not less than 60 men. Finally were "smaller vessels, not classed as above, with such smaller complements as the Lords Commissioners of the Admiralty may from time to time direct".
Other classifications.
Rating was not the only system of classification used. Through the early modern period, the term "ship" referred to a vessel that carried square sails on three masts. Sailing vessels with only two masts or a single mast were technically not "ships", and were not described as such at the time. Vessels with fewer than three masts were unrated sloops, generally two-masted vessels rigged as snows or ketches (in the first half of the 18th century), or brigs in succeeding eras. Some sloops were three-masted or "ship-rigged", and these were known as "ship sloops".
Vessels were sometimes classified according to the substantive rank of her commanding officer. For instance, when the commanding officer of a gun-brig or even a cutter was a lieutenant with the status of master-and-commander, the custom was to recategorise the vessel as a sloop. For instance, when Pitt Burnaby Greene, the commanding officer of in 1811, received his promotion to post-captain, the Navy reclassed the sloop as a post ship.
Practices in other navies.
Although the rating system described was only used by the Royal Navy, other major navies used similar means of grading their warships. For example, the French Navy used a system of five rates ("rangs") which had a similar purpose. British authors might still use "first rate" when referring to the largest ships of other nations or "third rate" to speak of a French seventy-four. By the end of the 18th century, the rating system had mostly fallen out of common use, although technically it remained in existence for nearly another century, ships of the line usually being characterized directly by their nominal number of guns, the numbers even being used as the name of the type, as in "a squadron of three seventy-fours".
United States (1905).
As of 1905, ships of the United States Navy were by law divided into classes called rates. Vessels of the first rate had a displacement tonnage in excess of 8000 tons; second rate, from 4000 to 8000 tons; third rate, from 1000 to 4000 tons; and fourth rate, of less than 1000 tons. Converted merchant vessels that were armed and equipped as cruisers were of the second rate if over 6000 tons, and of the third rate if over 1000 and less than 6000 tons. Auxiliary vessels such as colliers, supply vessels, repair ships, etc., if over 4000 tons, were of the third rate.
Auxiliary vessels of less than 4000 tons—except tugs, sailing ships, and receiving ships which were not rated—were of the fourth rate. Torpedo-boat destroyers, torpedo boats, and similar vessels were not rated. Captains commanded ships of the first rate. Captains or commanders commanded ships of the second rate. Commanders or lieutenant-commanders commanded ships of the third rate. Lieutenant-commanders or lieutenants commanded ships of the fourth rate. Lieutenant-commanders, lieutenants, ensigns, or warrant officers might command unrated vessels, depending on the size of the vessel.
Other uses.
The term "first-rate" has passed into general usage, as an adjective used to mean something of the best or highest quality available. "Second-rate" and "third-rate" are also used as adjectives to mean that something is of inferior quality.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Excerpts.
<templatestyles src="Reflist/styles.css" />
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac {k \\times b \\times \\frac 1 2 b} {94}"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "b"
}
] | https://en.wikipedia.org/wiki?curid=980911 |
9809292 | Ribbon knot | Type of mathematical knot
In the mathematical area of knot theory, a ribbon knot is a knot that bounds a self-intersecting disk with only "ribbon singularities". Intuitively, this kind of singularity can be formed by cutting a slit in the disk and passing another part of the disk through the slit. More precisely, this type of singularity is a closed arc consisting of intersection points of the disk with itself, such that the preimage of this arc consists of two arcs in the disc, one completely in the interior of the disk and the other having its two endpoints on the disk boundary.
Morse-theoretic formulation.
A slice disc "M" is a smoothly embedded formula_0 in formula_1 with formula_2. Consider the function formula_3 given by formula_4. By a small isotopy of "M" one can ensure that "f" restricts to a Morse function on "M". One says formula_5 is a ribbon knot if formula_6 has no interior local maxima.
Slice-ribbon conjecture.
Every ribbon knot is known to be a slice knot. A famous open problem, posed by Ralph Fox and known as the slice-ribbon conjecture, asks if the converse is true: is every (smoothly) slice knot ribbon?
The conjecture is known to be true for knots of bridge number two, and for three-stranded pretzel knots with odd parameters. However, it has also been suggested that the conjecture might not be true, and a family of knots has been proposed as possible counterexamples. Support for the conjecture was strengthened when a famous potential counterexample, the (2, 1) cable of the figure-eight knot, was shown to be not slice and thereby not a counterexample.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D^2"
},
{
"math_id": 1,
"text": "D^4"
},
{
"math_id": 2,
"text": "M \\cap \\partial D^4 = \\partial M \\subset S^3"
},
{
"math_id": 3,
"text": "f\\colon D^4 \\to \\mathbb R"
},
{
"math_id": 4,
"text": "f(x,y,z,w) = x^2+y^2+z^2+w^2"
},
{
"math_id": 5,
"text": " \\partial M \\subset \\partial D^4 = S^3"
},
{
"math_id": 6,
"text": "f_{|M}\\colon M \\to \\mathbb R"
}
] | https://en.wikipedia.org/wiki?curid=9809292 |
9811503 | Pivot element | Non-zero element of a matrix selected by an algorithm
The pivot or pivot element is the element of a matrix, or an array, which is selected first by an algorithm (e.g. Gaussian elimination, simplex algorithm, etc.), to do certain calculations. In the case of matrix algorithms, a pivot entry is usually required to be at least distinct from zero, and often distant from it; in this case finding this element is called pivoting. Pivoting may be followed by an interchange of rows or columns to bring the pivot to a fixed position and allow the algorithm to proceed successfully, and possibly to reduce round-off error. It is often used for verifying row echelon form.
Pivoting might be thought of as swapping or sorting rows or columns in a matrix, and thus it can be represented as multiplication by permutation matrices. However, algorithms rarely move the matrix elements because this would cost too much time; instead, they just keep track of the permutations.
Overall, pivoting adds more operations to the computational cost of an algorithm. These additional operations are sometimes necessary for the algorithm to work at all. Other times these additional operations are worthwhile because they add numerical stability to the final result.
Examples of systems that require pivoting.
In the case of Gaussian elimination, the algorithm requires that pivot elements not be zero.
Interchanging rows or columns in the case of a zero pivot element is necessary. The system below requires the interchange of rows 2 and 3 to perform elimination.
formula_0
The system that results from pivoting is as follows and will allow the elimination algorithm and backwards substitution to output the solution to the system.
formula_1
Furthermore, in Gaussian elimination it is generally desirable to choose a pivot element with large absolute value. This improves the numerical stability. The following system is dramatically affected by round-off error when Gaussian elimination and backwards substitution are performed.
formula_2
This system has the exact solution of "x"1 = 10.00 and "x"2 = 1.000, but when the elimination algorithm and backwards substitution are performed using four-digit arithmetic, the small value of "a"11 causes small round-off errors to be propagated. The algorithm without pivoting yields the approximation of "x"1 ≈ 9873.3 and "x"2 ≈ 4. In this case it is desirable that we interchange the two rows so that "a"21 is in the pivot position
formula_3
Considering this system, the elimination algorithm and backwards substitution using four-digit arithmetic yield the correct values "x"1 = 10.00 and "x"2 = 1.000.
Partial, rook, and complete pivoting.
In partial pivoting, the algorithm selects the entry with largest absolute value from the column of the matrix that is currently being considered as the pivot element. More specifically, when reducing a matrix to row echelon form, partial pivoting swaps rows before the column's row reduction to make the pivot element have the largest absolute value compared to the elements below in the same column. Partial pivoting is generally sufficient to adequately reduce round-off error.
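A minimal sketch of Gaussian elimination with partial pivoting follows (Python with NumPy; the function name is illustrative, and in practice one would call a library LU solver):

```python
import numpy as np

def gauss_partial_pivot(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting.
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        # Choose the row with the largest |entry| in column k as pivot row.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# The ill-conditioned example from the text (exact solution x1 = 10, x2 = 1):
print(gauss_partial_pivot([[0.003, 59.14], [5.291, -6.130]], [59.17, 46.78]))
```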
However, for certain systems and algorithms, complete pivoting (or maximal pivoting) may be required for acceptable accuracy. Complete pivoting interchanges both rows and columns in order to use the largest (by absolute value) element in the matrix as the pivot. Complete pivoting is usually not necessary to ensure numerical stability and, due to the additional cost of searching for the maximal element, the improvement in numerical stability that it provides is typically outweighed by its reduced efficiency for all but the smallest matrices. Hence, it is rarely used.
Another strategy, known as rook pivoting also interchanges both rows and columns but only guarantees that the chosen pivot is simultaneously the largest possible entry in its row and the largest possible entry in its column, as opposed to the largest possible in the entire remaining submatrix. When implemented on serial computers this strategy has expected cost of only about three times that of partial pivoting and is therefore cheaper than complete pivoting. Rook pivoting has been proved to be more stable than partial pivoting both theoretically and in practice.
Scaled pivoting.
A variation of the partial pivoting strategy is scaled pivoting. In this approach, the algorithm selects as the pivot element the entry that is largest relative to the entries in its row. This strategy is desirable when entries' large differences in magnitude lead to the propagation of round-off error. Scaled pivoting should be used in a system like the one below where a row's entries vary greatly in magnitude. In the example below, it would be desirable to interchange the two rows because the current pivot element 30 is larger than 5.291 but it is relatively small compared with the other entries in its row. Without row interchange in this case, rounding errors will be propagated as in the previous example.
formula_4
Pivot position.
A pivot position in a matrix, A, is a position in the matrix that corresponds to a row-leading 1 in the reduced row echelon form of A. Since the reduced row echelon form of A is unique, the pivot positions are uniquely determined and do not depend on whether or not row interchanges are performed in the reduction process. Also, in row echelon form the pivot of a row must appear to the right of the pivot in the row above.
References.
"This article incorporates material from Pivoting on PlanetMath, which is licensed under the ." | [
{
"math_id": 0,
"text": "\n\\left[ \\begin{array}{ccc|c}\n1 & -1 & 2 & 8 \\\\\n0 & 0 & -1 & -11 \\\\\n0 & 2 & -1 & -3\n\\end{array} \\right]\n"
},
{
"math_id": 1,
"text": "\n\\left[ \\begin{array}{ccc|c}\n1 & -1 & 2 & 8 \\\\\n0 & 2 & -1 & -3 \\\\\n0 & 0 & -1 & -11\n\\end{array} \\right]\n"
},
{
"math_id": 2,
"text": "\n\\left[ \\begin{array}{cc|c}\n0.00300 & 59.14 & 59.17 \\\\\n5.291 & -6.130 & 46.78 \\\\\n\\end{array} \\right]\n"
},
{
"math_id": 3,
"text": "\n\\left[ \\begin{array}{cc|c}\n5.291 & -6.130 & 46.78 \\\\\n0.00300 & 59.14 & 59.17 \\\\\n\\end{array} \\right].\n"
},
{
"math_id": 4,
"text": "\n\\left[ \\begin{array}{cc|c}\n30 & 591400 & 591700 \\\\\n5.291 & -6.130 & 46.78 \\\\\n\\end{array} \\right]\n"
}
] | https://en.wikipedia.org/wiki?curid=9811503 |
981153 | Exergonic process | An exergonic process is one in which there is a positive flow of energy from the system to the surroundings. This is in contrast with an endergonic process. Constant-pressure, constant-temperature reactions are exergonic if and only if the Gibbs free energy change is negative (∆"G" < 0). "Exergonic" (from the prefix exo-, derived from the Greek word ἔξω "exō", "outside", and the suffix -ergonic, derived from the Greek word ἔργον "ergon", "work") means "releasing energy in the form of work". In thermodynamics, work is defined as the energy moving from the system (the internal region) to the surroundings (the external region) during a given process.
All physical and chemical systems in the universe follow the second law of thermodynamics and proceed in a downhill, i.e., "exergonic", direction. Thus, left to itself, any physical or chemical system will proceed, according to the second law of thermodynamics, in a direction that tends to lower the free energy of the system, and thus to expend energy in the form of work. These reactions occur spontaneously.
A chemical reaction is likewise exergonic when it is spontaneous; in this type of reaction the Gibbs free energy decreases. The entropy is included in any change of the Gibbs free energy; this differs from an exothermic or endothermic reaction, where the entropy is not included. The Gibbs free energy is calculated with the Gibbs–Helmholtz equation:
formula_0
where:
"T" = temperature in kelvins (K)
Δ"G" = change in the Gibbs free energy
Δ"S" = change in entropy (at 298 K) as Δ"S" = Σ{"S"(Product)} − Σ{"S"(Reagent)}
Δ"H" = change in enthalpy (at 298 K) as Δ"H" = Σ{"H"(Product)} − Σ{"H"(Reagent)}
A chemical reaction progresses spontaneously only when the Gibbs free energy decreases; in that case Δ"G" is negative. In exergonic reactions Δ"G" is negative and in endergonic reactions Δ"G" is positive:
formula_1 (exergonic reaction)
formula_2 (endergonic reaction)
where:
formula_3 equals the change in the Gibbs free energy after completion of a chemical reaction.
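A minimal sketch of this sign test follows (Python; the Δ"H" and Δ"S" inputs are illustrative values for a hypothetical reaction):

```python
def gibbs_free_energy_change(dH, dS, T=298.15):
    # Delta G = Delta H - T * Delta S
    # dH in kJ/mol, dS in kJ/(mol*K), T in K.
    return dH - T * dS

dG = gibbs_free_energy_change(dH=-92.0, dS=-0.199)
print(dG, "exergonic" if dG < 0 else "endergonic")  # ~ -32.7 kJ/mol, exergonic
```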
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta G = \\Delta H- T \\cdot \\Delta S"
},
{
"math_id": 1,
"text": " \\Delta_\\mathrm{R} G < 0"
},
{
"math_id": 2,
"text": " \\Delta_\\mathrm{R} G > 0"
},
{
"math_id": 3,
"text": " \\Delta_\\mathrm{R} G "
}
] | https://en.wikipedia.org/wiki?curid=981153 |
9812205 | Transition dipole moment | Type of electric dipole moment
The transition dipole moment or transition moment, usually denoted formula_0 for a transition between an initial state, formula_1, and a final state, formula_2, is the electric dipole moment associated with the transition between the two states. In general the transition dipole moment is a complex vector quantity that includes the phase factors associated with the two states. Its direction gives the polarization of the transition, which determines how the system will interact with an electromagnetic wave of a given polarization, while the square of the magnitude gives the strength of the interaction due to the distribution of charge within the system. The SI unit of the transition dipole moment is the Coulomb-meter (Cm); a more conveniently sized unit is the Debye (D).
Definition.
A single charged particle.
For a transition where a single charged particle changes state from formula_3 to formula_4, the transition dipole moment formula_5 is
formula_6
where "q" is the particle's charge, r is its position, and the integral is over all space (formula_7 is shorthand for formula_8). The transition dipole moment is a vector; for example its "x"-component is
formula_9
In other words, the "transition dipole moment" can be viewed as an off-diagonal matrix element of the position operator, multiplied by the particle's charge.
Multiple charged particles.
When the transition involves more than one charged particle, the transition dipole moment is defined in an analogous way to an electric dipole moment: The sum of the positions, weighted by charge. If the "i"th particle has charge "q"i and position operator ri, then the transition dipole moment is:
formula_10
In terms of momentum.
For a single, nonrelativistic particle of mass "m", in zero magnetic field, the transition dipole moment between two energy eigenstates "ψa" and "ψb" can alternatively be written in terms of the momentum operator, using the relationship
formula_11
This relationship can be proven starting from the commutation relation between position "x" and the Hamiltonian "H":
formula_12
Then
formula_13
However, assuming that "ψa" and "ψb" are energy eigenstates with energy "Ea" and "Eb", we can also write
formula_14
Similar relations hold for "y" and "z", which together give the relationship above.
Analogy with a classical dipole.
A basic, phenomenological understanding of the transition dipole moment can be obtained by analogy with a classical dipole. While the comparison can be very useful, care must be taken to ensure that one does not fall into the trap of assuming they are the same.
In the case of two classical point charges, formula_15 and formula_16, with a displacement vector, formula_17, pointing from the negative charge to the positive charge, the electric dipole moment is given by
formula_18
In the presence of an electric field, such as that due to an electromagnetic wave, the two charges will experience a force in opposite directions, leading to a net torque on the dipole. The magnitude of the torque is proportional to both the magnitude of the charges and the separation between them, and varies with the relative angles of the field and the dipole:
formula_19
Similarly, the coupling between an electromagnetic wave and an atomic transition with transition dipole moment formula_0 depends on the charge distribution within the atom, the strength of the electric field, and the relative polarizations of the field and the transition. In addition, the transition dipole moment depends on the geometries and relative phases of the initial and final states.
Origin.
When an atom or molecule interacts with an electromagnetic wave of frequency formula_20, it can undergo a transition from an initial to a final state of energy difference formula_21 through the coupling of the electromagnetic field to the transition dipole moment. When this transition is from a lower energy state to a higher energy state, this results in the absorption of a photon. A transition from a higher energy state to a lower energy state results in the emission of a photon. If the charge, formula_22, is omitted from the electric dipole operator during this calculation, one obtains formula_23 as used in oscillator strength.
Applications.
The transition dipole moment is useful for determining if transitions are allowed under the electric dipole interaction. For example, the transition from a bonding formula_24 orbital to an antibonding formula_25 orbital is allowed because the integral defining the transition dipole moment is nonzero. Such a transition occurs between an even and an odd orbital; the dipole operator, formula_26, is an odd function of formula_17, hence the integrand is an even function. The integral of an odd function over symmetric limits returns a value of zero, while for an even function this is not "necessarily" the case. This result is reflected in the parity selection rule for electric dipole transitions. The transition moment integral
formula_27
of an electronic transition within similar atomic orbitals, such as s-s or p-p, is forbidden due to the triple integral returning an ungerade (odd) product. Such transitions only redistribute electrons within the same orbital and will return a zero product. If the triple integral returns a gerade (even) product, the transition is allowed.
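The parity rule can be seen numerically in the same square-well toy model used above; states of equal parity about the box centre give a vanishing integral, while states of opposite parity do not (a sketch, not a treatment of real atomic orbitals):

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 20001)
psi = lambda n: np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)
moment = lambda m, n: np.trapz(psi(m) * x * psi(n), x)

print(moment(1, 2))  # opposite parity about L/2: nonzero (~ -0.180), allowed
print(moment(1, 3))  # same parity about L/2: ~0, transition forbidden
```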
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{d}_{nm}"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": " | \\psi_a \\rangle "
},
{
"math_id": 4,
"text": " | \\psi_b \\rangle "
},
{
"math_id": 5,
"text": " \\text{(t.d.m.)} "
},
{
"math_id": 6,
"text": " (\\text{t.d.m. } a \\rightarrow b) = \\langle \\psi_b | (q\\mathbf{r}) | \\psi_a \\rangle = q\\int \\psi_b^*(\\mathbf{r}) \\, \\mathbf{r} \\, \\psi_a(\\mathbf{r}) \\, d^3 \\mathbf{r}"
},
{
"math_id": 7,
"text": "\\int d^3 \\mathbf{r}"
},
{
"math_id": 8,
"text": " \\iiint dx \\, dy \\, dz"
},
{
"math_id": 9,
"text": " (\\text{x-component of t.d.m. } a \\rightarrow b) = \\langle \\psi_b | (qx) | \\psi_a \\rangle = q\\int \\psi_b^*(\\mathbf{r}) \\, x \\, \\psi_a(\\mathbf{r}) \\, d^3 \\mathbf{r}"
},
{
"math_id": 10,
"text": "\\begin{align}\n (\\text{t.d.m. } a \\rightarrow b) &= \\langle \\psi_b | (q_1\\mathbf{r}_1 + q_2\\mathbf{r}_2 + \\cdots) | \\psi_a \\rangle \\\\\n& = \\int \\psi_b^*(\\mathbf{r}_1, \\mathbf{r}_2, \\ldots) \\, (q_1\\mathbf{r}_1 + q_2\\mathbf{r}_2 + \\cdots) \\, \\psi_a(\\mathbf{r}_1, \\mathbf{r}_2, \\ldots) \\, d^3 \\mathbf{r}_1 \\, d^3 \\mathbf{r}_2 \\cdots\n\\end{align}"
},
{
"math_id": 11,
"text": "\\langle \\psi_a | \\mathbf{r} | \\psi_b \\rangle = \\frac{i \\hbar}{(E_b - E_a)m} \\langle \\psi_a | \\mathbf{p} | \\psi_b \\rangle "
},
{
"math_id": 12,
"text": "[H, x] = \\left[\\frac{p^2}{2m} + V(x,y,z), x\\right] = \\left[\\frac{p^2}{2m}, x\\right] = \\frac{1}{2m} (p_x[p_x,x] + [p_x,x]p_x) = \\frac{-i \\hbar p_x}{m}"
},
{
"math_id": 13,
"text": "\\langle \\psi_a | (Hx - xH) | \\psi_b \\rangle = \\frac{-i \\hbar}{m} \\langle \\psi_a | p_x | \\psi_b \\rangle "
},
{
"math_id": 14,
"text": "\\langle \\psi_a | (Hx - xH) | \\psi_b \\rangle = (\\langle \\psi_a | H) x | \\psi_b \\rangle - \\langle \\psi_a | x ( H | \\psi_b \\rangle) = (E_a - E_b) \\langle \\psi_a | x | \\psi_b \\rangle "
},
{
"math_id": 15,
"text": "+q"
},
{
"math_id": 16,
"text": "-q"
},
{
"math_id": 17,
"text": "\\mathbf{r}"
},
{
"math_id": 18,
"text": "\\mathbf{p} = q\\mathbf{r}."
},
{
"math_id": 19,
"text": "\\left|\\mathbf{\\tau}\\right| = \\left|q\\mathbf{r}\\right| \\left|\\mathbf{E}\\right|\\sin\\theta."
},
{
"math_id": 20,
"text": "\\omega"
},
{
"math_id": 21,
"text": "\\hbar\\omega"
},
{
"math_id": 22,
"text": "e"
},
{
"math_id": 23,
"text": "\\mathbf{R}_\\alpha"
},
{
"math_id": 24,
"text": "\\pi"
},
{
"math_id": 25,
"text": "\\pi^*"
},
{
"math_id": 26,
"text": "\\vec{\\mu}"
},
{
"math_id": 27,
"text": "\\int \\psi_1^* \\vec{\\mu} \\psi_2 d\\tau ,"
}
] | https://en.wikipedia.org/wiki?curid=9812205 |
9813143 | Semi-implicit Euler method | Modification of the Euler method for solving Hamilton's equations
In mathematics, the semi-implicit Euler method, also called symplectic Euler, semi-explicit Euler, Euler–Cromer, and Newton–Størmer–Verlet (NSV), is a modification of the Euler method for solving Hamilton's equations, a system of ordinary differential equations that arises in classical mechanics. It is a symplectic integrator and hence it yields better results than the standard Euler method.
Origin.
The method was discovered by accident in 1980 by Abby Aspel, a senior student at Newton North High School. In a lab assignment simulating orbits using Kepler's laws, which required computation in BASIC, she accidentally reversed two lines of code, calculating velocity before position. Her simulation converged more quickly, and gave more accurate and feasible results, than expected. Alan Cromer then proved why her algorithm was more stable than previous methods of computation.
Setting.
The semi-implicit Euler method can be applied to a pair of differential equations of the form
formula_0
where "f" and "g" are given functions. Here, "x" and "v" may be either scalars or vectors. The equations of motion in Hamiltonian mechanics take this form if the Hamiltonian is of the form
formula_1
The differential equations are to be solved with the initial condition
formula_2
The method.
The semi-implicit Euler method produces an approximate discrete solution by iterating
formula_3
where Δ"t" is the time step and "tn" = "t"0 + "n"Δ"t" is the time after "n" steps.
The difference with the standard Euler method is that the semi-implicit Euler method uses "v""n"+1 in the equation for "x""n"+1, while the Euler method uses "vn".
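A minimal sketch of this update rule in Python (the function name and argument layout are ours, not from any particular library) makes the ordering explicit: the velocity computed in the first line feeds directly into the position update.

```python
def semi_implicit_euler(f, g, x0, v0, t0, dt, n_steps):
    """Iterate v_{n+1} = v_n + g(t_n, x_n)*dt, then x_{n+1} = x_n + f(t_n, v_{n+1})*dt."""
    t, x, v = t0, x0, v0
    trajectory = [(t, x, v)]
    for _ in range(n_steps):
        v = v + g(t, x) * dt   # velocity update uses the old position x_n
        x = x + f(t, v) * dt   # position update uses the *new* velocity v_{n+1}
        t = t + dt
        trajectory.append((t, x, v))
    return trajectory
```

Swapping the two update lines inside the loop would recover the standard (fully explicit) Euler method.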
Applying the method with negative time step to the computation of formula_4 from formula_5 and rearranging leads to the second variant of the semi-implicit Euler method
formula_6
which has similar properties.
The semi-implicit Euler is a first-order integrator, just as the standard Euler method. This means that it commits a global error of the order of Δt. However, the semi-implicit Euler method is a symplectic integrator, unlike the standard method. As a consequence, the semi-implicit Euler method almost conserves the energy (when the Hamiltonian is time-independent). Often, the energy increases steadily when the standard Euler method is applied, making it far less accurate.
Alternating between the two variants of the semi-implicit Euler method leads in one simplification to the Störmer-Verlet integration and in a slightly different simplification to the leapfrog integration, increasing both the order of the error and the order of preservation of energy.
The stability region of the semi-implicit method was presented by Niiranen, although the semi-implicit Euler was misleadingly called symmetric Euler in his paper. The semi-implicit method models the simulated system correctly if the complex roots of the characteristic equation are within the circle shown below. For real roots the stability region extends outside the circle, the criterion being formula_7
As can be seen, the semi-implicit method can correctly simulate both stable systems, whose roots lie in the left half plane, and unstable systems, whose roots lie in the right half plane. This is a clear advantage over forward (standard) Euler and backward Euler. Forward Euler tends to have less damping than the real system when the negative real parts of the roots approach the imaginary axis, and backward Euler may show the system to be stable even when the roots are in the right half plane.
Example.
The motion of a spring satisfying Hooke's law is given by
formula_8
The semi-implicit Euler for this equation is
formula_9
Substituting formula_10 in the second equation with the expression given by the first equation, the iteration can be expressed in the following matrix form
formula_11
and since the determinant of the matrix is 1 the transformation is area-preserving.
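As a quick numerical check of this iteration (the frequency, step size, and step count below are illustrative assumptions), running the scheme for a long time shows the oscillation staying bounded rather than blowing up, consistent with the area-preserving map above:

```python
omega, dt = 1.0, 0.05          # illustrative parameters
x, v = 1.0, 0.0                # start at maximum displacement
for _ in range(100_000):
    v -= omega**2 * x * dt     # semi-implicit: update v first ...
    x += v * dt                # ... then advance x with the new v
print(0.5 * (v**2 + omega**2 * x**2))   # stays close to the initial value 0.5
```

The printed energy oscillates slightly around 0.5 but never drifts, because the modified energy functional described below is conserved exactly.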
The iteration preserves the modified energy functional formula_12 exactly, leading to stable periodic orbits (for sufficiently small step size) that deviate by formula_13 from the exact orbits. The exact circular frequency formula_14 increases in the numerical approximation by a factor of formula_15. | [
{
"math_id": 0,
"text": "\\begin{align}\n{dx \\over dt} &= f(t,v) \\\\\n{dv \\over dt} &= g(t,x),\n\\end{align}"
},
{
"math_id": 1,
"text": " H = T(t,v) + V(t,x). \\, "
},
{
"math_id": 2,
"text": " x(t_0) = x_0, \\qquad v(t_0) = v_0. "
},
{
"math_id": 3,
"text": "\\begin{align}\n v_{n+1} &= v_n + g(t_n, x_n) \\, \\Delta t\\\\[0.3em]\n x_{n+1} &= x_n + f(t_n, v_{n+1}) \\, \\Delta t\n\\end{align}"
},
{
"math_id": 4,
"text": "(x_n, v_n)"
},
{
"math_id": 5,
"text": "(x_{n+1}, v_{n+1})"
},
{
"math_id": 6,
"text": "\\begin{align}\n x_{n+1} &= x_n + f(t_n, v_n) \\, \\Delta t\\\\[0.3ex]\n v_{n+1} &= v_n + g(t_n, x_{n+1}) \\, \\Delta t\n\\end{align}"
},
{
"math_id": 7,
"text": "s > - 2/\\Delta t"
},
{
"math_id": 8,
"text": "\\begin{align}\n \\frac{dx}{dt} &= v(t)\\\\[0.2em]\n \\frac{dv}{dt} &= -\\frac{k}{m}\\,x=-\\omega^2\\,x.\n\\end{align}"
},
{
"math_id": 9,
"text": "\\begin{align}\n v_{n+1} &= v_n - \\omega^2\\,x_n\\,\\Delta t \\\\[0.2em]\n x_{n+1} &= x_n + v_{n+1} \\,\\Delta t.\n\\end{align}"
},
{
"math_id": 10,
"text": "v_{n+1}"
},
{
"math_id": 11,
"text": "\\begin{bmatrix} x_{n+1} \\\\v_{n+1}\\end{bmatrix} = \n\\begin{bmatrix} \n1-\\omega^2 \\Delta t^2 & \\Delta t \\\\ \n-\\omega^2 \\Delta t & 1 \n\\end{bmatrix} \\begin{bmatrix} x_{n} \\\\ v_{n} \\end{bmatrix},"
},
{
"math_id": 12,
"text": "E_h(x,v)=\\tfrac12\\left(v^2+\\omega^2\\,x^2-\\omega^2\\Delta t\\,vx\\right)"
},
{
"math_id": 13,
"text": "O(\\Delta t)"
},
{
"math_id": 14,
"text": "\\omega"
},
{
"math_id": 15,
"text": "1+\\tfrac1{24}\\omega^2\\Delta t^2+O(\\Delta t^4)"
}
] | https://en.wikipedia.org/wiki?curid=9813143 |
98132 | Radio wave | Type of electromagnetic radiation
Radio waves are a type of electromagnetic radiation with the lowest frequencies and the longest wavelengths in the electromagnetic spectrum, typically with frequencies below 300 gigahertz (GHz) and wavelengths greater than 1 millimeter, about the diameter of a grain of rice.
Like all electromagnetic waves, radio waves in a vacuum travel at the speed of light, and in the Earth's atmosphere at a slightly lower speed. Radio waves are generated by charged particles undergoing acceleration, such as time-varying electric currents. Naturally occurring radio waves are emitted by lightning and astronomical objects, and are part of the blackbody radiation emitted by all warm objects.
Radio waves are generated artificially by an electronic device called a transmitter, which is connected to an antenna, which radiates the waves. They are received by another antenna connected to a radio receiver, which processes the received signal. Radio waves are very widely used in modern technology for fixed and mobile radio communication, broadcasting, radar and radio navigation systems, communications satellites, wireless computer networks and many other applications. Different frequencies of radio waves have different propagation characteristics in the Earth's atmosphere; long waves can diffract around obstacles like mountains and follow the contour of the Earth (ground waves), shorter waves can reflect off the ionosphere and return to Earth beyond the horizon (skywaves), while much shorter wavelengths bend or diffract very little and travel on a line of sight, so their propagation distances are limited to the visual horizon.
To prevent interference between different users, the artificial generation and use of radio waves is strictly regulated by law, coordinated by an international body called the International Telecommunication Union (ITU), which defines radio waves as "electromagnetic waves of frequencies arbitrarily lower than 3,000 GHz, propagated in space without artificial guide". The radio spectrum is divided into a number of radio bands on the basis of frequency, allocated to different uses. Higher-frequency, shorter-wavelength radio waves are called microwaves.
Discovery and exploitation.
Radio waves were first predicted by the theory of electromagnetism that was proposed in 1867 by Scottish mathematical physicist James Clerk Maxwell. His mathematical theory, now called Maxwell's equations, predicted that a coupled electric and magnetic field could travel through space as an "electromagnetic wave". Maxwell proposed that light consisted of electromagnetic waves of very short wavelength. In 1887, German physicist Heinrich Hertz demonstrated the reality of Maxwell's electromagnetic waves by experimentally generating radio waves in his laboratory, showing that they exhibited the same wave properties as light: standing waves, refraction, diffraction, and polarization. Italian inventor Guglielmo Marconi developed the first practical radio transmitters and receivers around 1894–1895. He received the 1909 Nobel Prize in physics for his radio work. Radio communication began to be used commercially around 1900. The modern term "radio wave" replaced the original name "Hertzian wave" around 1912.
Generation and reception.
Radio waves are radiated by charged particles when they are accelerated. Natural sources of radio waves include radio noise produced by lightning and other natural processes in the Earth's atmosphere, and astronomical radio sources in space such as the Sun, galaxies and nebulas. All warm objects radiate high frequency radio waves (microwaves) as part of their black body radiation.
Radio waves are produced artificially by time-varying electric currents, consisting of electrons flowing back and forth in a specially shaped metal conductor called an antenna. An electronic device called a radio transmitter applies oscillating electric current to the antenna, and the antenna radiates the power as radio waves. Radio waves are received by another antenna attached to a radio receiver. When radio waves strike the receiving antenna they push the electrons in the metal back and forth, creating tiny oscillating currents which are detected by the receiver.
From quantum mechanics, like other electromagnetic radiation such as light, radio waves can alternatively be regarded as streams of uncharged elementary particles called "photons". In an antenna transmitting radio waves, the electrons in the antenna emit the energy in discrete packets called radio photons, while in a receiving antenna the electrons absorb the energy as radio photons. An antenna is a coherent emitter of photons, like a laser, so the radio photons are all in phase. However, from Planck's relation formula_0, the energy of individual radio photons is extremely small, from 10−22 to 10−30 joules. So the antenna of even a very low power transmitter emits an enormous number of photons every second. Therefore, except for certain molecular electron transition processes such as atoms in a maser emitting microwave photons, radio wave emission and absorption is usually regarded as a continuous classical process, governed by Maxwell's equations.
Properties.
Radio waves in vacuum travel at the speed of light formula_1. When passing through a material medium, they are slowed depending on the medium's permeability and permittivity. Air is tenuous enough that in the Earth's atmosphere radio waves travel at very nearly the speed of light.
The wavelength formula_2 is the distance from one peak (crest) of the wave's electric field to the next, and is inversely proportional to the frequency formula_3 of the wave. The relation of frequency and wavelength in a radio wave traveling in vacuum or air is
formula_4
where
formula_5
Equivalently, formula_1, the distance that a radio wave travels in vacuum in one second, is 299,792,458 meters (about 300,000 km), which is the wavelength of a 1 hertz radio signal. A 1 megahertz radio wave (mid-AM band) has a wavelength of about 300 meters (299.79 m).
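A small script (purely illustrative, not from the original) makes the relation concrete:

```python
c = 299_792_458.0               # speed of light in vacuum, m/s

def wavelength(frequency_hz):
    """Wavelength in meters of a radio wave in vacuum: lambda = c / f."""
    return c / frequency_hz

print(wavelength(1e6))          # 1 MHz (mid-AM band): about 299.79 m
print(wavelength(300e9))        # 300 GHz: about 1 mm, the conventional upper limit
```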
Polarization.
Like other electromagnetic waves, a radio wave has a property called polarization, which is defined as the direction of the wave's oscillating electric field perpendicular to the direction of motion. A plane-polarized radio wave has an electric field that oscillates in a plane perpendicular to the direction of motion. In a horizontally polarized radio wave the electric field oscillates in a horizontal direction. In a vertically polarized wave the electric field oscillates in a vertical direction. In a circularly polarized wave the electric field at any point rotates about the direction of travel, once per cycle. A right circularly polarized wave rotates in a right-hand sense about the direction of travel, while a left circularly polarized wave rotates in the opposite sense. The wave's magnetic field is perpendicular to the electric field, and the electric and magnetic field are oriented in a right-hand sense with respect to the direction of radiation.
An antenna emits polarized radio waves, with the polarization determined by the direction of the metal antenna elements. For example, a dipole antenna consists of two collinear metal rods. If the rods are horizontal, it radiates horizontally polarized radio waves, while if the rods are vertical, it radiates vertically polarized waves. An antenna receiving the radio waves must have the same polarization as the transmitting antenna, or it will suffer a severe loss of reception. Many natural sources of radio waves, such as the sun, stars and blackbody radiation from warm objects, emit unpolarized waves, consisting of incoherent short wave trains in an equal mixture of polarization states.
The polarization of radio waves is determined by a quantum mechanical property of the photons called their spin. A photon can have one of two possible values of spin; it can spin in a right-hand sense about its direction of motion, or in a left-hand sense. Right circularly polarized radio waves consist of photons spinning in a right hand sense. Left circularly polarized radio waves consist of photons spinning in a left hand sense. Plane polarized radio waves consist of photons in a quantum superposition of right and left hand spin states. The electric field consists of a superposition of right and left rotating fields, resulting in a plane oscillation.
Propagation characteristics.
Radio waves are more widely used for communication than other electromagnetic waves mainly because of their desirable propagation properties, stemming from their large wavelength. Radio waves have the ability to pass through the atmosphere in any weather, foliage, and through most building materials. By diffraction, longer wavelengths can bend around obstructions, and unlike other electromagnetic waves they tend to be scattered rather than absorbed by objects larger than their wavelength.
The study of radio propagation, how radio waves move in free space and over the surface of the Earth, is vitally important in the design of practical radio systems. Radio waves passing through different environments experience reflection, refraction, polarization, diffraction, and absorption. Different frequencies experience different combinations of these phenomena in the Earth's atmosphere, making certain radio bands more useful for specific purposes than others. Practical radio systems mainly use three different techniques of radio propagation to communicate: ground waves, in which lower-frequency waves follow the contour of the Earth; skywaves, in which shorter waves reflect off the ionosphere and return to Earth beyond the horizon; and line-of-sight propagation, in which waves travel in a straight line from the transmitting antenna to the receiving antenna.
At microwave frequencies, atmospheric gases begin absorbing radio waves, so the range of practical radio communication systems decreases with increasing frequency. Below about 20 GHz atmospheric attenuation is mainly due to water vapor. Above 20 GHz, in the millimeter wave band, other atmospheric gases begin to absorb the waves, limiting practical transmission distances to a kilometer or less. Above 300 GHz, in the terahertz band, virtually all the power is absorbed within a few meters, so the atmosphere is effectively opaque.
Radio communication.
In radio communication systems, information is transported across space using radio waves. At the sending end, the information to be sent, in the form of a time-varying electrical signal, is applied to a radio transmitter. The information, called the modulation signal, can be an audio signal representing sound from a microphone, a video signal representing moving images from a video camera, or a digital signal representing data from a computer. In the transmitter, an electronic oscillator generates an alternating current oscillating at a radio frequency, called the "carrier wave" because it creates the radio waves that "carry" the information through the air. The information signal is used to modulate the carrier, altering some aspect of it, encoding the information on the carrier. The modulated carrier is amplified and applied to an antenna. The oscillating current pushes the electrons in the antenna back and forth, creating oscillating electric and magnetic fields, which radiate the energy away from the antenna as radio waves. The radio waves carry the information to the receiver location.
At the receiver, the oscillating electric and magnetic fields of the incoming radio wave push the electrons in the receiving antenna back and forth, creating a tiny oscillating voltage which is a weaker replica of the current in the transmitting antenna. This voltage is applied to the radio receiver, which extracts the information signal. The receiver first uses a bandpass filter to separate the desired radio station's radio signal from all the other radio signals picked up by the antenna, then amplifies the signal so it is stronger, then finally extracts the information-bearing modulation signal in a demodulator. The recovered signal is sent to a loudspeaker or earphone to produce sound, or a television display screen to produce a visible image, or other devices. A digital data signal is applied to a computer or microprocessor, which interacts with a human user.
The radio waves from many transmitters pass through the air simultaneously without interfering with each other. They can be separated in the receiver because each transmitter's radio waves oscillate at a different rate, in other words each transmitter has a different frequency, measured in kilohertz (kHz), megahertz (MHz) or gigahertz (GHz). The bandpass filter in the receiver consists of one or more tuned circuits which act like a resonator, similarly to a tuning fork. The tuned circuit has a natural resonant frequency at which it oscillates. The resonant frequency is set equal to the frequency of the desired radio station. The oscillating radio signal from the desired station causes the tuned circuit to oscillate in sympathy, and it passes the signal on to the rest of the receiver. Radio signals at other frequencies are blocked by the tuned circuit and not passed on.
Biological and environmental effects.
Radio waves are "non-ionizing radiation", which means they do not have enough energy to separate electrons from atoms or molecules, ionizing them, or break chemical bonds, causing chemical reactions or DNA damage. The main effect of absorption of radio waves by materials is to heat them, similarly to the infrared waves radiated by sources of heat such as a space heater or wood fire. The oscillating electric field of the wave causes polar molecules to vibrate back and forth, increasing the temperature; this is how a microwave oven cooks food. Radio waves have been applied to the body for 100 years in the medical therapy of diathermy for deep heating of body tissue, to promote increased blood flow and healing. More recently they have been used to create higher temperatures in hyperthermia therapy and to kill cancer cells.
However, unlike infrared waves, which are mainly absorbed at the surface of objects and cause surface heating, radio waves are able to penetrate the surface and deposit their energy inside materials and biological tissues. The depth to which radio waves penetrate decreases with their frequency, and also depends on the material's resistivity and permittivity; it is given by a parameter called the "skin depth" of the material, which is the depth within which 63% of the energy is deposited. For example, the 2.45 GHz radio waves (microwaves) in a microwave oven penetrate most foods approximately 2.5 to 3.8 cm.
Looking into a source of radio waves at close range, such as the waveguide of a working radio transmitter, can cause damage to the lens of the eye by heating. A strong enough beam of radio waves can penetrate the eye and heat the lens enough to cause cataracts.
Since the heating effect is in principle no different from other sources of heat, most research into possible health hazards of exposure to radio waves has focused on "nonthermal" effects; whether radio waves have any effect on tissues besides that caused by heating. Radiofrequency electromagnetic fields have been classified by the International Agency for Research on Cancer (IARC) as having "limited evidence" for their effects on humans and animals. There is weak mechanistic evidence of cancer risk via personal exposure to RF-EMF from mobile telephones.
Radio waves can be shielded against by a conductive metal sheet or screen; an enclosure of sheet or screen is called a Faraday cage. A metal screen shields against radio waves as well as a solid sheet does, as long as the holes in the screen are smaller than about 1⁄20 of the wavelength of the waves.
Measurement.
Since radio frequency radiation has both an electric and a magnetic component, it is often convenient to express intensity of radiation field in terms of units specific to each component. The unit "volts per meter" (V/m) is used for the electric component, and the unit "amperes per meter" (A/m) is used for the magnetic component. One can speak of an electromagnetic field, and these units are used to provide information about the levels of electric and magnetic field strength at a measurement location.
Another commonly used unit for characterizing an RF electromagnetic field is "power density". Power density is most accurately used when the point of measurement is far enough away from the RF emitter to be located in what is referred to as the far field zone of the radiation pattern. In closer proximity to the transmitter, i.e., in the "near field" zone, the physical relationships between the electric and magnetic components of the field can be complex, and it is best to use the field strength units discussed above. Power density is measured in terms of power per unit area, for example, with the unit milliwatt per square centimeter (mW/cm2). When speaking of frequencies in the microwave range and higher, power density is usually used to express intensity since exposures that might occur would likely be in the far field zone.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E = h\\nu"
},
{
"math_id": 1,
"text": "c"
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "\\lambda = \\frac{\\;c \\;}{f}~,"
},
{
"math_id": 5,
"text": " c \\approx 2.9979 \\times 10^8 \\text{ m/s}~."
}
] | https://en.wikipedia.org/wiki?curid=98132 |
9815 | Évariste Galois | French mathematician (1811–1832)
Évariste Galois (; ; 25 October 1811 – 31 May 1832) was a French mathematician and political activist. While still in his teens, he was able to determine a necessary and sufficient condition for a polynomial to be solvable by radicals, thereby solving a problem that had been open for 350 years. His work laid the foundations for Galois theory and group theory, two major branches of abstract algebra.
Galois was a staunch republican and was heavily involved in the political turmoil that surrounded the French Revolution of 1830. As a result of his political activism, he was arrested repeatedly, serving one jail sentence of several months. For reasons that remain obscure, shortly after his release from prison, Galois fought in a duel and died of the wounds he suffered.
Life.
Early life.
Galois was born on 25 October 1811 to Nicolas-Gabriel Galois and Adélaïde-Marie (née Demante). His father was a Republican and was head of Bourg-la-Reine's liberal party. His father became mayor of the village after Louis XVIII returned to the throne in 1814. His mother, the daughter of a jurist, was a fluent reader of Latin and classical literature and was responsible for her son's education for his first twelve years.
In October 1823, he entered the Lycée Louis-le-Grand where his teacher Louis Paul Émile Richard recognized his brilliance. At the age of 14, he began to take a serious interest in mathematics.
Galois found a copy of Adrien-Marie Legendre's "Éléments de Géométrie", which, it is said, he read "like a novel" and mastered at the first reading. At 15, he was reading the original papers of Joseph-Louis Lagrange, such as the "Réflexions sur la résolution algébrique des équations" which likely motivated his later work on equation theory, and "Leçons sur le calcul des fonctions", work intended for professional mathematicians, yet his classwork remained uninspired and his teachers accused him of putting on the airs of a genius.
Budding mathematician.
In 1828, Galois attempted the entrance examination for the École Polytechnique, the most prestigious institution for mathematics in France at the time, without the usual preparation in mathematics, and failed for lack of explanations on the oral examination. In that same year, he entered the École Normale (then known as l'École préparatoire), a far inferior institution for mathematical studies at that time, where he found some professors sympathetic to him.
In the following year Galois's first paper, on continued fractions, was published.
On 28 July 1829, Galois's father died by suicide after a bitter political dispute with the village priest. A couple of days later, Galois made his second and last attempt to enter the Polytechnique and failed yet again. It is undisputed that Galois was more than qualified; accounts differ on why he failed. More plausible accounts state that Galois made too many logical leaps and baffled the incompetent examiner, which enraged Galois. The recent death of his father may have also influenced his behavior.
Having been denied admission to the École polytechnique, Galois took the Baccalaureate examinations in order to enter the École normale. He passed, receiving his degree on 29 December 1829. His examiner in mathematics reported, "This pupil is sometimes obscure in expressing his ideas, but he is intelligent and shows a remarkable spirit of research."
Galois submitted his memoir on equation theory several times, but it was never published in his lifetime. Though his first attempt was refused by Cauchy, in February 1830 following Cauchy's suggestion he submitted it to the academy's secretary Joseph Fourier, to be considered for the Grand Prix of the academy. Unfortunately, Fourier died soon after, and the memoir was lost. The prize would be awarded that year to Niels Henrik Abel posthumously and also to Carl Gustav Jacob Jacobi. Despite the lost memoir, Galois published three papers that year. One laid the foundations for Galois theory. The second was about the numerical resolution of equations (root finding in modern terminology). The third was an important one in number theory, in which the concept of a finite field was first articulated.
Political firebrand.
Galois lived during a time of political turmoil in France. Charles X had succeeded Louis XVIII in 1824, but in 1827 his party suffered a major electoral setback and by 1830 the opposition liberal party became the majority. Charles, faced with political opposition from the chambers, staged a coup d'état, and issued his notorious July Ordinances, touching off the July Revolution which ended with Louis Philippe becoming king. While their counterparts at the Polytechnique were making history in the streets, Galois, at the École Normale, was locked in by the school's director. Galois was incensed and wrote a blistering letter criticizing the director, which he submitted to the "Gazette des Écoles", signing the letter with his full name. Although the "Gazette"'s editor omitted the signature for publication, Galois was expelled.
Although his expulsion would have formally taken effect on 4 January 1831, Galois quit school immediately and joined the staunchly Republican artillery unit of the National Guard. He divided his time between his mathematical work and his political affiliations. Due to controversy surrounding the unit, soon after Galois became a member, on 31 December 1830, the artillery of the National Guard was disbanded out of fear that they might destabilize the government. At around the same time, nineteen officers of Galois's former unit were arrested and charged with conspiracy to overthrow the government.
In April 1831, the officers were acquitted of all charges, and on 9 May 1831, a banquet was held in their honor, with many illustrious people present, such as Alexandre Dumas. The proceedings grew riotous. At some point, Galois stood and proposed a toast in which he said, "To Louis Philippe," with a dagger above his cup. The republicans at the banquet interpreted Galois's toast as a threat against the king's life and cheered. He was arrested the following day at his mother's house and held in detention at Sainte-Pélagie prison until 15 June 1831, when he had his trial. Galois's defense lawyer cleverly claimed that Galois actually said, "To Louis-Philippe, "if he betrays"," but that the qualifier was drowned out in the cheers. The prosecutor asked a few more questions, and perhaps influenced by Galois's youth, the jury acquitted him that same day.
On the following Bastille Day (14 July 1831), Galois was at the head of a protest, wearing the uniform of the disbanded artillery, and came heavily armed with several pistols, a loaded rifle, and a dagger. He was again arrested. During his stay in prison, Galois at one point drank alcohol for the first time at the goading of his fellow inmates. One of these inmates, François-Vincent Raspail, recorded what Galois said while drunk in a letter from 25 July. Excerpted from the letter:
<templatestyles src="Template:Blockquote/styles.css" />And I tell you, I will die in a duel on the occasion of some coquette de bas étage. Why? Because she will invite me to avenge her honor which another has compromised.<br>
Do you know what I lack, my friend? I can confide it only to you: it is someone whom I can love and love only in spirit. I've lost my father and no one has ever replaced him, do you hear me...?
Raspail continues that Galois, still in a delirium, attempted suicide, and that he would have succeeded if his fellow inmates had not forcibly stopped him. Months later, when Galois's trial occurred on 23 October, he was sentenced to six months in prison for illegally wearing a uniform. While in prison, he continued to develop his mathematical ideas. He was released on 29 April 1832.
Final days.
Galois returned to mathematics after his expulsion from the École Normale, although he continued to spend time in political activities. After his expulsion became official in January 1831, he attempted to start a private class in advanced algebra which attracted some interest, but this waned, as it seemed that his political activism had priority. Siméon Denis Poisson asked him to submit his work on the theory of equations, which he did on 17 January 1831. Around 4 July 1831, Poisson declared Galois's work "incomprehensible", declaring that "[Galois's] argument is neither sufficiently clear nor sufficiently developed to allow us to judge its rigor"; however, the rejection report ends on an encouraging note: "We would then suggest that the author should publish the whole of his work in order to form a definitive opinion."
While Poisson's report was made before Galois's 14 July arrest, it took until October to reach Galois in prison. It is unsurprising, in the light of his character and situation at the time, that Galois reacted violently to the rejection letter, and decided to abandon publishing his papers through the academy and instead publish them privately through his friend Auguste Chevalier. Apparently, however, Galois did not ignore Poisson's advice, as he began collecting all his mathematical manuscripts while still in prison, and continued polishing his ideas until his release on 29 April 1832, after which he was somehow talked into a duel.
Galois's fatal duel took place on 30 May. The true motives behind the duel are obscure. There has been much speculation about them. What is known is that, five days before his death, he wrote a letter to Chevalier which clearly alludes to a broken love affair.
Some archival investigation on the original letters suggests that the woman of romantic interest was Stéphanie-Félicie Poterin du Motel, the daughter of the physician at the hostel where Galois stayed during the last months of his life. Fragments of letters from her, copied by Galois himself (with many portions, such as her name, either obliterated or deliberately omitted), are available. The letters hint that Poterin du Motel had confided some of her troubles to Galois, and this might have prompted him to provoke the duel himself on her behalf. This conjecture is also supported by other letters Galois later wrote to his friends the night before he died. Galois's cousin, Gabriel Demante, when asked if he knew the cause of the duel, mentioned that Galois "found himself in the presence of a supposed uncle and a supposed fiancé, each of whom provoked the duel." Galois himself exclaimed: "I am the victim of an infamous coquette and her two dupes."
As to his opponent in the duel, Alexandre Dumas names Pescheux d'Herbinville, who was actually one of the nineteen artillery officers whose acquittal was celebrated at the banquet that occasioned Galois's first arrest. However, Dumas is alone in this assertion, and if he were correct it is unclear why d'Herbinville would have been involved. It has been speculated that he was Poterin du Motel's "supposed fiancé" at the time (she ultimately married someone else), but no clear evidence has been found supporting this conjecture. On the other hand, extant newspaper clippings from only a few days after the duel give a description of his opponent (identified by the initials "L.D.") that appear to more accurately apply to one of Galois's Republican friends, most probably Ernest Duchatelet, who was imprisoned with Galois on the same charges. Given the conflicting information available, the true identity of his killer may well be lost to history.
Whatever the reasons behind the duel, Galois was so convinced of his impending death that he stayed up all night writing letters to his Republican friends and composing what would become his mathematical testament, the famous letter to Auguste Chevalier outlining his ideas, and three attached manuscripts. Mathematician Hermann Weyl said of this testament, "This letter, if judged by the novelty and profundity of ideas it contains, is perhaps the most substantial piece of writing in the whole literature of mankind." However, the legend of Galois pouring his mathematical thoughts onto paper the night before he died seems to have been exaggerated. In these final papers, he outlined the rough edges of some work he had been doing in analysis and annotated a copy of the manuscript submitted to the academy and other papers.
Early in the morning of 30 May 1832, he was shot in the abdomen, was abandoned by his opponents and his own seconds, and was found by a passing farmer. He died the following morning at ten o'clock in the Hôpital Cochin (probably of peritonitis), after refusing the offices of a priest. His funeral ended in riots. There were plans to initiate an uprising during his funeral, but during the same time the leaders heard of General Jean Maximilien Lamarque's death and the rising was postponed without any uprising occurring until 5 June. Only Galois's younger brother was notified of the events prior to Galois's death. Galois was 20 years old. His last words to his younger brother Alfred were:
<templatestyles src="Template:Blockquote/styles.css" />"Ne pleure pas, Alfred ! J'ai besoin de tout mon courage pour mourir à vingt ans !"(Don't weep, Alfred! I need all my courage to die at twenty!)
On 2 June, Évariste Galois was buried in a common grave of the Montparnasse Cemetery whose exact location is unknown. In the cemetery of his native town – Bourg-la-Reine – a cenotaph in his honour was erected beside the graves of his relatives.
Évariste Galois died in 1832. Joseph Liouville began studying Galois's unpublished papers in 1842 and acknowledged their value in 1843. It is not clear what happened in the 10 years between 1832 and 1842 nor what eventually inspired Joseph Liouville to begin reading Galois's papers. Jesper Lützen explores this subject at some length in Chapter XIV "Galois Theory" of his book about Joseph Liouville without reaching any definitive conclusions.
It is certainly possible that mathematicians (including Liouville) did not want to publicize Galois's papers because Galois was a republican political activist who died 5 days before the June Rebellion, an unsuccessful anti-monarchist insurrection of Parisian republicans. In Galois's obituary, his friend Auguste Chevalier almost accused academicians at the École Polytechnique of having killed Galois since, if they had not rejected his work, he would have become a mathematician and would not have devoted himself to the republican political activism for which some believed he was killed.
Given that France was still living in the shadow of the Reign of Terror and the Napoleonic era, Liouville might have waited until the June Rebellion's political turmoil subsided before turning his attention to Galois's papers.
Liouville finally published Galois's manuscripts in the October–November 1846 issue of the "Journal de Mathématiques Pures et Appliquées". Galois's most famous contribution was a novel proof that there is no quintic formula – that is, that fifth and higher degree equations are not generally solvable by radicals. Although Niels Henrik Abel had already proved the impossibility of a "quintic formula" by radicals in 1824 and Paolo Ruffini had published a proof in 1799 that turned out to be incomplete, Galois's methods led to deeper research into what is now called Galois Theory, which can be used to determine, for "any" polynomial equation, whether it has a solution by radicals.
Contributions to mathematics.
From the closing lines of a letter from Galois to his friend Auguste Chevalier, dated 29 May 1832, two days before Galois's death:
<templatestyles src="Template:Blockquote/styles.css" />"Tu prieras publiquement Jacobi ou Gauss de donner leur avis, non sur la vérité, mais sur l'importance des théorèmes." (You will publicly ask Jacobi or Gauss to give their opinion, not on the truth, but on the importance of these theorems.)
"Après cela, il y aura, j'espère, des gens qui trouveront leur profit à déchiffrer tout ce gâchis." (After that, there will be, I hope, people who will find it to their advantage to decipher all this mess.)
Within the 60 or so pages of Galois's collected works are many important ideas that have had far-reaching consequences for nearly all branches of mathematics.
His work has been compared to that of Niels Henrik Abel (1802–1829), a contemporary mathematician who also died at a very young age, and much of their work had significant overlap.
Algebra.
While many mathematicians before Galois gave consideration to what are now known as groups, it was Galois who was the first to use the word "group" (in French "groupe") in a sense close to the technical sense that is understood today, making him among the founders of the branch of algebra known as group theory. He called the decomposition of a group into its left and right cosets a "proper decomposition" if the left and right cosets coincide, which is what today is known as a normal subgroup. He also introduced the concept of a finite field (also known as a Galois field in his honor) in essentially the same form as it is understood today.
In his last letter to Chevalier and attached manuscripts, the second of three, he made basic studies of linear groups over finite fields:
Galois theory.
Galois's most significant contribution to mathematics is his development of Galois theory. He realized that the algebraic solution to a polynomial equation is related to the structure of a group of permutations associated with the roots of the polynomial, the Galois group of the polynomial. He found that an equation could be solved in radicals if one can find a series of subgroups of its Galois group, each one normal in its successor with abelian quotient, that is, its Galois group is solvable. This proved to be a fertile approach, which later mathematicians adapted to many other fields of mathematics besides the theory of equations to which Galois originally applied it.
Analysis.
Galois also made some contributions to the theory of Abelian integrals and continued fractions.
As written in his last letter, Galois passed from the study of elliptic functions to consideration of the integrals of the most general algebraic differentials, today called Abelian integrals. He classified these integrals into three categories.
Continued fractions.
In his first paper in 1828, Galois proved that the regular continued fraction which represents a quadratic surd "ζ" is purely periodic if and only if "ζ" is a reduced surd, that is, formula_0 and its conjugate formula_1 satisfies formula_2.
In fact, Galois showed more than this. He also proved that if "ζ" is a reduced quadratic surd and "η" is its conjugate, then the continued fractions for "ζ" and for (−1/"η") are both purely periodic, and the repeating block in one of those continued fractions is the mirror image of the repeating block in the other. In symbols we have
formula_3
where "ζ" is any reduced quadratic surd, and "η" is its conjugate.
From these two theorems of Galois a result already known to Lagrange can be deduced. If "r" > 1 is a rational number that is not a perfect square, then
formula_4
In particular, if "n" is any non-square positive integer, the regular continued fraction expansion of √"n" contains a repeating block of length "m", in which the first "m" − 1 partial denominators form a palindromic string.
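A short sketch (the function name and the choice n = 14 are our illustrative assumptions) computes the repeating block of the continued fraction of √n using the standard recurrence, making the palindrome and the final term 2"a"0 visible:

```python
from math import isqrt

def sqrt_continued_fraction(n):
    """Return (a0, repeating block) of the continued fraction of sqrt(n), n not a square."""
    a0 = isqrt(n)
    if a0 * a0 == n:
        raise ValueError("n must not be a perfect square")
    m, d, a = 0, 1, a0
    block = []
    while a != 2 * a0:           # the repeating block always ends with the term 2*a0
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        block.append(a)
    return a0, block

print(sqrt_continued_fraction(14))   # (3, [1, 2, 1, 6]): 1, 2, 1 is a palindrome, 6 = 2*3
```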
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\zeta > 1"
},
{
"math_id": 1,
"text": "\\eta"
},
{
"math_id": 2,
"text": "-1 < \\eta < 0"
},
{
"math_id": 3,
"text": "\n\\begin{align}\n\\zeta& = [\\,\\overline{a_0;a_1,a_2,\\dots,a_{m-1}}\\,]\\\\[3pt]\n\\frac{-1}{\\eta}& = [\\,\\overline{a_{m-1};a_{m-2},a_{m-3},\\dots,a_0}\\,]\\,\n\\end{align}\n"
},
{
"math_id": 4,
"text": "\n\\sqrt{r} = \\left[\\,a_0;\\overline{a_1,a_2,\\dots,a_2,a_1,2a_0}\\,\\right].\n"
}
] | https://en.wikipedia.org/wiki?curid=9815 |
9815338 | Modular multiplicative inverse | Concept in modular arithmetic
In mathematics, particularly in the area of arithmetic, a modular multiplicative inverse of an integer a is an integer x such that the product ax is congruent to 1 with respect to the modulus m. In the standard notation of modular arithmetic this congruence is written as
formula_0
which is the shorthand way of writing the statement that m divides (evenly) the quantity "ax" − 1, or, put another way, the remainder after dividing ax by the integer m is 1. If a does have an inverse modulo m, then there are an infinite number of solutions of this congruence, which form a congruence class with respect to this modulus. Furthermore, any integer that is congruent to a (i.e., in a's congruence class) has any element of x's congruence class as a modular multiplicative inverse. Using the notation of formula_1 to indicate the congruence class containing w, this can be expressed by saying that the "modular multiplicative inverse" of the congruence class formula_2 is the congruence class formula_3 such that:
formula_4
where the symbol formula_5 denotes the multiplication of equivalence classes modulo m.
Written in this way, the analogy with the usual concept of a multiplicative inverse in the set of rational or real numbers is clearly represented, replacing the numbers by congruence classes and altering the binary operation appropriately.
As with the analogous operation on the real numbers, a fundamental use of this operation is in solving, when possible, linear congruences of the form
formula_6
Finding modular multiplicative inverses also has practical applications in the field of cryptography, e.g. public-key cryptography and the RSA algorithm. A benefit for the computer implementation of these applications is that there exists a very fast algorithm (the extended Euclidean algorithm) that can be used for the calculation of modular multiplicative inverses.
Modular arithmetic.
For a given positive integer m, two integers, a and b, are said to be congruent modulo m if m divides their difference. This binary relation is denoted by,
formula_7
This is an equivalence relation on the set of integers, formula_8, and the equivalence classes are called congruence classes modulo m or residue classes modulo m. Let formula_2 denote the congruence class containing the integer a, then
formula_9
A linear congruence is a modular congruence of the form
formula_6
Unlike linear equations over the reals, linear congruences may have zero, one or several solutions. If x is a solution of a linear congruence then every element in formula_3 is also a solution, so, when speaking of the number of solutions of a linear congruence we are referring to the number of different congruence classes that contain solutions.
If d is the greatest common divisor of a and m then the linear congruence "ax" ≡ "b" (mod "m") has solutions if and only if d divides b. If d divides b, then there are exactly d solutions.
A modular multiplicative inverse of an integer a with respect to the modulus m is a solution of the linear congruence
formula_10
The previous result says that a solution exists if and only if gcd("a", "m") = 1, that is, a and m must be relatively prime (i.e. coprime). Furthermore, when this condition holds, there is exactly one solution, i.e., when it exists, a modular multiplicative inverse is unique: If b and b' are both modular multiplicative inverses of a respect to the modulus m, then
formula_11
therefore
formula_12
If "a" ≡ 0 (mod "m"), then gcd("a", "m") = "a", and a won't even have a modular multiplicative inverse. Therefore, "b ≡ b"' (mod "m").
When "ax" ≡ 1 (mod "m") has a solution it is often denoted in this way −
formula_13
but this can be considered an abuse of notation since it could be misinterpreted as the reciprocal of formula_14 (which, contrary to the modular multiplicative inverse, is not an integer except when a is 1 or -1). The notation would be proper if a is interpreted as a token standing for the congruence class formula_2, as the multiplicative inverse of a congruence class is a congruence class with the multiplication defined in the next section.
Integers modulo m.
The congruence relation, modulo m, partitions the set of integers into m congruence classes. Operations of addition and multiplication can be defined on these m objects in the following way: To either add or multiply two congruence classes, first pick a representative (in any way) from each class, then perform the usual operation for integers on the two representatives and finally take the congruence class that the result of the integer operation lies in as the result of the operation on the congruence classes. In symbols, with formula_15 and formula_5 representing the operations on congruence classes, these definitions are
formula_16
and
formula_17
These operations are well-defined, meaning that the end result does not depend on the choices of representatives that were made to obtain the result.
The m congruence classes with these two defined operations form a ring, called the ring of integers modulo m. There are several notations used for these algebraic objects, most often formula_18 or formula_19, but several elementary texts and application areas use a simplified notation formula_20 when confusion with other algebraic objects is unlikely.
The congruence classes of the integers modulo m were traditionally known as "residue classes modulo m", reflecting the fact that all the elements of a congruence class have the same remainder (i.e., "residue") upon being divided by m. Any set of m integers selected so that each comes from a different congruence class modulo m is called a complete system of residues modulo m. The division algorithm shows that the set of integers, {0, 1, 2, ..., "m" − 1}, forms a complete system of residues modulo m, known as the least residue system modulo m. In working with arithmetic problems it is sometimes more convenient to work with a complete system of residues and use the language of congruences while at other times the point of view of the congruence classes of the ring formula_18 is more useful.
Multiplicative group of integers modulo m.
Not every element of a complete residue system modulo m has a modular multiplicative inverse, for instance, zero never does. After removing the elements of a complete residue system that are not relatively prime to m, what is left is called a reduced residue system, all of whose elements have modular multiplicative inverses. The number of elements in a reduced residue system is formula_21, where formula_22 is the Euler totient function, i.e., the number of positive integers less than m that are relatively prime to m.
In a general ring with unity not every element has a multiplicative inverse and those that do are called units. As the product of two units is a unit, the units of a ring form a group, the group of units of the ring and often denoted by "R"× if R is the name of the ring. The group of units of the ring of integers modulo m is called the multiplicative group of integers modulo m, and it is isomorphic to a reduced residue system. In particular, it has order (size), formula_21.
In the case that m is a prime, say p, then formula_23 and all the non-zero elements of formula_24 have multiplicative inverses, thus formula_24 is a finite field. In this case, the multiplicative group of integers modulo p form a cyclic group of order "p" − 1.
Example.
For any integer formula_25, it is always the case that formula_26 is the modular multiplicative inverse of formula_27 with respect to the modulus formula_28, since formula_29. Examples are formula_30, formula_31, formula_32, and so on.
The following example uses the modulus 10: Two integers are congruent mod 10 if and only if their difference is divisible by 10, for instance
formula_33 since 10 divides 32 − 2 = 30, and
formula_34 since 10 divides 111 − 1 = 110.
Some of the ten congruence classes with respect to this modulus are:
formula_35
formula_36
formula_37 and
formula_38
The linear congruence 4"x" ≡ 5 (mod 10) has no solutions since the integers that are congruent to 5 (i.e., those in formula_39) are all odd while 4"x" is always even. However, the linear congruence 4"x" ≡ 6 (mod 10) has two solutions, namely, "x" = 4 and "x" = 9. The gcd(4, 10) = 2 and 2 does not divide 5, but does divide 6.
Since gcd(3, 10) = 1, the linear congruence 3"x" ≡ 1 (mod 10) will have solutions, that is, modular multiplicative inverses of 3 modulo 10 will exist. In fact, 7 satisfies this congruence (i.e., 21 − 1 = 20). However, other integers also satisfy the congruence, for instance 17 and −3 (i.e., 3(17) − 1 = 50 and 3(−3) − 1 = −10). In particular, every integer in formula_40 will satisfy the congruence since these integers have the form 7 + 10"r" for some integer r and
formula_41
is divisible by 10. This congruence has only this one congruence class of solutions. The solution in this case could have been obtained by checking all possible cases, but systematic algorithms would be needed for larger moduli and these will be given in the next section.
The product of congruence classes formula_39 and formula_42 can be obtained by selecting an element of formula_39, say 25, and an element of formula_42, say −2, and observing that their product (25)(−2) = −50 is in the congruence class formula_43. Thus, formula_44. Addition is defined in a similar way. The ten congruence classes together with these operations of addition and multiplication of congruence classes form the ring of integers modulo 10, i.e., formula_45.
A complete residue system modulo 10 can be the set {10, −9, 2, 13, 24, −15, 26, 37, 8, 9} where each integer is in a different congruence class modulo 10. The unique least residue system modulo 10 is {0, 1, 2, ..., 9}. A reduced residue system modulo 10 could be {1, 3, 7, 9}. The product of any two congruence classes represented by these numbers is again one of these four congruence classes. This implies that these four congruence classes form a group, in this case the cyclic group of order four, having either 3 or 7 as a (multiplicative) generator. The represented congruence classes form the group of units of the ring formula_45. These congruence classes are precisely the ones which have modular multiplicative inverses.
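For a modulus this small, the units and their inverses can be found by brute force; the sketch below (purely illustrative) reproduces the reduced residue system {1, 3, 7, 9} and pairs each element with its inverse:

```python
from math import gcd

m = 10
units = [a for a in range(m) if gcd(a, m) == 1]     # reduced residue system
inverses = {a: next(x for x in range(m) if a * x % m == 1) for a in units}
print(units)      # [1, 3, 7, 9]
print(inverses)   # {1: 1, 3: 7, 7: 3, 9: 9}
```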
Computation.
Extended Euclidean algorithm.
A modular multiplicative inverse of a modulo m can be found by using the extended Euclidean algorithm.
The Euclidean algorithm determines the greatest common divisor (gcd) of two integers, say a and m. If a has a multiplicative inverse modulo m, this gcd must be 1. The last of several equations produced by the algorithm may be solved for this gcd. Then, using a method called "back substitution", an expression connecting the original parameters and this gcd can be obtained. In other words, integers x and y can be found to satisfy Bézout's identity,
formula_46
Rewritten, this is
formula_47
that is,
formula_0
so, a modular multiplicative inverse of a has been calculated. A more efficient version of the algorithm is the extended Euclidean algorithm, which, by using auxiliary equations, reduces two passes through the algorithm (back substitution can be thought of as passing through the algorithm in reverse) to just one.
In big O notation, this algorithm runs in time O(log2("m")), assuming |"a"| < "m", and is considered to be very fast and generally more efficient than its alternative, exponentiation.
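A sketch of the single-pass extended Euclidean computation in Python (the function name is ours, and this is one standard way to arrange the bookkeeping, not the only one):

```python
def modinv(a, m):
    """Inverse of a modulo m via the iterative extended Euclidean algorithm."""
    old_r, r = a % m, m          # remainder sequence
    old_s, s = 1, 0              # Bezout coefficients of a
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:               # gcd(a, m) must be 1 for an inverse to exist
        raise ValueError("a is not invertible modulo m")
    return old_s % m

print(modinv(3, 10))             # 7, since 3 * 7 = 21 ≡ 1 (mod 10)
```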
Using Euler's theorem.
As an alternative to the extended Euclidean algorithm, Euler's theorem may be used to compute modular inverses.
According to Euler's theorem, if a is coprime to m, that is, gcd("a", "m") = 1, then
formula_48
where formula_22 is Euler's totient function. This follows from the fact that a belongs to the multiplicative group formula_49× if and only if a is coprime to m. Therefore, a modular multiplicative inverse can be found directly:
formula_50
In the special case where m is a prime, formula_51 and a modular inverse is given by
formula_52
This method is generally slower than the extended Euclidean algorithm, but is sometimes used when an implementation for modular exponentiation is already available. Some disadvantages of this method include: the value of formula_53 must be known, which in general requires knowing the factorization of m; and the relative cost of exponentiation, which, even when implemented with efficient modular exponentiation, exceeds that of a single run of the extended Euclidean algorithm.
One notable "advantage" of this technique is that there are no conditional branches which depend on the value of a, and thus the value of a, which may be an important secret in public-key cryptography, can be protected from side-channel attacks. For this reason, the standard implementation of Curve25519 uses this technique to compute an inverse.
Multiple inverses.
It is possible to compute the inverse of multiple numbers ai, modulo a common m, with a single invocation of the Euclidean algorithm and three multiplications per additional input. The basic idea is to form the product of all the ai, invert that, then multiply by aj for all "j" ≠ "i" to leave only the desired "a".
More specifically, the algorithm is (all arithmetic performed modulo m): first compute the prefix products formula_54 for all "i" up to n; then compute the inverse of the final product "b""n" using any available algorithm; then, for "i" running from n down to 2, obtain the inverse of "a""i" as the product of the inverse of "b""i" with "b""i"−1, and the inverse of "b""i"−1 as the product of the inverse of "b""i" with "a""i"; finally, the inverse of "a"1 is the inverse of "b"1.
It is possible to perform the multiplications in a tree structure rather than linearly to exploit parallel computing.
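A linear (non-tree) version of this batch inversion might look as follows in Python; the function name is ours, and pow(..., -1, m) (available in Python 3.8+) stands in for "any available inversion algorithm":

```python
def batch_modinv(nums, m):
    """Invert every element of nums modulo m with a single modular inversion."""
    n = len(nums)
    prefix = [0] * n                      # prefix[i] = nums[0] * ... * nums[i] mod m
    prefix[0] = nums[0] % m
    for i in range(1, n):
        prefix[i] = prefix[i - 1] * nums[i] % m
    inv = pow(prefix[-1], -1, m)          # the one expensive inversion
    out = [0] * n
    for i in range(n - 1, 0, -1):
        out[i] = inv * prefix[i - 1] % m  # inverse of nums[i]
        inv = inv * nums[i] % m           # now the inverse of prefix[i-1]
    out[0] = inv
    return out

print(batch_modinv([3, 7, 9], 10))        # [7, 3, 9]
```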
Applications.
Finding a modular multiplicative inverse has many applications in algorithms that rely on the theory of modular arithmetic. For instance, in cryptography the use of modular arithmetic permits some operations to be carried out more quickly and with fewer storage requirements, while other operations become more difficult. Both of these features can be used to advantage. In particular, in the RSA algorithm, encrypting and decrypting a message is done using a pair of numbers that are multiplicative inverses with respect to a carefully selected modulus. One of these numbers is made public and can be used in a rapid encryption procedure, while the other, used in the decryption procedure, is kept hidden. Determining the hidden number from the public number is considered to be computationally infeasible and this is what makes the system work to ensure privacy.
As another example in a different context, consider the exact division problem in computer science where you have a list of odd word-sized numbers each divisible by "k" and you wish to divide them all by "k". One solution is as follows: use the extended Euclidean algorithm to compute the modular multiplicative inverse of "k" modulo 2"w", where "w" is the number of bits in a word (this inverse exists because the numbers, and hence "k", are odd); then, for each number in the list, multiply it by this inverse and keep only the least significant word of the result.
On many machines, particularly those without hardware support for division, division is a slower operation than multiplication, so this approach can yield a considerable speedup. The first step is relatively slow but only needs to be done once.
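A sketch under the assumption of 32-bit words (the word size, divisor, and sample numbers are illustrative):

```python
W = 2**32                     # assumed word size: 32 bits
k = 7                         # odd divisor
k_inv = pow(k, -1, W)         # the one-time slow step: inverse of k mod 2^32
for n in (21, 693, 7 * 12_345):
    print(n * k_inv % W)      # 3, 99, 12345 -- exact quotients without dividing
```

This works because each n equals k times some quotient q that fits in a word, so multiplying by the inverse of k modulo 2"w" recovers q exactly.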
Modular multiplicative inverses are used to obtain a solution of a system of linear congruences that is guaranteed by the Chinese Remainder Theorem.
For example, the system
X ≡ 4 (mod 5)
X ≡ 4 (mod 7)
X ≡ 6 (mod 11)
has common solutions since 5, 7, and 11 are pairwise coprime. A solution is given by
X = "t"1 (7 × 11) × 4 + "t"2 (5 × 11) × 4 + "t"3 (5 × 7) × 6
where
"t"1 = 3 is the modular multiplicative inverse of 7 × 11 (mod 5),
"t"2 = 6 is the modular multiplicative inverse of 5 × 11 (mod 7) and
"t"3 = 6 is the modular multiplicative inverse of 5 × 7 (mod 11).
Thus,
X = 3 × (7 × 11) × 4 + 6 × (5 × 11) × 4 + 6 × (5 × 7) × 6 = 3504
and in its unique reduced form
X ≡ 3504 ≡ 39 (mod 385)
since 385 is the LCM of 5, 7 and 11.
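A short C check of this computation, for illustration:

#include <stdio.h>

int main(void)
{
    int x = 3504 % 385; // reduce modulo the LCM of the moduli
    printf("x = %d\n", x);                      // prints x = 39
    printf("%d %d %d\n", x % 5, x % 7, x % 11); // prints 4 4 6
    return 0;
}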
Also, the modular multiplicative inverse figures prominently in the definition of the Kloosterman sum.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "ax \\equiv 1 \\pmod{m},"
},
{
"math_id": 1,
"text": "\\overline{w}"
},
{
"math_id": 2,
"text": "\\overline{a}"
},
{
"math_id": 3,
"text": "\\overline{x}"
},
{
"math_id": 4,
"text": "\\overline{a} \\cdot_m \\overline{x} = \\overline{1},"
},
{
"math_id": 5,
"text": "\\cdot_m"
},
{
"math_id": 6,
"text": "ax \\equiv b \\pmod{m}."
},
{
"math_id": 7,
"text": "a \\equiv b \\pmod{m}."
},
{
"math_id": 8,
"text": "\\mathbb{Z}"
},
{
"math_id": 9,
"text": "\\overline{a} = \\{b \\in \\mathbb{Z} \\mid a \\equiv b \\pmod{m} \\}."
},
{
"math_id": 10,
"text": "ax \\equiv 1 \\pmod{m}."
},
{
"math_id": 11,
"text": "ab \\equiv ab' \\equiv 1 \\pmod{m} ,"
},
{
"math_id": 12,
"text": "a(b-b') \\equiv 0 \\pmod{m}."
},
{
"math_id": 13,
"text": "x \\equiv a^{-1} \\pmod{m},"
},
{
"math_id": 14,
"text": "a"
},
{
"math_id": 15,
"text": "+_m"
},
{
"math_id": 16,
"text": "\\overline{a} +_m \\overline{b} = \\overline{a + b}"
},
{
"math_id": 17,
"text": "\\overline{a} \\cdot_m \\overline{b} = \\overline{ab}."
},
{
"math_id": 18,
"text": "\\mathbb{Z}/m\\mathbb{Z}"
},
{
"math_id": 19,
"text": "\\mathbb{Z}/m"
},
{
"math_id": 20,
"text": "\\mathbb{Z}_m"
},
{
"math_id": 21,
"text": "\\phi(m)"
},
{
"math_id": 22,
"text": "\\phi"
},
{
"math_id": 23,
"text": "\\phi(p) = p-1"
},
{
"math_id": 24,
"text": "\\mathbb{Z}/p\\mathbb{Z}"
},
{
"math_id": 25,
"text": "n>1"
},
{
"math_id": 26,
"text": "n^2-n+1"
},
{
"math_id": 27,
"text": "n+1"
},
{
"math_id": 28,
"text": "n^2"
},
{
"math_id": 29,
"text": "(n+1)(n^2-n+1)=n^3+1"
},
{
"math_id": 30,
"text": "3\\times3 \\equiv 1 \\pmod{4}"
},
{
"math_id": 31,
"text": "4\\times7 \\equiv 1 \\pmod{9}"
},
{
"math_id": 32,
"text": "5\\times13 \\equiv 1 \\pmod{16}"
},
{
"math_id": 33,
"text": "32 \\equiv 2 \\pmod{10}"
},
{
"math_id": 34,
"text": "111 \\equiv 1 \\pmod{10}"
},
{
"math_id": 35,
"text": "\\overline{0} = \\{ \\cdots, -20, -10, 0, 10, 20, \\cdots \\}"
},
{
"math_id": 36,
"text": "\\overline{1} = \\{ \\cdots, -19, -9, 1, 11, 21, \\cdots \\}"
},
{
"math_id": 37,
"text": "\\overline{5} = \\{ \\cdots, -15, -5, 5, 15, 25, \\cdots \\}"
},
{
"math_id": 38,
"text": "\\overline{9} = \\{ \\cdots, -11, -1, 9, 19, 29, \\cdots \\}."
},
{
"math_id": 39,
"text": "\\overline{5}"
},
{
"math_id": 40,
"text": "\\overline{7}"
},
{
"math_id": 41,
"text": "3(7 + 10 r) - 1 = 21 + 30 r -1 = 20 + 30 r = 10(2 + 3r), "
},
{
"math_id": 42,
"text": "\\overline{8}"
},
{
"math_id": 43,
"text": "\\overline{0}"
},
{
"math_id": 44,
"text": "\\overline{5} \\cdot_{10} \\overline{8} = \\overline{0}"
},
{
"math_id": 45,
"text": "\\mathbb{Z}/10\\mathbb{Z}"
},
{
"math_id": 46,
"text": "ax + my = \\gcd(a, m)= 1."
},
{
"math_id": 47,
"text": "ax - 1 = (-y)m,"
},
{
"math_id": 48,
"text": "a^{\\phi(m)} \\equiv 1 \\pmod{m},"
},
{
"math_id": 49,
"text": "(\\mathbb{Z}/m\\mathbb{Z})"
},
{
"math_id": 50,
"text": "a^{\\phi(m)-1} \\equiv a^{-1} \\pmod{m}."
},
{
"math_id": 51,
"text": "\\phi (m) = m - 1"
},
{
"math_id": 52,
"text": "a^{-1} \\equiv a^{m-2} \\pmod{m}."
},
{
"math_id": 53,
"text": "\\phi (m)"
},
{
"math_id": 54,
"text": "b_i = \\prod_{j=1}^i a_j = a_i b_{i-1}"
}
] | https://en.wikipedia.org/wiki?curid=9815338 |
981643 | Pulse-repetition frequency | Number of pulses of a repeating signal
The pulse-repetition frequency (PRF) is the number of pulses of a repeating signal in a specific time unit. The term is used within a number of technical disciplines, notably radar.
In radar, a radio signal of a particular carrier frequency is turned on and off; the term "frequency" refers to the carrier, while the PRF refers to the number of switches. Both are measured in terms of cycles per second, or hertz. The PRF is normally much lower than the frequency. For instance, a typical World War II radar like the Type 7 GCI radar had a basic carrier frequency of 209 MHz (209 million cycles per second) and a PRF of 300 or 500 pulses per second. A related measure is the pulse width, the amount of time the transmitter is turned on during each pulse.
After producing a brief pulse of radio signal, the transmitter is turned off in order for the receiver units to detect the reflections of that signal off distant targets. Since the radio signal has to travel out to the target and back again, the required inter-pulse quiet period is a function of the radar's desired range. Longer periods are required for longer range signals, requiring lower PRFs. Conversely, higher PRFs produce shorter maximum ranges, but broadcast more pulses, and thus radio energy, in a given time. This creates stronger reflections that make detection easier. Radar systems must balance these two competing requirements.
Using older electronics, PRFs were generally fixed to a specific value, or might be switched among a limited set of possible values. This gives each radar system a characteristic PRF, which can be used in electronic warfare to identify the type or class of a particular platform such as a ship or aircraft, or in some cases, a particular unit. Radar warning receivers in aircraft include a library of common PRFs which can identify not only the type of radar, but in some cases the mode of operation. This allowed pilots to be warned when an SA-2 SAM battery had "locked on", for instance. Modern radar systems are generally able to smoothly change their PRF, pulse width and carrier frequency, making identification much more difficult.
Sonar and lidar systems also have PRFs, as does any pulsed system. In the case of sonar, the term pulse-repetition rate (PRR) is more common, although it refers to the same concept.
Introduction.
Electromagnetic (e.g. radio or light) waves are conceptually pure single-frequency phenomena, while pulses may be mathematically thought of as composed of a number of pure frequencies that sum and nullify in interactions to create a pulse train of specific amplitudes, PRRs, base frequencies, phase characteristics, et cetera (see Fourier analysis). The former term (PRF) is more common in device technical literature (electrical engineering and some sciences), and the latter (PRR) is more commonly used in military-aerospace terminology (especially United States armed forces terminologies) and equipment specifications such as training and technical manuals for radar and sonar systems.
The reciprocal of PRF (or PRR) is called the "pulse-repetition time" ("PRT"), "pulse-repetition interval" ("PRI"), or "inter-pulse period" ("IPP"), which is the elapsed time from the beginning of one pulse to the beginning of the next pulse. The IPP term is normally used when referring to the quantity of PRT periods to be processed digitally. Each PRT has a fixed number of range gates, but not all of them are used. For example, the APY-1 radar used 128 IPPs with a fixed set of 50 range gates, producing 128 Doppler filters using an FFT, with the number of range gates actually used on each of the five PRFs being less than 50.
Within radar technology PRF is important since it determines the maximum target range ("R"max) and maximum Doppler velocity ("V"max) that can be accurately determined by the radar. Conversely, a high PRR/PRF can enhance target discrimination of nearer objects, such as a periscope or fast moving missile. This leads to use of low PRRs for search radar, and very high PRFs for fire control radars. Many dual-purpose and navigation radars—especially naval designs with variable PRRs—allow a skilled operator to adjust PRR to enhance and clarify the radar picture—for example in bad sea states where wave action generates false returns, and in general for less clutter, or perhaps a better return signal off a prominent landscape feature (e.g., a cliff).
Definition.
Pulse-repetition frequency (PRF) is the number of times a pulsed activity occurs every second.
This is similar to cycle per second used to describe other types of waveforms.
PRF is inversely proportional to time period formula_0 which is the property of a pulsed wave.
formula_1
PRF is usually associated with pulse spacing, which is the distance that the pulse travels before the next pulse occurs.
formula_2
Physics.
PRF is crucial to perform measurements for certain physics phenomenon.
For example, a tachometer may use a strobe light with an adjustable PRF to measure rotational velocity. The PRF for the strobe light is adjusted upward from a low value until the rotating object appears to stand still. The PRF of the tachometer would then match the speed of the rotating object.
Other types of measurements involve distance using the delay time for reflected echo pulses from light, microwaves, and sound transmissions.
Measurement.
PRF is crucial for systems and devices that measure distance.
Different PRF allow systems to perform very different functions.
A radar system uses a radio frequency electromagnetic signal reflected from a target to determine information about that target.
PRF is required for radar operation. This is the rate at which transmitter pulses are sent into air or space.
Range ambiguity.
A radar system determines range through the time delay between pulse transmission and reception by the relation:
formula_3
For accurate range determination a pulse must be transmitted and reflected before the next pulse is transmitted. This gives rise to the maximum unambiguous range limit:
formula_4
The maximum range also defines a range ambiguity for all detected targets. Because of the periodic nature of pulsed radar systems, it is impossible for some radar systems to determine the difference between targets separated by integer multiples of the maximum range using a single PRF. More sophisticated radar systems avoid this problem through the use of multiple PRFs, either simultaneously on different frequencies or on a single frequency with a changing PRT.
The range ambiguity resolution process is used to identify true range when PRF is above this limit.
Low PRF.
Systems using PRF below 3 kHz are considered low PRF because direct range can be measured to a distance of at least 50 km. Radar systems using low PRF typically produce unambiguous range.
Unambiguous Doppler processing becomes an increasing challenge due to coherency limitations as PRF falls below 3 kHz.
For example, an L-Band radar with 500 Hz pulse rate produces ambiguous velocity above 75 m/s (170 mile/hour), while detecting true range up to 300 km. This combination is appropriate for civilian aircraft radar and weather radar.
formula_5
formula_6
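The two relations above are straightforward to evaluate; for illustration, a C sketch reproducing these numbers for a 1 GHz (L-band) carrier:

#include <stdio.h>

#define C 3.0e8 // speed of light, m/s

int main(void)
{
    double prf = 500.0;     // pulse repetition frequency, Hz
    double carrier = 1.0e9; // L-band carrier frequency, Hz

    double max_range = C / (2.0 * prf);              // unambiguous range, m
    double max_velocity = prf * C / (2.0 * carrier); // unambiguous velocity, m/s

    printf("range %.0f km, velocity %.0f m/s\n",
           max_range / 1000.0, max_velocity); // prints: range 300 km, velocity 75 m/s
    return 0;
}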
Low PRF radar has reduced sensitivity in the presence of low-velocity clutter that interferes with aircraft detection near terrain. Moving target indicator is generally required for acceptable performance near terrain, but this introduces radar scalloping issues that complicate the receiver. Low PRF radar intended for aircraft and spacecraft detection is heavily degraded by weather phenomena, which cannot be compensated for using moving target indicator.
Medium PRF.
Range and velocity can both be identified using medium PRF, but neither one can be identified directly. Medium PRF is from 3 kHz to 30 kHz, which corresponds with radar range from 5 km to 50 km. This is the ambiguous range, which is much smaller than the maximum range. Range ambiguity resolution is used to determine true range in medium PRF radar.
Medium PRF is used with Pulse-Doppler radar, which is required for look-down/shoot-down capability in military systems. Doppler radar return is generally not ambiguous until velocity exceeds the speed of sound.
A technique called ambiguity resolution is required to identify true range and speed. Doppler signals fall between 1.5 kHz and 15 kHz, which is audible, so audio signals from medium-PRF radar systems can be used for passive target classification.
For example, an L band radar system using a PRF of 10 kHz with a duty cycle of 3.3% can identify true range to a distance of 450 km (C / (0.033 × 2 × 10,000)). This is the instrumented range. Unambiguous velocity is 1,500 m/s (3,300 mile/hour).
formula_7
formula_8
The unambiguous velocity of an L-Band radar using a PRF of 10 kHz would be 1,500 m/s (3,300 mile/hour) (10,000 x C / (2 x 10^9)). True velocity can be found for objects moving under 45,000 m/s if the band pass filter admits the signal (1,500/0.033).
Medium PRF has unique radar scalloping issues that require redundant detection schemes.
High PRF.
Systems using a PRF above 30 kHz are better known as interrupted continuous-wave (ICW) radar, because direct velocity can be measured up to 4.5 km/s at L band, but range resolution becomes more difficult.
High PRF is limited to systems that require close-in performance, like proximity fuses and law enforcement radar.
For example, if 30 samples are taken during the quiescent phase between transmit pulses using a 30 kHz PRF, then true range can be determined to a maximum of 150 km using 1 microsecond samples (30 × C / (2 × 30,000)). Reflectors beyond this range might be detectable, but the true range cannot be identified.
formula_9
formula_10
It becomes increasingly difficult to take multiple samples between transmit pulses at these pulse frequencies, so range measurements are limited to short distances.
Sonar.
Sonar systems operate much like radar, except that the medium is liquid or air, and the frequency of the signal is either audio or ultrasonic. Like radar, lower frequencies propagate relatively higher energies over longer distances with less resolving ability. Higher frequencies, which damp out faster, provide increased resolution of nearby objects.
Signals propagate at the speed of sound in the medium (almost always water), and maximum PRF depends upon the size of the object being examined. For example, the speed of sound in water is 1,497 m/s, and the human body is about 0.5 m thick, so the PRF for ultrasound images of the human body should be less than about 1.5 kHz (1,497 / (2 × 0.5), allowing for the round trip).
As another example, ocean depth is approximately 2 km, so sound takes nearly three seconds for the 4 km round trip to the sea floor and back. Sonar is a very slow technology with very low PRF for this reason.
Laser.
Light waves can be used as radar frequencies, in which case the system is known as lidar. This is short for "LIght Detection And Ranging", similar to the original meaning of the initialism "RADAR", which was RAdio Detection And Ranging. Both have since become commonly used English words, and are therefore acronyms rather than initialisms.
Laser range or other light signal frequency range finders operate just like radar at much higher frequencies. Non-laser light detection is utilized extensively in automated machine control systems (e.g. electric eyes controlling a garage door, conveyor sorting gates, etc.), and those that use pulse-rate detection and ranging are, at heart, the same type of system as a radar, without the bells and whistles of the human interface.
Unlike lower radio signal frequencies, light does not bend around the curve of the earth or reflect off the ionosphere like C-band search radar signals, and so lidar is useful only in line of sight applications like higher frequency radar systems.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\Tau "
},
{
"math_id": 1,
"text": " \\Tau = \\frac{1}{\\text{PRF}}"
},
{
"math_id": 2,
"text": " \\text{Pulse Spacing} = \\frac{\\text{Propagation Speed}}{\\text{PRF}}"
},
{
"math_id": 3,
"text": "\\text{Range} = \\frac{c\\tau}{2}"
},
{
"math_id": 4,
"text": "\\text{Max Range} = \\frac{c\\tau_\\text{PRT}}{2} = \\frac{c}{2\\,\\text{PRF}} \\qquad \\begin{cases} \\tau_\\text{PRT} = \\frac{1}{\\text{PRF}} \\end{cases}"
},
{
"math_id": 5,
"text": "\\text{300 km range} = \\frac{C}{2 \\times 500}"
},
{
"math_id": 6,
"text": "\\text{75 m/s velocity} = \\frac{500 \\times C}{2 \\times 10^9}"
},
{
"math_id": 7,
"text": "\\text{450 km} = \\frac{C}{0.033 \\times 2 \\times 10,000}"
},
{
"math_id": 8,
"text": "\\text{1,500 m/s} = \\frac{10,000 \\times C}{2 \\times 10^9}"
},
{
"math_id": 9,
"text": "\\text{150 km} = \\frac{30 \\times C}{2 \\times 30,000}"
},
{
"math_id": 10,
"text": "\\text{4,500 m/s} = \\frac{30,000 \\times C}{2 \\times 10^9}"
}
] | https://en.wikipedia.org/wiki?curid=981643 |
981655 | Integer square root | Greatest integer less than or equal to square root
In number theory, the integer square root (isqrt) of a non-negative integer n is the non-negative integer m which is the greatest integer less than or equal to the square root of n,
formula_0
For example, formula_1
Introductory remark.
Let formula_2 and formula_3 be non-negative integers.
Algorithms that compute (the decimal representation of) formula_4 run forever on each input formula_2 which is not a perfect square.
Algorithms that compute formula_5 do not run forever. They are nevertheless capable of computing formula_4 up to any desired accuracy formula_3.
Choose any formula_3 and compute formula_6.
For example (setting formula_7):
formula_8
Compare the results with formula_9
It appears that the multiplication of the input by formula_10 gives an accuracy of k decimal digits.
To compute the (entire) decimal representation of formula_4, one can execute formula_11 an infinite number of times, increasing formula_2 by a factor formula_12 at each pass.
Assume that in the next program (formula_13) the procedure formula_11 is already defined and — for the sake of the argument — that all variables can hold integers of unlimited magnitude.
Then formula_14 will print the entire decimal representation of formula_4.
#include <stdio.h>
#include <stdbool.h>

// Print sqrt(y), without halting
void sqrtForever(unsigned int y)
{
    unsigned int result = isqrt(y);
    printf("%d.", result); // print result, followed by a decimal point
    while (true) // repeat forever ...
    {
        y = y * 100; // theoretical example: overflow is ignored
        result = isqrt(y);
        printf("%d", result % 10); // print last digit of result
    }
}
The conclusion is that algorithms which compute formula_5 are computationally equivalent to algorithms which compute formula_4.
Basic algorithms.
The integer square root of a non-negative integer formula_2 can be defined as
formula_15
For example, formula_16 because formula_17.
Algorithm using linear search.
The following C programs are straightforward implementations.
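The first, an ascending linear search using multiplication, can be sketched as follows (a minimal reconstruction, in the style of the addition-based variant below):

// Integer square root
// (linear search, ascending) using multiplication
unsigned int isqrt(unsigned int y)
{
    unsigned int L = 0;
    while ((L + 1) * (L + 1) <= y)
        L = L + 1;
    return L;
}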
Linear search using addition.
In the program above (linear search, ascending) one can replace multiplication by addition, using the equivalence
formula_18
// Integer square root
// (linear search, ascending) using addition
unsigned int isqrt(unsigned int y)
{
    unsigned int L = 0;
    unsigned int a = 1;
    unsigned int d = 3;
    while (a <= y)
    {
        a = a + d; // (a + 1) ^ 2
        d = d + 2;
        L = L + 1;
    }
    return L;
}
Algorithm using binary search.
Linear search sequentially checks every value until it hits the smallest formula_19 where formula_20.
A speed-up is achieved by using binary search instead. The following C-program is an implementation.
// Integer square root (using binary search)
unsigned int isqrt(unsigned int y)
{
    unsigned int L = 0;
    unsigned int M;
    unsigned int R = y + 1;
    while (L != R - 1)
    {
        M = (L + R) / 2;
        if (M * M <= y)
            L = M;
        else
            R = M;
    }
    return L;
}
Numerical example
For example, if one computes formula_21 using binary search, one obtains the formula_22 sequence
formula_23
This computation takes 21 iteration steps, whereas linear search (ascending, starting from formula_24) needs 1414 steps.
Algorithm using Newton's method.
One way of calculating formula_25 and formula_26 is to use Heron's method, which is a special case of Newton's method, to find a solution for the equation formula_27, giving the iterative formula
formula_28
The sequence formula_29 converges quadratically to formula_25 as formula_30.
Stopping criterion.
One can prove that formula_31 is the largest possible number for which the stopping criterion
formula_32
ensures formula_33 in the algorithm above.
In implementations which use number formats that cannot represent all rational numbers exactly (for example, floating point), a stopping constant less than 1 should be used to protect against round-off errors.
Domain of computation.
Although formula_25 is irrational for many formula_34, the sequence formula_29 contains only rational terms when formula_35 is rational. Thus, with this method it is unnecessary to exit the field of rational numbers in order to calculate formula_26, a fact which has some theoretical advantages.
Using only integer division.
For computing formula_36 for very large integers "n", one can use the quotient of Euclidean division for both of the division operations. This has the advantage of only using integers for each intermediate value, thus making the use of floating point representations of large numbers unnecessary. It is equivalent to using the iterative formula
formula_37
By using the fact that
formula_38
one can show that this will reach formula_36 within a finite number of iterations.
In the original version, one has formula_39 for formula_40, and formula_41 for formula_42. So in the integer version, one has formula_43 and formula_44 until the final solution formula_45 is reached. For the final solution formula_45, one has formula_46 and formula_47, so the stopping criterion is formula_48.
However, formula_36 is not necessarily a fixed point of the above iterative formula. Indeed, it can be shown that formula_36 is a fixed point if and only if formula_49 is not a perfect square. If formula_49 is a perfect square, the sequence ends up in a period-two cycle between formula_36 and formula_50 instead of converging.
Example implementation in C.
// Square root of integer
unsigned int int_sqrt(unsigned int s)
{
    // Zero yields zero
    // One yields one
    if (s <= 1)
        return s;

    // Initial estimate (must be too high)
    unsigned int x0 = s / 2;

    // Update
    unsigned int x1 = (x0 + s / x0) / 2;

    while (x1 < x0) // Bound check
    {
        x0 = x1;
        x1 = (x0 + s / x0) / 2;
    }
    return x0;
}
Numerical example.
For example, if one computes the integer square root of 2000000 using the algorithm above, one obtains the sequence
formula_51
In total 13 iteration steps are needed. Although Heron's method converges quadratically close to the solution, less than one bit of precision per iteration is gained at the beginning. This means that the choice of the initial estimate is critical for the performance of the algorithm.
When a fast computation for the integer part of the binary logarithm or for the bit-length is available (like e.g. codice_0 in C++20), it is better to start at
formula_52
which is the least power of two bigger than formula_53. In the example of the integer square root of 2000000, formula_54, formula_55, and the resulting sequence is
formula_56
In this case only four iteration steps are needed.
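A C sketch combining this initial estimate with the integer Newton iteration above, assuming a 32-bit unsigned int and the GCC/Clang builtin __builtin_clz for the leading-zero count:

// Newton's method with x0 = 2^(floor(log2(s)) / 2 + 1),
// the least power of two exceeding sqrt(s).
unsigned int int_sqrt_fast(unsigned int s)
{
    if (s <= 1)
        return s;

    int log2s = 31 - __builtin_clz(s);       // floor(log2(s)), needs s != 0
    unsigned int x0 = 1u << (log2s / 2 + 1); // initial estimate, always too high

    unsigned int x1 = (x0 + s / x0) / 2;
    while (x1 < x0) {
        x0 = x1;
        x1 = (x0 + s / x0) / 2;
    }
    return x0;
}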
Digit-by-digit algorithm.
The traditional pen-and-paper algorithm for computing the square root formula_25 is based on working from higher digit places to lower, choosing at each step the largest new digit that still yields a square formula_57. If stopping after the one's place, the result computed will be the integer square root.
Using bitwise operations.
If working in base 2, the choice of digit is simplified to that between 0 (the "small candidate") and 1 (the "large candidate"), and digit manipulations can be expressed in terms of binary shift operations. With codice_1 being multiplication, codice_2 being left shift, and codice_3 being logical right shift, a recursive algorithm to find the integer square root of any natural number is:
def integer_sqrt(n: int) -> int:
    assert n >= 0, "sqrt works for only non-negative inputs"
    if n < 2:
        return n

    # Recursive call:
    small_cand = integer_sqrt(n >> 2) << 1
    large_cand = small_cand + 1
    if large_cand * large_cand > n:
        return small_cand
    else:
        return large_cand
An equivalent iterative version avoids the recursion:

def integer_sqrt_iter(n: int) -> int:
    assert n >= 0, "sqrt works for only non-negative inputs"
    if n < 2:
        return n

    # Find the shift amount. See also find first set,
    # shift = ceil(log2(n) * 0.5) * 2 = ceil(ffs(n) * 0.5) * 2
    shift = 2
    while (n >> shift) != 0:
        shift += 2

    # Unroll the bit-setting loop.
    result = 0
    while shift >= 0:
        result = result << 1
        large_cand = (
            result + 1
        )  # Same as result ^ 1 (xor), because the last bit is always 0.
        if large_cand * large_cand <= n >> shift:
            result = large_cand
        shift -= 2

    return result
Traditional pen-and-paper presentations of the digit-by-digit algorithm include various optimizations not present in the code above, in particular the trick of pre-subtracting the square of the previous digits, which makes a general multiplication step unnecessary. See the article on methods of computing square roots for an example.
In programming languages.
Some programming languages dedicate an explicit operation to the integer square root calculation in addition to the general case or can be extended by libraries to this end.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
External links.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\operatorname{isqrt}(n) = \\lfloor \\sqrt n \\rfloor."
},
{
"math_id": 1,
"text": "\\operatorname{isqrt}(27) = \\lfloor \\sqrt{27} \\rfloor = \\lfloor 5.19615242270663 ... \\rfloor = 5."
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "\\sqrt y"
},
{
"math_id": 5,
"text": "\\lfloor \\sqrt y \\rfloor"
},
{
"math_id": 6,
"text": "\\lfloor \\sqrt {y \\times 100^k} \\rfloor"
},
{
"math_id": 7,
"text": "y = 2"
},
{
"math_id": 8,
"text": "\\begin{align}\n& k = 0: \\lfloor \\sqrt {2 \\times 100^{0}} \\rfloor = \\lfloor \\sqrt {2} \\rfloor = 1 \\\\\n& k = 1: \\lfloor \\sqrt {2 \\times 100^{1}} \\rfloor = \\lfloor \\sqrt {200} \\rfloor = 14 \\\\\n& k = 2: \\lfloor \\sqrt {2 \\times 100^{2}} \\rfloor = \\lfloor \\sqrt {20000} \\rfloor = 141 \\\\\n& k = 3: \\lfloor \\sqrt {2 \\times 100^{3}} \\rfloor = \\lfloor \\sqrt {2000000} \\rfloor = 1414 \\\\\n& \\vdots \\\\\n& k = 8: \\lfloor \\sqrt {2 \\times 100^{8}} \\rfloor = \\lfloor \\sqrt {20000000000000000} \\rfloor = 141421356 \\\\\n& \\vdots \\\\\n\\end{align}"
},
{
"math_id": 9,
"text": "\\sqrt {2} = 1.41421356237309504880168872420969807856967187537694 ..."
},
{
"math_id": 10,
"text": "100^k"
},
{
"math_id": 11,
"text": "\\operatorname{isqrt}(y)"
},
{
"math_id": 12,
"text": "100"
},
{
"math_id": 13,
"text": "\\operatorname{sqrtForever}"
},
{
"math_id": 14,
"text": "\\operatorname{sqrtForever}(y)"
},
{
"math_id": 15,
"text": "\\lfloor \\sqrt y \\rfloor = x : x^2 \\leq y <(x+1)^2, x \\in \\mathbb{N}"
},
{
"math_id": 16,
"text": "\\operatorname{isqrt}(27) = \\lfloor \\sqrt{27} \\rfloor = 5"
},
{
"math_id": 17,
"text": "6^2 > 27 \\text{ and } 5^2 \\ngtr 27"
},
{
"math_id": 18,
"text": "(L+1)^2 = L^2 + 2L + 1 = L^2 + 1 + \\sum_{i=1}^L 2."
},
{
"math_id": 19,
"text": "x"
},
{
"math_id": 20,
"text": "x^2 > y"
},
{
"math_id": 21,
"text": "\\operatorname{isqrt}(2000000)"
},
{
"math_id": 22,
"text": "[L,R]"
},
{
"math_id": 23,
"text": "\\begin{align}\n& [0,2000001] \\rightarrow [0,1000000] \\rightarrow [0,500000] \\rightarrow [0,250000] \\rightarrow [0,125000] \\rightarrow [0,62500] \\rightarrow [0,31250] \\rightarrow [0,15625] \\\\\n& \\rightarrow [0,7812] \\rightarrow [0,3906] \\rightarrow [0,1953] \\rightarrow [976,1953] \\rightarrow [976,1464] \\rightarrow [1220,1464] \\rightarrow [1342,1464] \\rightarrow [1403,1464] \\\\\n& \\rightarrow [1403,1433] \\rightarrow [1403,1418] \\rightarrow [1410,1418] \\rightarrow [1414,1418] \\rightarrow [1414,1416] \\rightarrow [1414,1415]\n\\end{align}"
},
{
"math_id": 24,
"text": "0"
},
{
"math_id": 25,
"text": "\\sqrt{n}"
},
{
"math_id": 26,
"text": "\\operatorname{isqrt}(n)"
},
{
"math_id": 27,
"text": "x^2 - n = 0"
},
{
"math_id": 28,
"text": "x_{k+1} = \\frac{1}{2}\\!\\left(x_k + \\frac{n}{x_k}\\right), \\quad k \\ge 0, \\quad x_0 > 0."
},
{
"math_id": 29,
"text": "\\{x_k\\}"
},
{
"math_id": 30,
"text": "k\\to\\infty"
},
{
"math_id": 31,
"text": "c=1"
},
{
"math_id": 32,
"text": "|x_{k+1} - x_{k}| < c"
},
{
"math_id": 33,
"text": "\\lfloor x_{k+1} \\rfloor=\\lfloor \\sqrt n \\rfloor"
},
{
"math_id": 34,
"text": "n"
},
{
"math_id": 35,
"text": "x_0"
},
{
"math_id": 36,
"text": "\\lfloor \\sqrt n \\rfloor"
},
{
"math_id": 37,
"text": "x_{k+1} = \\left\\lfloor \\frac{1}{2}\\!\\left(x_k + \\left\\lfloor \\frac{n}{x_k} \\right\\rfloor \\right) \\right\\rfloor, \\quad k \\ge 0, \\quad x_0 > 0, \\quad x_0 \\in \\mathbb{Z}."
},
{
"math_id": 38,
"text": "\\left\\lfloor \\frac{1}{2}\\!\\left(x_k + \\left\\lfloor \\frac{n}{x_k} \\right\\rfloor \\right) \\right\\rfloor = \\left\\lfloor \\frac{1}{2}\\!\\left(x_k + \\frac{n}{x_k} \\right) \\right\\rfloor,"
},
{
"math_id": 39,
"text": "x_k \\ge \\sqrt n"
},
{
"math_id": 40,
"text": "k \\ge 1"
},
{
"math_id": 41,
"text": "x_k > x_{k+1}"
},
{
"math_id": 42,
"text": "x_k > \\sqrt n"
},
{
"math_id": 43,
"text": "\\lfloor x_k \\rfloor \\ge \\lfloor\\sqrt n\\rfloor"
},
{
"math_id": 44,
"text": "x_k \\ge \\lfloor x_k \\rfloor > x_{k+1} \\ge \\lfloor x_{k+1}\\rfloor"
},
{
"math_id": 45,
"text": "x_s"
},
{
"math_id": 46,
"text": "\\lfloor \\sqrt n\\rfloor\\le\\lfloor x_s\\rfloor \\le \\sqrt n"
},
{
"math_id": 47,
"text": "\\lfloor x_{s+1} \\rfloor \\ge \\lfloor x_s \\rfloor"
},
{
"math_id": 48,
"text": "\\lfloor x_{k+1} \\rfloor \\ge \\lfloor x_k \\rfloor"
},
{
"math_id": 49,
"text": "n + 1"
},
{
"math_id": 50,
"text": "\\lfloor \\sqrt n \\rfloor + 1"
},
{
"math_id": 51,
"text": "\\begin{align}\n& 1000000 \\rightarrow 500001 \\rightarrow 250002 \\rightarrow 125004 \\rightarrow 62509 \\rightarrow 31270 \\rightarrow 15666 \\rightarrow 7896 \\\\\n& \\rightarrow 4074 \\rightarrow 2282 \\rightarrow 1579 \\rightarrow 1422 \\rightarrow 1414 \\rightarrow 1414\n\\end{align}"
},
{
"math_id": 52,
"text": "x_0 = 2^{\\lfloor (\\log_2 n) /2 \\rfloor+1},"
},
{
"math_id": 53,
"text": "\\sqrt n"
},
{
"math_id": 54,
"text": "\\lfloor \\log_2 n \\rfloor = 20"
},
{
"math_id": 55,
"text": "x_0 = 2^{11} = 2048"
},
{
"math_id": 56,
"text": "2048 \\rightarrow 1512 \\rightarrow 1417 \\rightarrow 1414 \\rightarrow 1414."
},
{
"math_id": 57,
"text": "\\leq n"
}
] | https://en.wikipedia.org/wiki?curid=981655 |
981694 | Selberg trace formula | Mathematical theorem
In mathematics, the Selberg trace formula, introduced by , is an expression for the character of the unitary representation of a Lie group G on the space "L"2(Γ\"G") of square-integrable functions, where Γ is a cofinite discrete group. The character is given by the trace of certain functions on G.
The simplest case is when Γ is cocompact, when the representation breaks up into discrete summands. Here the trace formula is an extension of the Frobenius formula for the character of an induced representation of finite groups. When Γ is the cocompact subgroup Z of the real numbers "G" = R, the Selberg trace formula is essentially the Poisson summation formula.
The case when Γ\"G" is not compact is harder, because there is a continuous spectrum, described using Eisenstein series. Selberg worked out the non-compact case when G is the group SL(2, R); the extension to higher rank groups is the Arthur–Selberg trace formula.
When Γ is the fundamental group of a Riemann surface, the Selberg trace formula describes the spectrum of differential operators such as the Laplacian in terms of geometric data involving the lengths of geodesics on the Riemann surface. In this case the Selberg trace formula is formally similar to the explicit formulas relating the zeros of the Riemann zeta function to prime numbers, with the zeta zeros corresponding to eigenvalues of the Laplacian, and the primes corresponding to geodesics. Motivated by the analogy, Selberg introduced the Selberg zeta function of a Riemann surface, whose analytic properties are encoded by the Selberg trace formula.
Early history.
Cases of particular interest include those for which the space is a compact Riemann surface S. The initial publication in 1956 of Atle Selberg dealt with this case, its Laplacian differential operator and its powers. The traces of powers of a Laplacian can be used to define the Selberg zeta function. The interest of this case was the analogy between the formula obtained, and the explicit formulae of prime number theory. Here the closed geodesics on S play the role of prime numbers.
At the same time, interest in the traces of Hecke operators was linked to the Eichler–Selberg trace formula, of Selberg and Martin Eichler, for a Hecke operator acting on a vector space of cusp forms of a given weight, for a given congruence subgroup of the modular group. Here the trace of the identity operator is the dimension of the vector space, i.e. the dimension of the space of modular forms of a given type: a quantity traditionally calculated by means of the Riemann–Roch theorem.
Applications.
The trace formula has applications to arithmetic geometry and number theory. For instance, using the trace theorem, Eichler and Shimura calculated the Hasse–Weil L-functions associated to modular curves; Goro Shimura's methods by-passed the analysis involved in the trace formula. The development of parabolic cohomology (from Eichler cohomology) provided a purely algebraic setting based on group cohomology, taking account of the cusps characteristic of non-compact Riemann surfaces and modular curves.
The trace formula also has purely differential-geometric applications. For instance, by a result of Buser, the length spectrum of a Riemann surface is an isospectral invariant, essentially by the trace formula.
Selberg trace formula for compact hyperbolic surfaces.
A compact hyperbolic surface X can be written as the space of orbits formula_0 where Γ is a subgroup of PSL(2, R), and H is the upper half plane, and Γ acts on H by linear fractional transformations.
The Selberg trace formula for this case is easier than the general case because the surface is compact so there is no continuous spectrum, and the group Γ has no parabolic or elliptic elements (other than the identity).
Then the spectrum for the Laplace–Beltrami operator on X is discrete and real, since the Laplace operator is self adjoint with compact resolvent; that is
formula_1
where the eigenvalues "μn" correspond to Γ-invariant eigenfunctions u in "C"∞(H) of the Laplacian; in other words
formula_2
Using the variable substitution
formula_3
the eigenvalues are labeled
formula_4
Then the Selberg trace formula is given by
formula_5
The right hand side is a sum over conjugacy classes of the group Γ, with the first term corresponding to the identity element and the remaining terms forming a sum over the other conjugacy classes {"T" } (which are all hyperbolic in this case). The function "h" has to satisfy the following: it is even, "h"("r") = "h"(−"r"); it is analytic on the strip |Im("r")| ≤ 1/2 + "δ" for some "δ" > 0; and it satisfies the decay bound formula_6
The function g is the Fourier transform of h, that is,
formula_7
The general Selberg trace formula for cocompact quotients.
General statement.
Let "G" be a unimodular locally compact group, and formula_8 a discrete cocompact subgroup of "G" and formula_9 a compactly supported continuous function on "G". The trace formula in this setting is the following equality:
formula_10
where formula_11 is the set of conjugacy classes in formula_8, formula_12 is the unitary dual of "G", and: for formula_13, one sets formula_14 where formula_15 are the respective centralizers of formula_16 in formula_17; formula_20 is the multiplicity with which the irreducible unitary representation formula_18 of formula_19 occurs in formula_22; and formula_23 is the operator formula_24.
The left-hand side of the formula is called the "geometric side" and the right-hand side the "spectral side". The terms formula_25 are orbital integrals.
Proof.
Define the following operator on compactly supported functions on formula_21:
formula_26
It extends continuously to formula_27 and for formula_28 we have:
formula_29
after a change of variables. Assuming formula_30 is compact, the operator formula_31 is trace-class and the trace formula is the result of computing its trace in two ways as explained below.
The trace of formula_31 can be expressed as the integral of the kernel formula_32 along the diagonal, that is:
formula_33
Let formula_11 denote a collection of representatives of conjugacy classes in formula_8, and formula_34 and formula_35 the respective centralizers of formula_16.
Then the above integral can, after manipulation, be written
formula_36
This gives the "geometric side" of the trace formula.
The "spectral side" of the trace formula comes from computing the trace of formula_31 using the decomposition of the regular representation of formula_19 into its irreducible components. Thus
formula_37
where formula_38 is the set of irreducible unitary representations of formula_19 (recall that the positive integer formula_20 is the multiplicity of formula_18 in the unitary representation formula_39 on formula_27).
The case of semisimple Lie groups and symmetric spaces.
When formula_19 is a semisimple Lie group with a maximal compact subgroup formula_40 and formula_41 is the associated symmetric space the conjugacy classes in formula_8 can be described in geometric terms using the compact Riemannian manifold (more generally orbifold) formula_42. The orbital integrals and the traces in irreducible summands can then be computed further and in particular one can recover the case of the trace formula for hyperbolic surfaces in this way.
Later work.
The general theory of Eisenstein series was largely motivated by the requirement to separate out the continuous spectrum, which is characteristic of the non-compact case.
The trace formula is often given for algebraic groups over the adeles rather than for Lie groups, because this makes the corresponding discrete subgroup Γ into an algebraic group over a field which is technically easier to work with. The case of SL2(C) is discussed in and . Gel'fand et al. also treat SL2("F") where F is a locally compact topological field with ultrametric norm, so a finite extension of the p-adic numbers Q"p" or of the formal Laurent series F"q"(("T")); they also handle the adelic case in characteristic 0, combining all completions R and Q"p" of the rational numbers Q.
Contemporary successors of the theory are the Arthur–Selberg trace formula applying to the case of general semisimple "G", and the many studies of the trace formula in the Langlands philosophy (dealing with technical issues such as endoscopy). The Selberg trace formula can be derived from the Arthur–Selberg trace formula with some effort.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Gamma \\backslash \\mathbf{H},"
},
{
"math_id": 1,
"text": " 0 = \\mu_0 < \\mu_1 \\leq \\mu_2 \\leq \\cdots "
},
{
"math_id": 2,
"text": "\\begin{cases}\nu(\\gamma z) = u(z), \\qquad \\forall \\gamma \\in \\Gamma \\\\\ny^2 \\left (u_{xx} + u_{yy} \\right) + \\mu_{n} u = 0.\n\\end{cases}"
},
{
"math_id": 3,
"text": " \\mu = s(1-s), \\qquad s=\\tfrac{1}{2}+ir "
},
{
"math_id": 4,
"text": " r_{n}, n \\geq 0. "
},
{
"math_id": 5,
"text": "\\sum_{n=0}^\\infty h(r_n) = \\frac{\\mu(X)}{4 \\pi } \\int_{-\\infty}^\\infty r \\, h(r) \\tanh(\\pi r)\\,dr + \\sum_{ \\{T\\} } \\frac{ \\log N(T_0) }{ N(T)^{\\frac{1}{2}} - N(T)^{-\\frac{1}{2}} } g(\\log N(T)). "
},
{
"math_id": 6,
"text": "\\vert h(r) \\vert \\leq M \\left( 1+\\left| \\operatorname{Re}(r) \\right| \\right )^{-2-\\delta}."
},
{
"math_id": 7,
"text": " h(r) = \\int_{-\\infty}^\\infty g(u) e^{iru} \\, du. "
},
{
"math_id": 8,
"text": "\\Gamma"
},
{
"math_id": 9,
"text": "\\phi"
},
{
"math_id": 10,
"text": "\\sum_{\\gamma\\in\\{\\Gamma\\}} a_\\Gamma^G(\\gamma)\\int_{G^\\gamma\\setminus G}\\phi(x^{-1}\\gamma x)\\,dx = \\sum_{\\pi\\in\\widehat G}a_\\Gamma^G(\\pi)\\operatorname{tr}\\pi(\\phi)"
},
{
"math_id": 11,
"text": "\\{\\Gamma\\}"
},
{
"math_id": 12,
"text": "\\widehat G"
},
{
"math_id": 13,
"text": "\\gamma \\in \\Gamma"
},
{
"math_id": 14,
"text": " a_\\Gamma^G(\\gamma) = \\text{volume}(\\Gamma^\\gamma\\setminus G^\\gamma)."
},
{
"math_id": 15,
"text": "G_\\gamma, \\Gamma_\\gamma"
},
{
"math_id": 16,
"text": "\\gamma"
},
{
"math_id": 17,
"text": "G,\\Gamma"
},
{
"math_id": 18,
"text": "\\pi"
},
{
"math_id": 19,
"text": "G"
},
{
"math_id": 20,
"text": "a_\\Gamma^G(\\pi)"
},
{
"math_id": 21,
"text": "\\Gamma\\backslash G"
},
{
"math_id": 22,
"text": "L^2(\\Gamma\\backslash G)"
},
{
"math_id": 23,
"text": "\\pi(\\phi)"
},
{
"math_id": 24,
"text": "\\int_G \\phi(g)\\pi(g) dg"
},
{
"math_id": 25,
"text": "\\int_{G^\\gamma\\setminus G}\\phi(x^{-1}\\gamma x)\\,dx"
},
{
"math_id": 26,
"text": "R(\\phi) = \\int_G \\phi(x)R(x)\\,dx,"
},
{
"math_id": 27,
"text": "L^2(\\Gamma\\setminus G)"
},
{
"math_id": 28,
"text": "f\\in L^2(\\Gamma\\setminus G)"
},
{
"math_id": 29,
"text": "(R(\\phi)f)(x) = \\int_G\\phi(y)f(xy)\\,dy = \\int_{\\Gamma\\setminus G}\\left(\\sum_{\\gamma\\in\\Gamma}\\phi(x^{-1}\\gamma y)\\right)f(y)\\,dy"
},
{
"math_id": 30,
"text": "\\Gamma\\setminus G"
},
{
"math_id": 31,
"text": "R(\\phi)"
},
{
"math_id": 32,
"text": "K(x,y)=\\sum_{\\gamma\\in\\Gamma}\\phi(x^{-1}\\gamma y)"
},
{
"math_id": 33,
"text": "\\operatorname{tr}R(\\phi) = \\int_{\\Gamma\\setminus G}\\sum_{\\gamma\\in\\Gamma}\\phi(x^{-1}\\gamma x)\\,dx."
},
{
"math_id": 34,
"text": "\\Gamma^\\gamma"
},
{
"math_id": 35,
"text": "G^\\gamma"
},
{
"math_id": 36,
"text": "\\operatorname{tr}R(\\phi) = \\sum_{\\gamma\\in\\{\\Gamma\\}} a_\\Gamma^G(\\gamma)\\int_{G^\\gamma\\setminus G}\\phi(x^{-1}\\gamma x)\\,dx."
},
{
"math_id": 37,
"text": "\\operatorname{tr}R(\\phi) = \\sum_{\\pi\\in\\hat G}a_\\Gamma^G(\\pi)\\operatorname{tr}\\pi(\\phi)"
},
{
"math_id": 38,
"text": "\\hat G"
},
{
"math_id": 39,
"text": "R"
},
{
"math_id": 40,
"text": "K"
},
{
"math_id": 41,
"text": "X=G/K"
},
{
"math_id": 42,
"text": "\\Gamma \\backslash X"
}
] | https://en.wikipedia.org/wiki?curid=981694 |
9817744 | Quantum concentration | The quantum concentration "n"Q is the particle concentration (i.e. the number of particles per unit volume) of a system where the interparticle distance is equal to the thermal de Broglie wavelength.
Quantum effects become appreciable when the particle concentration is greater than or equal to the quantum concentration, which is defined as:
formula_0
where:
The quantum concentration for room temperature protons is about 1/cubic-Angstrom.
As the quantum concentration depends on temperature, high temperatures will put most systems in the classical limit, unless they have a very high density, e.g. a white dwarf.
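A back-of-the-envelope check of the proton figure in C (SI constants; illustrative only):

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979;
    double kB = 1.380649e-23;   // Boltzmann constant, J/K
    double hbar = 1.054572e-34; // reduced Planck constant, J s
    double M = 1.672622e-27;    // proton mass, kg
    double T = 300.0;           // room temperature, K

    double nQ = pow(M * kB * T / (2.0 * PI * hbar * hbar), 1.5); // per m^3
    printf("n_Q = %.2e per m^3 = %.2f per cubic angstrom\n", nQ, nQ * 1e-30);
    return 0;
}

The printed value is about 1.0 per cubic angstrom, matching the statement above.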
For an ideal gas the Sackur–Tetrode equation can be written in terms of the quantum concentration as
formula_3
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n_{\\rm Q}=\\left(\\frac{M k_B T}{2 \\pi \\hbar^2}\\right)^{3/2}"
},
{
"math_id": 1,
"text": "k_B"
},
{
"math_id": 2,
"text": "\\hbar"
},
{
"math_id": 3,
"text": "S(T,V,N)=N k_{\\rm B}\\left[\\frac{5}{2}+\\ln\\left(\\frac{n_{\\rm Q}}{n}\\right)\\right]"
}
] | https://en.wikipedia.org/wiki?curid=9817744 |
981793 | Ehresmann's lemma | On when a smooth map between smooth manifolds is a locally trivial fibration
In mathematics, or specifically, in differential topology, Ehresmann's lemma or Ehresmann's fibration theorem states that if a smooth mapping formula_0, where formula_1 and formula_2 are smooth manifolds, is
1. a surjective submersion, and
2. a proper map (in particular, this condition is always satisfied if formula_1 is compact),
then it is a locally trivial fibration. This is a foundational result in differential topology due to Charles Ehresmann, and has many variants.
{
"math_id": 0,
"text": " f\\colon M \\rightarrow N"
},
{
"math_id": 1,
"text": " M "
},
{
"math_id": 2,
"text": "N"
}
] | https://en.wikipedia.org/wiki?curid=981793 |
981855 | Blaschke product | Concept in complex analysis
In complex analysis, the Blaschke product is a bounded analytic function in the open unit disc constructed to have zeros at a (finite or infinite) sequence of prescribed complex numbers
formula_0
inside the unit disc, with the property that the magnitude of the function is constant along the boundary of the disc.
Blaschke products were introduced by Wilhelm Blaschke (1915). They are related to Hardy spaces.
Definition.
A sequence of points formula_2 inside the unit disk is said to satisfy the Blaschke condition when
formula_3
Given a sequence obeying the Blaschke condition, the Blaschke product is defined as
formula_4
with factors
formula_5
provided formula_6. Here formula_7 is the complex conjugate of formula_8. When formula_9 take formula_10.
The Blaschke product formula_1 defines a function analytic in the open unit disc, and zero exactly at the formula_11 (with multiplicity counted): furthermore it is in the Hardy class formula_12.
The sequence of formula_11 satisfying the convergence criterion above is sometimes called a Blaschke sequence.
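The constant-modulus property on the boundary is easy to verify numerically; a C99 sketch for a single factor, for illustration:

#include <stdio.h>
#include <complex.h>
#include <math.h>

// One Blaschke factor B(a, z) = (|a|/a) (a - z) / (1 - conj(a) z), for a != 0.
double complex blaschke_factor(double complex a, double complex z)
{
    return (cabs(a) / a) * (a - z) / (1.0 - conj(a) * z);
}

int main(void)
{
    const double PI = 3.14159265358979;
    double complex a = 0.5 + 0.3 * I; // a prescribed zero inside the unit disc
    for (int k = 0; k < 4; k++) {
        double complex z = cexp(I * (2.0 * PI * k / 4.0)); // point on the unit circle
        printf("%.12f\n", cabs(blaschke_factor(a, z)));    // prints 1.000000000000
    }
    return 0; // every modulus on |z| = 1 equals 1, up to rounding
}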
Szegő theorem.
A theorem of Gábor Szegő states that if formula_13, the Hardy space with integrable norm, and if formula_14 is not identically zero, then the zeroes of formula_14 (certainly countable in number) satisfy the Blaschke condition.
Finite Blaschke products.
Finite Blaschke products can be characterized (as analytic functions on the unit disc) in the following way: Assume that formula_14 is an analytic function on the open unit disc such
that formula_14 can be extended to a continuous function on the closed unit disc
formula_15
that maps the unit circle to itself. Then formula_14 is equal to a finite Blaschke product
formula_16
where formula_17 lies on the unit circle and formula_18 is the multiplicity of the zero formula_19,
formula_20. In particular, if formula_14 satisfies the condition above and has no zeros inside the unit circle, then formula_14 is constant (this fact is also a consequence of the maximum principle for harmonic functions, applied to the harmonic function formula_21).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a_0,\\ a_1, \\ldots "
},
{
"math_id": 1,
"text": "B(z)"
},
{
"math_id": 2,
"text": "(a_n)"
},
{
"math_id": 3,
"text": "\\sum_n (1-|a_n|) <\\infty."
},
{
"math_id": 4,
"text": "B(z)=\\prod_n B(a_n,z)"
},
{
"math_id": 5,
"text": "B(a,z)=\\frac{|a|}{a}\\;\\frac{a-z}{1 - \\overline{a}z}"
},
{
"math_id": 6,
"text": "a\\neq 0"
},
{
"math_id": 7,
"text": "\\overline{a}"
},
{
"math_id": 8,
"text": "a"
},
{
"math_id": 9,
"text": "a=0"
},
{
"math_id": 10,
"text": "B(0,z)=z"
},
{
"math_id": 11,
"text": "a_n"
},
{
"math_id": 12,
"text": "H^\\infty"
},
{
"math_id": 13,
"text": "f\\in H^1"
},
{
"math_id": 14,
"text": "f"
},
{
"math_id": 15,
"text": "\\overline{\\Delta}= \\{z \\in \\mathbb{C} \\mid |z|\\le 1\\} "
},
{
"math_id": 16,
"text": " B(z)=\\zeta\\prod_{i=1}^n\\left({{z-a_i}\\over {1-\\overline{a_i}z}}\\right)^{m_i} "
},
{
"math_id": 17,
"text": "\\zeta"
},
{
"math_id": 18,
"text": "m_i"
},
{
"math_id": 19,
"text": "a_i"
},
{
"math_id": 20,
"text": "|a_i|<1"
},
{
"math_id": 21,
"text": "\\log(|f(z)|)"
}
] | https://en.wikipedia.org/wiki?curid=981855 |
981915 | Singular point of an algebraic variety | In the mathematical field of algebraic geometry, a singular point of an algebraic variety "V" is a point "P" that is 'special' (so, singular), in the geometric sense that at this point the tangent space at the variety may not be regularly defined. In case of varieties defined over the reals, this notion generalizes the notion of local non-flatness. A point of an algebraic variety that is not singular is said to be regular. An algebraic variety that has no singular point is said to be non-singular or smooth.
Definition.
A plane curve defined by an implicit equation
formula_0,
where "F" is a smooth function is said to be "singular" at a point if the Taylor series of "F" has order at least 2 at this point.
The reason for this is that, in differential calculus, the tangent at the point ("x"0, "y"0) of such a curve is defined by the equation
formula_1
whose left-hand side is the term of degree one of the Taylor expansion. Thus, if this term is zero, the tangent may not be defined in the standard way, either because it does not exist or a special definition must be provided.
In general for a hypersurface
formula_2
the singular points are those at which all the partial derivatives simultaneously vanish. A general algebraic variety "V" being defined as the common zeros of several polynomials, the condition on a point "P" of "V" to be a singular point is that the Jacobian matrix of the first-order partial derivatives of the polynomials has a rank at "P" that is lower than the rank at other points of the variety.
Points of "V" that are not singular are called non-singular or regular. It is always true that almost all points are non-singular, in the sense that the non-singular points form a set that is both open and dense in the variety (for the Zariski topology, as well as for the usual topology, in the case of varieties defined over the complex numbers).
In case of a real variety (that is the set of the points with real coordinates of a variety defined by polynomials with real coefficients), the variety is a manifold near every regular point. But it is important to note that a real variety may be a manifold and have singular points. For example, the equation "y"3 + 2"x"2"y" − "x"4 = 0 defines a real analytic manifold but has a singular point at the origin. This may be explained by saying that the curve has two complex conjugate branches that cut the real branch at the origin.
Singular points of smooth mappings.
As the notion of singular points is a purely local property, the above definition can be extended to cover the wider class of smooth mappings (functions from "M" to R"n" where all derivatives exist). Analysis of these singular points can be reduced to the algebraic variety case by considering the jets of the mapping. The "k"th jet is the Taylor series of the mapping truncated at degree "k", with the constant term deleted.
Nodes.
In classical algebraic geometry, certain special singular points were also called nodes. A node is a singular point where the Hessian matrix is non-singular; this implies that the singular point has multiplicity two and the tangent cone is not singular outside its vertex. For example, the curve defined by "y"2 − "x"2 − "x"3 = 0 has a node at the origin: both partial derivatives vanish there, while the Hessian of the defining polynomial at the origin is non-singular.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F(x,y)=0"
},
{
"math_id": 1,
"text": "(x-x_0)F'_x(x_0,y_0) + (y-y_0)F'_y(x_0,y_0)=0,"
},
{
"math_id": 2,
"text": "F(x,y,z,\\ldots) = 0"
}
] | https://en.wikipedia.org/wiki?curid=981915 |
982000 | Star refinement | In mathematics, specifically in the study of topology and open covers of a topological space "X", a star refinement is a particular kind of refinement of an open cover of "X". A related concept is the notion of barycentric refinement.
Star refinements are used in the definition of fully normal space and in one definition of uniform space. It is also useful for stating a characterization of paracompactness.
Definitions.
The general definition makes sense for arbitrary coverings and does not require a topology. Let formula_0 be a set and let formula_1 be a covering of formula_2 that is, formula_3 Given a subset formula_4 of formula_2 the star of formula_4 with respect to formula_1 is the union of all the sets formula_5 that intersect formula_6 that is,
formula_7
Given a point formula_8 we write formula_9 instead of formula_10
A covering formula_1 of formula_0 is a refinement of a covering formula_11 of formula_0 if every formula_5 is contained in some formula_12 The following are two special kinds of refinement. The covering formula_1 is called a barycentric refinement of formula_11 if for every formula_13 the star formula_9 is contained in some formula_12 The covering formula_1 is called a star refinement of formula_11 if for every formula_5 the star formula_14 is contained in some formula_12
Properties and Examples.
Every star refinement of a cover is a barycentric refinement of that cover. The converse is not true, but a barycentric refinement of a barycentric refinement is a star refinement.
Given a metric space formula_2 let formula_15 be the collection of all open balls formula_16 of a fixed radius formula_17 The collection formula_18 is a barycentric refinement of formula_19 and the collection formula_20 is a star refinement of formula_21 Both facts follow from the triangle inequality: the star of a point with respect to the balls of radius ε/2 is contained in the ball of radius ε about that point, and the star of a ball of radius ε/3 is contained in the ball of radius ε about its center.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "\\mathcal U"
},
{
"math_id": 2,
"text": "X,"
},
{
"math_id": 3,
"text": "X = \\bigcup \\mathcal U."
},
{
"math_id": 4,
"text": "S"
},
{
"math_id": 5,
"text": "U \\in \\mathcal U"
},
{
"math_id": 6,
"text": "S,"
},
{
"math_id": 7,
"text": "\\operatorname{st}(S, \\mathcal U) = \\bigcup\\big\\{U \\in \\mathcal U: S\\cap U \\neq \\varnothing\\big\\}."
},
{
"math_id": 8,
"text": "x \\in X,"
},
{
"math_id": 9,
"text": "\\operatorname{st}(x,\\mathcal U)"
},
{
"math_id": 10,
"text": "\\operatorname{st}(\\{x\\}, \\mathcal U)."
},
{
"math_id": 11,
"text": "\\mathcal V"
},
{
"math_id": 12,
"text": "V \\in \\mathcal V."
},
{
"math_id": 13,
"text": "x \\in X"
},
{
"math_id": 14,
"text": "\\operatorname{st}(U, \\mathcal U)"
},
{
"math_id": 15,
"text": "\\mathcal V=\\{B_\\epsilon(x): x\\in X\\}"
},
{
"math_id": 16,
"text": "B_\\epsilon(x)"
},
{
"math_id": 17,
"text": "\\epsilon>0."
},
{
"math_id": 18,
"text": "\\mathcal U=\\{B_{\\epsilon/2}(x): x\\in X\\}"
},
{
"math_id": 19,
"text": "\\mathcal V,"
},
{
"math_id": 20,
"text": "\\mathcal W=\\{B_{\\epsilon/3}(x): x\\in X\\}"
},
{
"math_id": 21,
"text": "\\mathcal V."
}
] | https://en.wikipedia.org/wiki?curid=982000 |