id | title | text | formulas | url
---|---|---|---|---|
9444220 | Multiple comparisons problem | Statistical interpretation with many tests
In statistics, the multiple comparisons, multiplicity or multiple testing problem occurs when one considers a set of statistical inferences simultaneously or estimates a subset of parameters selected based on the observed values.
The larger the number of inferences made, the more likely erroneous inferences become. Several statistical techniques have been developed to address this problem, for example, by requiring a stricter significance threshold for individual comparisons, so as to compensate for the number of inferences being made. Methods that control the family-wise error rate bound the probability of obtaining one or more false positives as a result of the multiple comparisons problem.
History.
The problem of multiple comparisons received increased attention in the 1950s with the work of statisticians such as Tukey and Scheffé. Over the ensuing decades, many procedures were developed to address the problem. In 1996, the first international conference on multiple comparison procedures took place in Tel Aviv. This is an active research area with work being done by, for example, Emmanuel Candès and Vladimir Vovk.
Definition.
Multiple comparisons arise when a statistical analysis involves multiple simultaneous statistical tests, each of which has a potential to produce a "discovery". A stated confidence level generally applies only to each test considered individually, but often it is desirable to have a confidence level for the whole family of simultaneous tests. Failure to compensate for multiple comparisons can have important real-world consequences, as illustrated by the following examples:
In both examples, as the number of comparisons increases, it becomes more likely that the groups being compared will appear to differ in terms of at least one attribute. Our confidence that a result will generalize to independent data should generally be weaker if it is observed as part of an analysis that involves multiple comparisons, rather than an analysis that involves only a single comparison.
For example, if one test is performed at the 5% level and the corresponding null hypothesis is true, there is only a 5% risk of incorrectly rejecting the null hypothesis. However, if 100 tests are each conducted at the 5% level and all corresponding null hypotheses are true, the expected number of incorrect rejections (also known as false positives or Type I errors) is 5. If the tests are statistically independent from each other (i.e. are performed on independent samples), the probability of at least one incorrect rejection is approximately 99.4%.
The multiple comparisons problem also applies to confidence intervals. A single confidence interval with a 95% coverage probability level will contain the true value of the parameter in 95% of samples. However, if one considers 100 confidence intervals simultaneously, each with 95% coverage probability, the expected number of non-covering intervals is 5. If the intervals are statistically independent from each other, the probability that at least one interval does not contain the population parameter is 99.4%.
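A minimal Python sketch (assuming independent tests, as in the two examples above) of the arithmetic behind these figures:

```python
# Probability of at least one false positive among m independent tests,
# each performed at level alpha, when all null hypotheses are true.
def prob_at_least_one_false_positive(alpha: float, m: int) -> float:
    return 1.0 - (1.0 - alpha) ** m

alpha, m = 0.05, 100
print(f"Expected false positives: {alpha * m:.0f}")                          # 5
print(f"P(at least one): {prob_at_least_one_false_positive(alpha, m):.3f}")  # ~0.994

# The same arithmetic applies to 100 independent 95% confidence intervals:
# the chance that at least one interval misses its parameter is ~99.4%.
```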
Techniques have been developed to prevent the inflation of false positive rates and non-coverage rates that occur with multiple statistical tests.
Classification of multiple hypothesis tests.
The following table defines the possible outcomes when testing multiple null hypotheses.
Suppose we have a number "m" of null hypotheses, denoted by: "H"1, "H"2, ..., "H""m".
Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant.
Summing each type of outcome over all "Hi" yields the following random variables: V, the number of true null hypotheses that are rejected (false positives, or Type I errors); U, the number of true null hypotheses that are not rejected (true negatives); S, the number of false null hypotheses that are rejected (true positives); T, the number of false null hypotheses that are not rejected (false negatives, or Type II errors); and R = V + S, the total number of rejected null hypotheses ("discoveries", whether true or false).
In m hypothesis tests of which <math>m_0</math> are true null hypotheses, R is an observable random variable, and S, T, U, and V are unobservable random variables.
Controlling procedures.
Probability that at least one null hypothesis is wrongly rejected, for formula_0, as a function of the number of independent tests formula_1.
Multiple testing correction.
Multiple testing correction refers to making statistical tests more stringent in order to counteract the problem of multiple testing. The best known such adjustment is the Bonferroni correction, but other methods have been developed. Such methods are typically designed to control the family-wise error rate or the false discovery rate.
If "m" independent comparisons are performed, the "family-wise error rate" (FWER), is given by
formula_2
Hence, unless the tests are perfectly positively dependent (i.e., identical), formula_3 increases as the number of comparisons increases.
If we do not assume that the comparisons are independent, then we can still say:
formula_4
which follows from Boole's inequality. Example: formula_5
There are different ways to assure that the family-wise error rate is at most formula_6. The most conservative method, which is free of dependence and distributional assumptions, is the Bonferroni correction formula_7. A marginally less conservative correction can be obtained by solving the equation for the family-wise error rate of formula_1 independent comparisons for formula_8. This yields formula_9, which is known as the Šidák correction. Another procedure is the Holm–Bonferroni method, which uniformly delivers more power than the simple Bonferroni correction, by testing only the lowest p-value (formula_10) against the strictest criterion, and the higher p-values (formula_11) against progressively less strict criteria.
formula_12.
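A short Python sketch of the three per-comparison criteria just described (Bonferroni, Šidák, and the step-down Holm–Bonferroni procedure); the p-values are hypothetical, chosen only for illustration:

```python
def bonferroni_threshold(alpha: float, m: int) -> float:
    return alpha / m

def sidak_threshold(alpha: float, m: int) -> float:
    return 1.0 - (1.0 - alpha) ** (1.0 / m)

def holm_bonferroni(p_values, alpha=0.05):
    """Indices of hypotheses rejected by the Holm–Bonferroni step-down procedure."""
    m = len(p_values)
    order = sorted(range(m), key=lambda k: p_values[k])
    rejected = []
    for i, k in enumerate(order, start=1):
        if p_values[k] <= alpha / (m - i + 1):   # progressively less strict criteria
            rejected.append(k)
        else:
            break                                # stop at the first failure
    return rejected

alpha, m = 0.05, 10
print(bonferroni_threshold(alpha, m))            # 0.005
print(sidak_threshold(alpha, m))                 # ~0.00512, marginally less conservative

p_values = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
print(holm_bonferroni(p_values, alpha))          # rejects only the smallest p-value here
```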
For continuous problems, one can employ Bayesian logic to compute formula_1 from the prior-to-posterior volume ratio. Continuous generalizations of the Bonferroni and Šidák correction are presented in the literature.
Large-scale multiple testing.
Traditional methods for multiple comparisons adjustments focus on correcting for modest numbers of comparisons, often in an analysis of variance. A different set of techniques have been developed for "large-scale multiple testing", in which thousands or even greater numbers of tests are performed. For example, in genomics, when using technologies such as microarrays, expression levels of tens of thousands of genes can be measured, and genotypes for millions of genetic markers can be measured. Particularly in the field of genetic association studies, there has been a serious problem with non-replication — a result being strongly statistically significant in one study but failing to be replicated in a follow-up study. Such non-replication can have many causes, but it is widely considered that failure to fully account for the consequences of making multiple comparisons is one of the causes. It has been argued that advances in measurement and information technology have made it far easier to generate large datasets for exploratory analysis, often leading to the testing of large numbers of hypotheses with no prior basis for expecting many of the hypotheses to be true. In this situation, very high false positive rates are expected unless multiple comparisons adjustments are made.
For large-scale testing problems where the goal is to provide definitive results, the family-wise error rate remains the most accepted parameter for ascribing significance levels to statistical tests. Alternatively, if a study is viewed as exploratory, or if significant results can be easily re-tested in an independent study, control of the false discovery rate (FDR) is often preferred. The FDR, loosely defined as the expected proportion of false positives among all significant tests, allows researchers to identify a set of "candidate positives" that can be more rigorously evaluated in a follow-up study.
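The text above mentions FDR control but does not spell out a procedure; the best-known one, the Benjamini–Hochberg step-up procedure, is sketched here in Python purely as an illustration (the p-values are hypothetical):

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini–Hochberg step-up procedure: indices of 'candidate positives'
    controlling the false discovery rate at level q (assuming independence)."""
    m = len(p_values)
    order = sorted(range(m), key=lambda k: p_values[k])
    last_passing_rank = 0
    for rank, k in enumerate(order, start=1):
        if p_values[k] <= rank * q / m:
            last_passing_rank = rank             # keep the largest rank passing the criterion
    return sorted(order[:last_passing_rank])

p_values = [0.001, 0.009, 0.017, 0.022, 0.051, 0.101, 0.21, 0.30, 0.41, 0.60]
print(benjamini_hochberg(p_values, q=0.05))      # indices of the candidate positives
```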
The practice of trying many unadjusted comparisons in the hope of finding a significant one is a known problem; whether applied unintentionally or deliberately, it is sometimes called "p-hacking".
Assessing whether any alternative hypotheses are true.
A basic question faced at the outset of analyzing a large set of testing results is whether there is evidence that any of the alternative hypotheses are true. One simple meta-test that can be applied when it is assumed that the tests are independent of each other is to use the Poisson distribution as a model for the number of significant results at a given level α that would be found when all null hypotheses are true. If the observed number of positives is substantially greater than what should be expected, this suggests that there are likely to be some true positives among the significant results.
For example, if 1000 independent tests are performed, each at level α = 0.05, we expect 0.05 × 1000 = 50 significant tests to occur when all null hypotheses are true. Based on the Poisson distribution with mean 50, the probability of observing more than 61 significant tests is less than 0.05, so if more than 61 significant results are observed, it is very likely that some of them correspond to situations where the alternative hypothesis holds. A drawback of this approach is that it overstates the evidence that some of the alternative hypotheses are true when the test statistics are positively correlated, which commonly occurs in practice. On the other hand, the approach remains valid even in the presence of correlation among the test statistics, as long as the Poisson distribution can be shown to provide a good approximation for the number of significant results. This scenario arises, for instance, when mining significant frequent itemsets from transactional datasets. Furthermore, a careful two-stage analysis can bound the FDR at a pre-specified level.
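A small standard-library sketch of this Poisson meta-test; it computes the exact tail probability by summing the Poisson probability mass function, so the cutoff for a given α and number of tests can be checked numerically:

```python
import math

def poisson_tail(k: int, mu: float) -> float:
    """P(X > k) for X ~ Poisson(mu), by summing the pmf up to k."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))

alpha, m = 0.05, 1000
mu = alpha * m                     # 50 expected significant tests under the global null

print(poisson_tail(61, mu))        # tail probability beyond the cutoff quoted above
cutoff = next(c for c in range(int(mu), 2 * int(mu)) if poisson_tail(c, mu) < alpha)
print(cutoff)                      # smallest cutoff whose tail probability falls below alpha
```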
Another common approach that can be used in situations where the test statistics can be standardized to Z-scores is to make a normal quantile plot of the test statistics. If the observed quantiles are markedly more dispersed than the normal quantiles, this suggests that some of the significant results may be true positives.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha_\\text{per comparison}=0.05"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": " \\bar{\\alpha} = 1-\\left( 1-\\alpha_{\\{\\text{per comparison}\\}} \\right)^m."
},
{
"math_id": 3,
"text": "\\bar{\\alpha}"
},
{
"math_id": 4,
"text": " \\bar{\\alpha} \\le m \\cdot \\alpha_{\\{\\text{per comparison}\\}},"
},
{
"math_id": 5,
"text": " 0.2649=1-(1-.05)^6 \\le .05 \\times 6 = 0.3"
},
{
"math_id": 6,
"text": "\\alpha"
},
{
"math_id": 7,
"text": " \\alpha_\\mathrm{\\{per\\ comparison\\}}={\\alpha}/m"
},
{
"math_id": 8,
"text": "\\alpha_\\mathrm{\\{per\\ comparison\\}}"
},
{
"math_id": 9,
"text": "\\alpha_{\\{\\text{per comparison}\\}} = 1-{(1-{\\alpha})}^{1/m}"
},
{
"math_id": 10,
"text": "i=1"
},
{
"math_id": 11,
"text": "i>1"
},
{
"math_id": 12,
"text": " \\alpha_\\mathrm{\\{per\\ comparison\\}}={\\alpha}/(m-i+1)"
}
] | https://en.wikipedia.org/wiki?curid=9444220 |
944442 | Brans–Dicke theory | Proposed theory of gravitation
In physics, the Brans–Dicke theory of gravitation (sometimes called the Jordan–Brans–Dicke theory) is a competitor to Einstein's general theory of relativity. It is an example of a scalar–tensor theory, a gravitational theory in which the gravitational interaction is mediated by a scalar field as well as the tensor field of general relativity. The gravitational constant formula_0 is not presumed to be constant but instead formula_1 is replaced by a scalar field formula_2 which can vary from place to place and with time.
The theory was developed in 1961 by Robert H. Dicke and Carl H. Brans building upon, among others, the earlier 1959 work of Pascual Jordan. At present, both Brans–Dicke theory and general relativity are generally held to be in agreement with observation. Brans–Dicke theory represents a minority viewpoint in physics.
Comparison with general relativity.
Both Brans–Dicke theory and general relativity are examples of a class of relativistic classical field theories of gravitation, called "metric theories". In these theories, spacetime is equipped with a metric tensor, formula_3, and the gravitational field is represented (in whole or in part) by the Riemann curvature tensor formula_4, which is determined by the metric tensor.
All metric theories satisfy the Einstein equivalence principle, which in modern geometric language states that in a very small region (too small to exhibit measurable curvature effects), all the laws of physics known in special relativity are valid in "local Lorentz frames". This implies in turn that metric theories all exhibit the gravitational redshift effect.
As in general relativity, the source of the gravitational field is considered to be the stress–energy tensor or "matter tensor". However, the way in which the immediate presence of mass-energy in some region affects the gravitational field in that region differs from general relativity. So does the way in which spacetime curvature affects the motion of matter. In the Brans–Dicke theory, in addition to the metric, which is a "rank two tensor field", there is a "scalar field", formula_2, which has the physical effect of changing the "effective gravitational constant" from place to place. (This feature was actually a key of Dicke and Brans; see the paper by Brans cited below, which sketches the origins of the theory.)
The field equations of Brans–Dicke theory contain a parameter, formula_5, called the "Brans–Dicke coupling constant". This is a true dimensionless constant which must be chosen once and for all. However, it can be chosen to fit observations. Such parameters are often called "tunable parameters". In addition, the present ambient value of the effective gravitational constant must be chosen as a boundary condition. General relativity contains no dimensionless parameters whatsoever, and therefore is easier to falsify (show whether false) than Brans–Dicke theory. Theories with tunable parameters are sometimes deprecated on the principle that, of two theories which both agree with observation, the more parsimonious is preferable. On the other hand, it seems as though they are a necessary feature of some theories, such as the weak mixing angle of the Standard Model.
Brans–Dicke theory is "less stringent" than general relativity in another sense: it admits more solutions. In particular, exact vacuum solutions to the Einstein field equation of general relativity, augmented by the trivial scalar field formula_6, become exact vacuum solutions in Brans–Dicke theory, but some spacetimes which are "not" vacuum solutions to the Einstein field equation become, with the appropriate choice of scalar field, vacuum solutions of Brans–Dicke theory. Similarly, an important class of spacetimes, the pp-wave metrics, are also exact null dust solutions of both general relativity and Brans–Dicke theory, but here too, Brans–Dicke theory allows additional "wave solutions" having geometries which are incompatible with general relativity.
Like general relativity, Brans–Dicke theory predicts light deflection and the precession of perihelia of planets orbiting the Sun. However, the precise formulas which govern these effects, according to Brans–Dicke theory, depend upon the value of the coupling constant formula_5. This means that it is possible to set an observational lower bound on the possible value of formula_5 from observations of the solar system and other gravitational systems. The lower bound on formula_7 consistent with experiment has risen with time. In 1973 formula_8 was consistent with known data. By 1981 formula_9 was consistent with known data. In 2003, evidence derived from the Cassini–Huygens experiment showed that the value of formula_5 must exceed 40,000.
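A brief illustrative calculation of how such a bound arises. It assumes the standard parametrized post-Newtonian result for Brans–Dicke theory, γ = (1 + ω)/(2 + ω), and a Cassini-type constraint |γ − 1| ≲ 2.3 × 10^−5; neither expression appears in this article, so treat this as a sketch rather than the published derivation:

```python
# Sketch: lower bound on the Brans-Dicke coupling constant omega implied by a
# bound on the PPN parameter gamma, assuming gamma = (1 + w)/(2 + w).
def omega_from_gamma(gamma: float) -> float:
    """Invert gamma = (1 + w)/(2 + w) for w; valid for gamma < 1."""
    return (2.0 * gamma - 1.0) / (1.0 - gamma)

gamma_deviation = 2.3e-5        # assumed |gamma - 1| bound (Cassini-type precision)
print(f"omega > {omega_from_gamma(1.0 - gamma_deviation):,.0f}")   # on the order of 40,000
```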
It is also often taught that general relativity is obtained from the Brans–Dicke theory in the limit formula_10. But Faraoni claims that this breaks down when the trace of the stress–energy tensor vanishes, i.e. formula_11, an example of which is the Campanelli–Lousto wormhole solution. Some have argued that only general relativity satisfies the strong equivalence principle.
The field equations.
The field equations of the Brans–Dicke theory are
formula_12
formula_13
where
formula_5 is the dimensionless Dicke coupling constant;
formula_3 is the metric tensor;
formula_14 is the Einstein tensor, a kind of average curvature;
formula_15 is the Ricci tensor, a kind of trace of the curvature tensor;
formula_16 is the Ricci scalar, the trace of the Ricci tensor;
formula_17 is the stress–energy tensor;
formula_18 is the trace of the stress–energy tensor;
formula_2 is the scalar field;
formula_19 is the Laplace–Beltrami operator or covariant wave operator, formula_20.
The first equation describes how the stress–energy tensor and scalar field formula_2 together affect spacetime curvature. The left-hand side, the Einstein tensor, can be thought of as a kind of average curvature. It is a matter of pure mathematics that, in any metric theory, the Riemann tensor can always be written as the sum of the Weyl curvature (or "conformal curvature tensor") and a piece constructed from the Einstein tensor.
The second equation says that the trace of the stress–energy tensor acts as the source for the scalar field formula_2. Since electromagnetic fields contribute only a traceless term to the stress–energy tensor, this implies that in a region of spacetime containing only an electromagnetic field (plus the gravitational field), the right-hand side vanishes, and formula_2 obeys the (curved spacetime) wave equation. Therefore, changes in formula_2 propagate through "electrovacuum" regions; in this sense, we say that formula_2 is a "long-range field".
For comparison, the field equation of general relativity is simply
formula_21
This means that in general relativity, the Einstein curvature at some event is entirely determined by the stress–energy tensor at that event; the other piece, the Weyl curvature, is the part of the gravitational field which can propagate as a gravitational wave across a vacuum region. But in the Brans–Dicke theory, the Einstein tensor is determined partly by the immediate presence of mass–energy and momentum, and partly by the long-range scalar field formula_2.
The "vacuum field equations" of both theories are obtained when the stress–energy tensor vanishes. This models situations in which no non-gravitational fields are present.
The action principle.
The following Lagrangian contains the complete description of the Brans–Dicke theory:
formula_22
where formula_23 is the determinant of the metric, formula_24 is the four-dimensional volume form, and formula_25 is the "matter term", or "matter Lagrangian density".
The matter term includes the contribution of ordinary matter (e.g. gaseous matter) and also electromagnetic fields. In a vacuum region, the matter term vanishes identically; the remaining term is the "gravitational term". To obtain the vacuum field equations, we must vary the gravitational term in the Lagrangian with respect to the metric formula_3; this gives the first field equation above. When we vary with respect to the scalar field formula_2, we obtain the second field equation.
Note that, unlike for the General Relativity field equations, the formula_26 term does not vanish, as the result is not a total derivative. It can be shown that
formula_27
To prove this result, use
formula_28
By evaluating the formula_29s in Riemann normal coordinates, 6 individual terms vanish. 6 further terms combine when manipulated using Stokes' theorem to provide the desired formula_30.
For comparison, the Lagrangian defining general relativity is
formula_31
Varying the gravitational term with respect to formula_3 gives the vacuum Einstein field equation.
In both theories, the full field equations can be obtained by variations of the full Lagrangian.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "1 / G"
},
{
"math_id": 2,
"text": "\\phi"
},
{
"math_id": 3,
"text": "g_{ab}"
},
{
"math_id": 4,
"text": "R_{abcd}"
},
{
"math_id": 5,
"text": "\\omega"
},
{
"math_id": 6,
"text": "\\phi=1"
},
{
"math_id": 7,
"text": "\\omega "
},
{
"math_id": 8,
"text": "\\omega > 5"
},
{
"math_id": 9,
"text": "\\omega > 30"
},
{
"math_id": 10,
"text": "\\omega \\rightarrow \\infty"
},
{
"math_id": 11,
"text": "T^{\\mu}_{\\mu} = 0"
},
{
"math_id": 12,
"text": "G_{ab} = \\frac{8\\pi}{\\phi} T_{ab} + \\frac{\\omega}{\\phi^2}\n\\left(\\partial_a\\phi \\partial_b\\phi - \\frac{1}{2} g_{ab} \\partial_c\\phi\\partial^c\\phi\\right)\n+ \\frac{1}{\\phi}(\\nabla_a\\nabla_b\\phi - g_{ab} \\Box\\phi),"
},
{
"math_id": 13,
"text": "\\Box\\phi = \\frac{8\\pi}{3+2\\omega}T,"
},
{
"math_id": 14,
"text": "G_{ab} = R_{ab} - \\tfrac{1}{2} R g_{ab}"
},
{
"math_id": 15,
"text": "R_{ab} = R^m{}_{a m b}"
},
{
"math_id": 16,
"text": "R = R^m{}_{m}"
},
{
"math_id": 17,
"text": "T_{ab}"
},
{
"math_id": 18,
"text": "T = T_a^a"
},
{
"math_id": 19,
"text": "\\Box"
},
{
"math_id": 20,
"text": "\\Box \\phi = \\phi^{;a}{}_{;a}"
},
{
"math_id": 21,
"text": "G_{ab} = \\frac{8 \\pi G}{c^4}T_{ab}."
},
{
"math_id": 22,
"text": "S = \\frac{1}{16 \\pi}\\int d^4x\\sqrt{-g}\n\\left(\\phi R - \\frac{\\omega}{\\phi}\\partial_a\\phi\\partial^a\\phi\\right) + \\int d^4 x \\sqrt{-g} \\,\\mathcal{L}_\\mathrm{M},"
},
{
"math_id": 23,
"text": "g"
},
{
"math_id": 24,
"text": "\\sqrt{-g} \\, d^4 x"
},
{
"math_id": 25,
"text": "\\mathcal{L}_\\mathrm{M}"
},
{
"math_id": 26,
"text": "\\delta R_{ab}/\\delta g_{cd}"
},
{
"math_id": 27,
"text": "\\frac{\\delta(\\phi R)}{\\delta g^{ab}} = \\phi R_{ab} + g_{ab}g^{cd}\\phi_{;c;d} - \\phi_{;a;b}."
},
{
"math_id": 28,
"text": "\\delta (\\phi R) = R \\delta \\phi + \\phi R_{mn} \\delta g^{mn} + \\phi \\nabla_s (g^{mn} \\delta\\Gamma^s_{nm} - g^{ms}\\delta\\Gamma^r_{rm} )."
},
{
"math_id": 29,
"text": "\\delta\\Gamma"
},
{
"math_id": 30,
"text": "(g_{ab}g^{cd}\\phi_{;c;d} - \\phi_{;a;b})\\delta g^{ab}"
},
{
"math_id": 31,
"text": "S = \\int d^4x \\sqrt{-g} \\, \\left(\\frac{R}{16\\pi G} + \\mathcal{L}_\\mathrm{M}\\right)."
}
] | https://en.wikipedia.org/wiki?curid=944442 |
9445837 | Irrational rotation | Rotation of a circle by an angle of π times an irrational number
In the mathematical theory of dynamical systems, an irrational rotation is a map
formula_0
where "θ" is an irrational number. Under the identification of a circle with R/Z, or with the interval [0, 1] with the boundary points glued together, this map becomes a rotation of a circle by a proportion "θ" of a full revolution (i.e., an angle of 2"πθ" radians). Since "θ" is irrational, the rotation has infinite order in the circle group and the map "T""θ" has no periodic orbits.
Alternatively, we can use multiplicative notation for an irrational rotation by introducing the map
formula_1
The relationship between the additive and multiplicative notations is the group isomorphism
formula_2.
It can be shown that "φ" is an isometry.
There is a strong distinction in circle rotations that depends on whether "θ" is rational or irrational. Rational rotations are less interesting examples of dynamical systems because if formula_3 and formula_4, then formula_5 when formula_6. It can also be shown that
formula_7 when formula_8.
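A brief numerical sketch of these facts in Python: iterating the rotation for an irrational θ never returns exactly to the starting point, and (by ergodicity) the orbit spreads evenly over the circle, whereas a rational rotation is periodic. Floating-point arithmetic only approximates an irrational θ, so this is an illustration rather than a proof:

```python
import math

def rotate(x: float, theta: float) -> float:
    """One step of T_theta on [0, 1): addition of theta modulo 1."""
    return (x + theta) % 1.0

theta = math.sqrt(2) - 1                 # an irrational rotation number
x, orbit = 0.0, []
for _ in range(10_000):
    x = rotate(x, theta)
    orbit.append(x)

print(any(y == 0.0 for y in orbit))      # False: no return to the starting point

# Equidistribution: the fraction of orbit points in [a, b) approaches b - a.
a, b = 0.25, 0.5
print(sum(a <= y < b for y in orbit) / len(orbit))   # close to 0.25

# A rational rotation such as theta = 3/8 is periodic; here the period is 8.
x = 0.0
for _ in range(8):
    x = rotate(x, 3 / 8)
print(x)                                 # 0.0 exactly (3/8 is exactly representable)
```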
Significance.
Irrational rotations form a fundamental example in the theory of dynamical systems. According to the Denjoy theorem, every orientation-preserving "C"2-diffeomorphism of the circle with an irrational rotation number "θ" is topologically conjugate to "T""θ". An irrational rotation is a measure-preserving ergodic transformation, but it is not mixing. The Poincaré map for the dynamical system associated with the Kronecker foliation on a torus with angle "θ" is the irrational rotation by "θ". C*-algebras associated with irrational rotations, known as irrational rotation algebras, have been extensively studied.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T_\\theta : [0,1] \\rightarrow [0,1],\\quad T_\\theta(x) \\triangleq x + \\theta \\mod 1 ,"
},
{
"math_id": 1,
"text": " T_\\theta :S^1 \\to S^1, \\quad \\quad \\quad T_\\theta(x)=xe^{2\\pi i\\theta} "
},
{
"math_id": 2,
"text": " \\varphi:([0,1],+) \\to (S^1, \\cdot) \\quad \\varphi(x)=xe^{2\\pi i\\theta}"
},
{
"math_id": 3,
"text": "\\theta = \\frac{a}{b}"
},
{
"math_id": 4,
"text": "\\gcd(a,b) = 1"
},
{
"math_id": 5,
"text": "T_\\theta^b(x) = x"
},
{
"math_id": 6,
"text": "x \\isin [0,1]"
},
{
"math_id": 7,
"text": "T_\\theta^i(x) \\ne x"
},
{
"math_id": 8,
"text": "1 \\le i < b"
},
{
"math_id": 9,
"text": " \\text{lim} _ {N \\to \\infty} \\frac{1}{N} \\sum_{n=0}^{N-1} \\chi_{[a,b)}(T_\\theta ^n (t))=b-a "
},
{
"math_id": 10,
"text": " F:\\mathbb{R}\\to \\mathbb{R} "
},
{
"math_id": 11,
"text": " \\pi \\circ F=f \\circ \\pi "
},
{
"math_id": 12,
"text": " \\pi (t)=t \\bmod 1 "
}
] | https://en.wikipedia.org/wiki?curid=9445837 |
944638 | Earth's energy budget | Concept for energy flows to and from Earth
Earth's energy budget (or Earth's energy balance) is the balance between the energy that Earth receives from the Sun and the energy the Earth loses back into outer space. Smaller energy sources, such as Earth's internal heat, are taken into consideration, but make a tiny contribution compared to solar energy. The energy budget also takes into account how energy moves through the climate system. The Sun heats the equatorial tropics more than the polar regions. Therefore, the amount of solar irradiance received by a certain region is unevenly distributed. As the energy seeks equilibrium across the planet, it drives interactions in Earth's climate system, i.e., Earth's water, ice, atmosphere, rocky crust, and all living things. The result is Earth's climate.
Earth's energy budget depends on many factors, such as atmospheric aerosols, greenhouse gases, surface albedo, clouds, and land use patterns. When the incoming and outgoing energy fluxes are in balance, Earth is in radiative equilibrium and the climate system will be "relatively" stable. Global warming occurs when earth receives more energy than it gives back to space, and global cooling takes place when the outgoing energy is greater.
Multiple types of measurements and observations show a warming imbalance since at least year 1970. The rate of heating from this human-caused imbalance is without precedent. The main origin of changes in the Earth's energy is from human-induced changes in the composition of the atmosphere. During 2005 to 2019 the Earth's energy imbalance (EEI) averaged about 460 TW, or globally about 0.90 ± 0.15 W per m2.
It takes time for any changes in the energy budget to result in any significant changes in the global surface temperature. This is due to the thermal inertia of the oceans, land and cryosphere. Most climate models make accurate calculations of this inertia, energy flows and storage amounts.
Definition.
Earth's energy budget includes the "major energy flows of relevance for the climate system". These are "the top-of-atmosphere energy budget; the surface energy budget; changes in the global energy inventory and internal flows of energy within the climate system".
Earth's energy flows.
In spite of the enormous transfers of energy into and from the Earth, it maintains a relatively constant temperature because, as a whole, there is little net gain or loss: Earth emits via atmospheric and terrestrial radiation (shifted to longer electromagnetic wavelengths) to space about the same amount of energy as it receives via solar insolation (all forms of electromagnetic radiation).
The main origin of changes in the Earth's energy is from human-induced changes in the composition of the atmosphere, amounting to about 460 TW, or globally about 0.90 ± 0.15 W per m2.
Incoming solar energy (shortwave radiation).
The total amount of energy received per second at the top of Earth's atmosphere (TOA) is measured in watts and is given by the solar constant times the cross-sectional area of the Earth that intercepts the radiation. Because the surface area of a sphere is four times the cross-sectional area of a sphere (i.e. the area of a circle), the globally and yearly averaged TOA flux is one quarter of the solar constant and so is approximately 340 watts per square meter (W/m2). Since the absorption varies with location as well as with diurnal, seasonal and annual variations, the numbers quoted are multi-year averages obtained from multiple satellite measurements.
Of the ~340 W/m2 of solar radiation received by the Earth, an average of ~77 W/m2 is reflected back to space by clouds and the atmosphere and ~23 W/m2 is reflected by the surface albedo, leaving ~240 W/m2 of solar energy input to the Earth's energy budget. This amount is called the absorbed solar radiation (ASR). It implies a value of about 0.3 for the mean net albedo of Earth, also called its Bond albedo (A):
formula_0
Outgoing longwave radiation.
Thermal energy leaves the planet in the form of "outgoing longwave radiation" (OLR). Longwave radiation is electromagnetic thermal radiation emitted by Earth's surface and atmosphere. Longwave radiation is in the infrared band. But, the terms are not synonymous, as infrared radiation can be either "shortwave" or "longwave". Sunlight contains significant amounts of shortwave infrared radiation. A threshold wavelength of 4 microns is sometimes used to distinguish longwave and shortwave radiation.
Generally, absorbed solar energy is converted to different forms of heat energy. Some of the solar energy absorbed by the surface is converted to thermal radiation at wavelengths in the "atmospheric window"; this radiation is able to pass through the atmosphere unimpeded and directly escape to space, contributing to OLR. The remainder of absorbed solar energy is transported upwards through the atmosphere through a variety of heat transfer mechanisms, until the atmosphere emits that energy as thermal energy which is able to escape to space, again contributing to OLR. For example, heat is transported into the atmosphere via evapotranspiration and latent heat fluxes or conduction/convection processes, as well as via radiative heat transport. Ultimately, all outgoing energy is radiated into space in the form of longwave radiation.
The transport of longwave radiation from Earth's surface through its multi-layered atmosphere is governed by radiative transfer equations such as Schwarzschild's equation for radiative transfer (or more complex equations if scattering is present) and obeys Kirchhoff's law of thermal radiation.
A one-layer model produces an approximate description of OLR which yields temperatures at the surface (Ts=288 Kelvin) and at the middle of the troposphere ("T"a=242 K) that are close to observed average values:
formula_1
In this expression "σ" is the Stefan–Boltzmann constant and "ε" represents the emissivity of the atmosphere, which is less than 1 because the atmosphere does not emit within the wavelength range known as the atmospheric window.
Aerosols, clouds, water vapor, and trace greenhouse gases contribute to an effective value of about "ε" = 0.78. The strong (fourth-power) temperature sensitivity maintains a near-balance of the outgoing energy flow to the incoming flow via small changes in the planet's absolute temperatures.
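A quick numerical check in Python of the absorbed-solar and one-layer outgoing-radiation expressions above, using the representative values quoted in this section:

```python
SIGMA = 5.670374419e-8                  # Stefan-Boltzmann constant, W m^-2 K^-4

# Absorbed solar radiation: ASR = (1 - A) * 340 W/m^2, with Bond albedo A ~ 0.3.
A = 0.3
asr = (1 - A) * 340
print(f"ASR ~ {asr:.0f} W/m^2")         # ~238 W/m^2

# One-layer model: OLR ~ eps*sigma*Ta^4 + (1 - eps)*sigma*Ts^4.
eps, T_a, T_s = 0.78, 242.0, 288.0
olr = eps * SIGMA * T_a**4 + (1 - eps) * SIGMA * T_s**4
print(f"OLR ~ {olr:.0f} W/m^2")         # ~237-238 W/m^2, close to ASR (near radiative balance)
```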
As viewed from Earth's surrounding space, greenhouse gases influence the planet's atmospheric emissivity ("ε"). Changes in atmospheric composition can thus shift the overall radiation balance. For example, an increase in heat trapping by a growing concentration of greenhouse gases (i.e. an "enhanced greenhouse effect") forces a decrease in OLR and a warming (restorative) energy imbalance. Ultimately when the amount of greenhouse gases increases or decreases, in-situ surface temperatures rise or fall until the absorbed solar radiation equals the outgoing longwave radiation, or ASR equals OLR.
Earth's internal heat sources and other minor effects.
The geothermal heat flow from the Earth's interior is estimated to be 47 terawatts (TW) and split approximately equally between radiogenic heat and heat left over from the Earth's formation. This corresponds to an average flux of 0.087 W/m2 and represents only 0.027% of Earth's total energy budget at the surface, being dwarfed by the roughly 173,000 TW of incoming solar radiation.
Human production of energy is even lower at an average 18 TW, corresponding to an estimated 160,000 TW-hr, for all of year 2019. However, consumption is growing rapidly and energy production with fossil fuels also produces an increase in atmospheric greenhouse gases, leading to a more than 20 times larger imbalance in the incoming/outgoing flows that originate from solar radiation.
Photosynthesis also has a significant effect: An estimated 140 TW (or around 0.08%) of incident energy gets captured by photosynthesis, giving energy to plants to produce biomass. A similar flow of thermal energy is released over the course of a year when plants are used as food or fuel.
Other minor sources of energy are usually ignored in the calculations, including accretion of interplanetary dust and solar wind, light from stars other than the Sun and the thermal radiation from space. Earlier, Joseph Fourier had claimed that deep space radiation was significant in a paper often cited as the first on the greenhouse effect.
Budget analysis.
In simplest terms, Earth's energy budget is balanced when the incoming flow equals the outgoing flow. Since a portion of incoming energy is directly reflected, the balance can also be stated as absorbed incoming solar (shortwave) radiation equal to outgoing longwave radiation:
formula_2
Internal flow analysis.
To describe some of the internal flows within the budget, let the insolation received at the top of the atmosphere be 100 units (= 340 W/m2), as shown in the accompanying Sankey diagram. Around 35 units in this example, a fraction known as the albedo of Earth, are directly reflected back to space: 27 from the top of clouds, 2 from snow and ice-covered areas, and 6 by other parts of the atmosphere. The 65 remaining units (ASR = 220 W/m2) are absorbed: 14 within the atmosphere and 51 by the Earth's surface.
The 51 units reaching and absorbed by the surface are emitted back to space through various forms of terrestrial energy: 17 directly radiated to space and 34 absorbed by the atmosphere (19 through latent heat of vaporisation, 9 via convection and turbulence, and 6 as absorbed infrared by greenhouse gases). The 48 units absorbed by the atmosphere (34 units from terrestrial energy and 14 from insolation) are then finally radiated back to space. This simplified example neglects some details of mechanisms that recirculate, store, and thus lead to further buildup of heat near the surface.
Ultimately the 65 units (17 from the ground and 48 from the atmosphere) are emitted as OLR. They approximately balance the 65 units (ASR) absorbed from the sun in order to maintain a net-zero gain of energy by Earth.
Heat storage reservoirs.
Land, ice, and oceans are active material constituents of Earth's climate system along with the atmosphere. They have far greater mass and heat capacity, and thus much more thermal inertia. When radiation is directly absorbed or the surface temperature changes, thermal energy will flow as sensible heat either into or out of the bulk mass of these components via conduction/convection heat transfer processes. The transformation of water between its solid/liquid/vapor states also acts as a source or sink of potential energy in the form of latent heat. These processes buffer the surface conditions against some of the rapid radiative changes in the atmosphere. As a result, the daytime versus nighttime difference in surface temperatures is relatively small. Likewise, Earth's climate system as a whole shows a slow response to shifts in the atmospheric radiation balance.
The top few meters of Earth's oceans harbor more thermal energy than its entire atmosphere. Like atmospheric gases, fluidic ocean waters transport vast amounts of such energy over the planet's surface. Sensible heat also moves into and out of great depths under conditions that favor downwelling or upwelling.
Over 90 percent of the extra energy that has accumulated on Earth from ongoing global warming since 1970 has been stored in the ocean. About one-third has propagated to depths below 700 meters. The overall rate of growth has also risen during recent decades, reaching close to 500 TW (1 W/m2) as of 2020. That led to about 14 zettajoules (ZJ) of heat gain for the year, exceeding the 570 exajoules (=160,000 TW-hr) of total primary energy consumed by humans by a factor of at least 20.
Heating/cooling rate analysis.
Generally speaking, changes to Earth's energy flux balance can be thought of as being the result of external forcings (both natural and anthropogenic, radiative and non-radiative), system feedbacks, and internal system variability. Such changes are primarily expressed as observable shifts in temperature (T), clouds (C), water vapor (W), aerosols (A), trace greenhouse gases (G), land/ocean/ice surface reflectance (S), and as minor shifts in insolation (I) among other possible factors. Earth's heating/cooling rate can then be analyzed over selected timeframes (Δt) as the net change in energy (ΔE) associated with these attributes:
formula_3
Here the term ΔET, corresponding to the Planck response, is negative-valued when temperature rises due to its strong direct influence on OLR.
The recent increase in trace greenhouse gases produces an enhanced greenhouse effect, and thus a positive ΔEG forcing term. By contrast, a large volcanic eruption (e.g. Mount Pinatubo 1991, El Chichón 1982) can inject sulfur-containing compounds into the upper atmosphere. High concentrations of stratospheric sulfur aerosols may persist for up to a few years, yielding a negative forcing contribution to ΔEA. Various other types of anthropogenic aerosol emissions make both positive and negative contributions to ΔEA. Solar cycles produce ΔEI smaller in magnitude than those of recent ΔEG trends from human activity.
Climate forcings are complex since they can produce direct and indirect feedbacks that intensify (positive feedback) or weaken (negative feedback) the original forcing. These often follow the temperature response. Changes in water vapor act as a positive feedback with respect to temperature changes, due to evaporation shifts and the Clausius–Clapeyron relation. An increase in water vapor results in positive ΔEW due to further enhancement of the greenhouse effect. A slower positive feedback is the ice-albedo feedback. For example, the loss of Arctic ice due to rising temperatures makes the region less reflective, leading to greater absorption of energy and even faster ice melt rates, thus a positive influence on ΔES. Collectively, feedbacks tend to amplify global warming or cooling.
Clouds are responsible for about half of Earth's albedo and are powerful expressions of internal variability of the climate system. They may also act as feedbacks to forcings, and could be forcings themselves if for example a result of cloud seeding activity. Contributions to ΔEC vary regionally and depending upon cloud type. Measurements from satellites are gathered in concert with simulations from models in an effort to improve understanding and reduce uncertainty.
Earth's energy imbalance (EEI).
The Earth's energy imbalance (EEI) is defined as "the persistent and positive (downward) net top of atmosphere energy flux associated with greenhouse gas forcing of the climate system".
If Earth's incoming energy flux (ASR) is larger or smaller than the outgoing energy flux (OLR), then the planet will gain (warm) or lose (cool) net heat energy in accordance with the law of energy conservation:
formula_4.
Positive EEI thus defines the overall rate of planetary heating and is typically expressed as watts per square meter (W/m2). During 2005 to 2019 the Earth's energy imbalance averaged about 460 TW or globally 0.90 ± 0.15 W per m2.
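The two ways of quoting the imbalance, a global total in terawatts or a flux per square meter of Earth's surface, are related by the planet's surface area, as this short check shows:

```python
import math

EARTH_RADIUS = 6.371e6                        # mean radius in meters
surface_area = 4 * math.pi * EARTH_RADIUS**2  # ~5.1e14 m^2

eei_total_watts = 460e12                      # ~460 TW, the 2005-2019 average quoted above
print(eei_total_watts / surface_area)         # ~0.90 W/m^2
```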
When Earth's energy imbalance (EEI) shifts by a sufficiently large amount, the shift is measurable by orbiting satellite-based instruments. Imbalances that fail to reverse over time will also drive long-term temperature changes in the atmospheric, oceanic, land, and ice components of the climate system. Temperature, sea level, ice mass and related shifts thus also provide measures of EEI.
The biggest changes in EEI arise from changes in the composition of the atmosphere through human activities, thereby interfering with the natural flow of energy through the climate system. The main changes are from increases in carbon dioxide and other greenhouse gases, that produce heating (positive EEI), and pollution. The latter refers to atmospheric aerosols of various kinds, some of which absorb energy while others reflect energy and produce cooling (or lower EEI).
It is not (yet) possible to measure the absolute magnitude of EEI directly at top of atmosphere, although "changes over time" as observed by satellite-based instruments are thought to be accurate. The only practical way to estimate the absolute magnitude of EEI is through an inventory of the changes in energy in the climate system. The biggest of these energy reservoirs is the ocean.
Energy inventory assessments.
The planetary heat content that resides in the climate system can be compiled given the heat capacity, density and temperature distributions of each of its components. Most regions are now reasonably well sampled and monitored, with the most significant exception being the deep ocean.
Estimates of the absolute magnitude of EEI have likewise been calculated using the measured temperature changes during recent multi-decadal time intervals. For the 2006 to 2020 period EEI was about and showed a significant increase above the mean of for the 1971 to 2020 period.
EEI has been positive because temperatures have increased almost everywhere for over 50 years. Global surface temperature (GST) is calculated by averaging temperatures measured at the surface of the sea along with air temperatures measured over land. Reliable data extending to at least 1880 shows that GST has undergone a steady increase of about 0.18 °C per decade since about year 1970.
Ocean waters are especially effective absorbents of solar energy and have a far greater total heat capacity than the atmosphere. Research vessels and stations have sampled sea temperatures at depth and around the globe since before 1960. Additionally, after the year 2000, an expanding network of nearly 4000 Argo robotic floats has measured the temperature anomaly, or equivalently the ocean heat content change (ΔOHC). Since at least 1990, OHC has increased at a steady or accelerating rate. ΔOHC represents the largest portion of EEI since oceans have thus far taken up over 90% of the net excess energy entering the system over time (Δt):
formula_5.
Earth's outer crust and thick ice-covered regions have taken up relatively little of the excess energy. This is because excess heat at their surfaces flows inward only by means of thermal conduction, and thus penetrates only several tens of centimeters on the daily cycle and only several tens of meters on the annual cycle. Much of the heat uptake goes either into melting ice and permafrost or into evaporating more water from soils.
Measurements at top of atmosphere (TOA).
Several satellites measure the energy absorbed and radiated by Earth, and thus by inference the energy imbalance. These are located top of atmosphere (TOA) and provide data covering the globe. The NASA Earth Radiation Budget Experiment (ERBE) project involved three such satellites: the Earth Radiation Budget Satellite (ERBS), launched October 1984; NOAA-9, launched December 1984; and NOAA-10, launched September 1986.
NASA's Clouds and the Earth's Radiant Energy System (CERES) instruments are part of its Earth Observing System (EOS) since March 2000. CERES is designed to measure both solar-reflected (short wavelength) and Earth-emitted (long wavelength) radiation. The CERES data showed increases in EEI from in 2005 to in 2019. Contributing factors included more water vapor, less clouds, increasing greenhouse gases, and declining ice that were partially offset by rising temperatures. Subsequent investigation of the behavior using the GFDL CM4/AM4 climate model concluded there was a less than 1% chance that internal climate variability alone caused the trend.
Other researchers have used data from CERES, AIRS, CloudSat, and other EOS instruments to look for trends of radiative forcing embedded within the EEI data. Their analysis showed a forcing rise of from years 2003 to 2018. About 80% of the increase was associated with the rising concentration of greenhouse gases which reduced the outgoing longwave radiation.
Further satellite measurements including TRMM and CALIPSO data have indicated additional precipitation, which is sustained by increased energy leaving the surface through evaporation (the latent heat flux), offsetting some of the increase in the longwave greenhouse flux to the surface.
It is noteworthy that radiometric calibration uncertainties limit the capability of the current generation of satellite-based instruments, which are otherwise stable and precise. As a result, relative changes in EEI are quantifiable with an accuracy which is not also achievable for any single measurement of the absolute imbalance.
Geodetic and hydrographic surveys.
Observations since 1994 show that ice has retreated from every part of Earth at an accelerating rate. Mean global sea level has likewise risen as a consequence of the ice melt in combination with the overall rise in ocean temperatures.
These shifts have contributed measurable changes to the geometric shape and gravity of the planet.
Changes to the mass distribution of water within the hydrosphere and cryosphere have been deduced using gravimetric observations by the GRACE satellite instruments. These data have been compared against ocean surface topography and further hydrographic observations using computational models that account for thermal expansion, salinity changes, and other factors. Estimates thereby obtained for ΔOHC and EEI have agreed with the other (mostly) independent assessments within uncertainties.
Importance as a climate change metric.
Climate scientists Kevin Trenberth, James Hansen, and colleagues have identified the monitoring of Earth's energy imbalance as an important metric to help policymakers guide the pace for mitigation and adaptation measures. Because of climate system inertia, longer-term EEI (Earth's energy imbalance) trends can forecast further changes that are "in the pipeline".
Scientists found that the EEI is the most important metric related to climate change. It is the net result of all the processes and feedbacks in play in the climate system. Knowing how much extra energy affects weather systems and rainfall is vital to understand the increasing weather extremes.
In 2012, NASA scientists reported that to stop global warming atmospheric CO2 concentration would have to be reduced to 350 ppm or less, assuming all other climate forcings were fixed. As of 2020, atmospheric CO2 reached 415 ppm and all long-lived greenhouse gases exceeded a 500 ppm CO2-equivalent concentration due to continued growth in human emissions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " ASR = (1-A) \\times 340~\\mathrm{W}~\\mathrm{m}^{-2} \\simeq 240~\\mathrm{W}~\\mathrm{m}^{-2}."
},
{
"math_id": 1,
"text": " OLR \\simeq \\epsilon \\sigma T_\\text{a}^4 + (1-\\epsilon) \\sigma T_\\text{s}^4."
},
{
"math_id": 2,
"text": " ASR = OLR."
},
{
"math_id": 3,
"text": "\n\\begin{align}\n\\Delta E / \\Delta t &= ( \\ \\Delta E_T + \\Delta E_C + \\Delta E_W + \\Delta E_A + \\Delta E_G + \\Delta E_S + \\Delta E_I +... \\ ) / \\Delta t \\\\ \n\\\\\n&= ASR - OLR.\n\\end{align}"
},
{
"math_id": 4,
"text": "EEI \\equiv ASR - OLR"
},
{
"math_id": 5,
"text": "EEI \\gtrsim \\Delta OHC / \\Delta t"
}
] | https://en.wikipedia.org/wiki?curid=944638 |
9447566 | Rational consequence relation | In logic, a rational consequence relation is a non-monotonic consequence relation satisfying certain properties listed below.
A rational consequence relation is a logical framework that refines traditional deductive reasoning to better model real-world scenarios. It incorporates rules like reflexivity, left logical equivalence, right-hand weakening, cautious monotony, disjunction on the left-hand side, logical and on the right-hand side, and rational monotony. These rules enable the relation to handle everyday situations more effectively by allowing for non-monotonic reasoning, where conclusions can be drawn based on usual rather than absolute implications. This approach is particularly useful in cases where adding more information can change the outcome, providing a more nuanced understanding than monotone consequence relations.
Properties.
A rational consequence relation formula_0 satisfies:
; REF : Reflexivity formula_1
and the so-called Gabbay–Makinson rules:
; LLE : Left logical equivalence formula_2
; RWE : Right-hand weakening formula_3
; CMO : Cautious monotonicity formula_4
; DIS : Logical or (i.e. disjunction) on left hand side formula_5
; AND : Logical and on right hand side formula_6
; RMO : Rational monotonicity formula_7
Uses.
The rational consequence relation is non-monotonic, and the relation formula_8 is intended to carry the meaning "theta usually implies phi" or "phi usually follows from theta". In this sense it is more useful for modeling some everyday situations than a monotone consequence relation because the latter relation models facts in a more strict boolean fashion—something either follows under all circumstances or it does not.
Example: cake.
The statement "If a cake contains sugar then it tastes good" implies under a monotone consequence relation the statement "If a cake contains sugar and soap then it tastes good." Clearly this doesn't match our own understanding of cakes. By asserting "If a cake contains sugar then it usually tastes good" a rational consequence relation allows for a more realistic model of the real world, and certainly it does not automatically follow that "If a cake contains sugar and soap then it usually tastes good."
Note that if we also have the information "If a cake contains sugar then it usually contains butter" then we may legally conclude (under CMO) that "If a cake contains sugar and butter then it usually tastes good." Equally, in the absence of a statement such as "If a cake contains sugar then usually it contains no soap", we may legally conclude from RMO that "If the cake contains sugar and soap then it usually tastes good."
If this latter conclusion seems ridiculous to you then it is likely that you are subconsciously asserting your own preconceived knowledge about cakes when evaluating the validity of the statement. That is, from your experience you know that cakes that contain soap are likely to taste bad so you add to the system your own knowledge such as "Cakes that contain sugar do not usually contain soap.", even though this knowledge is absent from it. If the conclusion seems silly to you then you might consider replacing the word "soap" with the word "eggs" to see if it changes your feelings.
Example: drugs.
Consider the sentences:
We may consider it reasonable to conclude:
This would not be a valid conclusion under a monotonic deduction system (omitting of course the word 'usually'), since the third sentence would contradict the first two. In contrast the conclusion follows immediately using the Gabbay–Makinson rules: applying the rule CMO to the last two sentences yields the result.
Consequences.
The following consequences follow from the above rules:
;MP : Modus ponens formula_9
MP is proved via the rules AND and RWE.
;CON : Conditionalisation formula_10
;CC : Cautious cut formula_11
The notion of "cautious cut" simply encapsulates the operation of conditionalisation, followed by MP. It may seem redundant in this sense, but it is often used in proofs so it is useful to have a name for it to act as a shortcut.
;SCL : Supraclassity formula_12
SCL is proved trivially via REF and RWE.
Rational consequence relations via atom preferences.
Let formula_13 be a finite language. An atom is a formula of the form formula_14 (where formula_15 and formula_16). Notice that there is a unique valuation which makes any given atom true (and conversely each valuation satisfies precisely one atom). Thus an atom can be used to represent a preference about what we believe ought to be true.
Let formula_17 be the set of all atoms in L. For formula_18 SL, define formula_19.
Let formula_20 be a sequence of subsets of formula_17. For formula_21, formula_22 in SL, let the relation formula_23 be such that formula_24 if one of the following holds:
formula_25 for every formula_26; or
formula_28, where i is the least index with formula_27.
Then the relation formula_23 is a rational consequence relation. This may easily be verified by checking directly that it satisfies the GM-conditions.
The idea behind the sequence of atom sets is that the earlier sets account for the most likely situations such as "young people are usually law abiding" whereas the later sets account for the less likely situations such as "young joyriders are usually not law abiding".
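A small Python sketch of this construction for a two-proposition language, using the defining clauses given above (entailment holds if S_θ misses every s_i, or if the overlap of S_θ with the first s_i it meets is contained in S_φ). Formulas are represented as Boolean functions of valuations and atoms as the valuations themselves; the preference sets are hypothetical:

```python
from itertools import product

PROPS = ("p1", "p2")                      # a small finite language L

# An atom is identified with the unique valuation making it true.
ATOMS = [dict(zip(PROPS, values)) for values in product([True, False], repeat=len(PROPS))]

def models(formula):
    """S_theta: the set of atoms (valuations) satisfying the formula."""
    return [a for a in ATOMS if formula(a)]

def entails(theta, phi, s):
    """theta |~_s phi for a sequence s of sets of atoms (here: lists of valuations)."""
    s_theta = models(theta)
    for s_i in s:
        overlap = [a for a in s_theta if a in s_i]
        if overlap:                       # the first s_i meeting S_theta decides
            return all(phi(a) for a in overlap)
    return True                           # S_theta misses every s_i

# Hypothetical preferences encoding "p1 usually goes with p2": the atom p1 & p2
# is treated as more likely than the atom p1 & not-p2.
s = [
    [a for a in ATOMS if a["p1"] and a["p2"]],       # most likely situations
    [a for a in ATOMS if a["p1"] and not a["p2"]],   # less likely situations
]

print(entails(lambda a: a["p1"], lambda a: a["p2"], s))                  # True
print(entails(lambda a: a["p1"] and not a["p2"], lambda a: a["p2"], s))  # False
```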
The representation theorem.
It can be proven that any rational consequence relation on a finite language is representable via a sequence of atom preferences above. That is, for any such rational consequence relation formula_0 there is a sequence formula_20 of subsets of formula_17 such that the associated rational consequence relation formula_23 is the same relation: formula_36 | [
{
"math_id": 0,
"text": "\\vdash"
},
{
"math_id": 1,
"text": "\\theta \\vdash \\theta"
},
{
"math_id": 2,
"text": "\\frac{\\theta \\vdash \\psi \\quad\\quad \\theta \\equiv \\phi}{\\phi \\vdash \\psi}"
},
{
"math_id": 3,
"text": "\\frac{\\theta \\vdash \\phi \\quad\\quad \\phi \\models \\psi}{\\theta \\vdash \\psi}"
},
{
"math_id": 4,
"text": "\\frac{\\theta \\vdash \\phi \\quad\\quad \\theta \\vdash \\psi}{\\theta \\wedge \\psi \\vdash \\phi}"
},
{
"math_id": 5,
"text": "\\frac{\\theta \\vdash \\psi \\quad\\quad \\phi \\vdash \\psi}{\\theta \\vee \\phi \\vdash \\psi}"
},
{
"math_id": 6,
"text": "\\frac{\\theta \\vdash \\phi \\quad\\quad \\theta \\vdash \\psi}{\\theta \\vdash \\phi \\wedge \\psi}"
},
{
"math_id": 7,
"text": "\\frac{\\phi \\not\\vdash \\neg\\theta \\quad\\quad \\phi \\vdash \\psi}{\\phi \\wedge \\theta \\vdash \\psi}"
},
{
"math_id": 8,
"text": "\\theta \\vdash \\phi"
},
{
"math_id": 9,
"text": "\\frac{\\theta \\vdash \\phi \\quad\\quad \\theta \\vdash \\left( \\phi \\rightarrow \\psi \\right)}{\\theta \\vdash \\psi}"
},
{
"math_id": 10,
"text": "\\frac{\\theta \\wedge \\phi \\vdash \\psi}{\\theta \\vdash \\left(\\phi \\rightarrow \\psi \\right)}"
},
{
"math_id": 11,
"text": "\\frac{\\theta \\vdash \\phi \\quad\\quad \\theta \\wedge \\phi \\vdash \\psi}{\\theta \\vdash \\psi}"
},
{
"math_id": 12,
"text": "\\frac{\\theta \\models \\phi}{\\theta \\vdash \\phi}"
},
{
"math_id": 13,
"text": "L = \\{p_1, \\ldots , p_n\\}"
},
{
"math_id": 14,
"text": "\\bigwedge_{i=1}^n p^\\epsilon_i"
},
{
"math_id": 15,
"text": "p^1 = p"
},
{
"math_id": 16,
"text": "p^{-1} = \\neg p"
},
{
"math_id": 17,
"text": "At^L"
},
{
"math_id": 18,
"text": "\\theta \\in"
},
{
"math_id": 19,
"text": "S_\\theta = \\{\\alpha \\in At^L | \\alpha \\models^{SC} \\theta \\}"
},
{
"math_id": 20,
"text": "\\vec{s} = s_1, \\ldots , s_m"
},
{
"math_id": 21,
"text": "\\theta"
},
{
"math_id": 22,
"text": "\\phi"
},
{
"math_id": 23,
"text": "\\vdash_\\vec{s}"
},
{
"math_id": 24,
"text": "\\theta \\vdash_{\\vec{s}} \\phi"
},
{
"math_id": 25,
"text": "S_\\theta \\cap s_i = \\emptyset"
},
{
"math_id": 26,
"text": "1 \\leq i \\leq m"
},
{
"math_id": 27,
"text": "S_\\theta \\cap s_i \\neq \\emptyset"
},
{
"math_id": 28,
"text": "S_\\theta \\cap s_i \\subseteq S_\\phi"
},
{
"math_id": 29,
"text": "s_2"
},
{
"math_id": 30,
"text": "s_2 \\setminus s_1"
},
{
"math_id": 31,
"text": "s_3"
},
{
"math_id": 32,
"text": "s_3 \\setminus s_2 \\setminus s_1"
},
{
"math_id": 33,
"text": "s_m"
},
{
"math_id": 34,
"text": "s_m \\setminus \\bigcup_{i=1}^{m-1} s_i"
},
{
"math_id": 35,
"text": "s_i"
},
{
"math_id": 36,
"text": "{\\vdash_\\vec{s}} = {\\vdash}"
}
] | https://en.wikipedia.org/wiki?curid=9447566 |
9448193 | Boolean network | Discrete set of boolean variables
A Boolean network consists of a discrete set of boolean variables, each of which has a Boolean function (possibly different for each variable) assigned to it; the function takes inputs from a subset of those variables, and its output determines the state of the variable it is assigned to. This set of functions in effect determines a topology (connectivity) on the set of variables, which then become nodes in a network. Usually, the dynamics of the system is taken as a discrete time series where the state of the entire network at time "t"+1 is determined by evaluating each variable's function on the state of the network at time "t". This may be done synchronously or asynchronously.
Boolean networks have been used in biology to model regulatory networks. Although Boolean networks are a crude simplification of genetic reality, where genes are not simple binary switches, there are several cases where they correctly capture the pattern of expressed and suppressed genes.
The seemingly mathematically easy (synchronous) model was only fully understood in the mid-2000s.
Classical model.
A Boolean network is a particular kind of sequential dynamical system, where time and states are discrete, i.e. both the set of variables and the set of states in the time series each have a bijection onto an integer series.
A random Boolean network (RBN) is one that is randomly selected from the set of all possible Boolean networks of a particular size, "N". One can then study statistically how the expected properties of such networks depend on various statistical properties of the ensemble of all possible networks. For example, one may study how the RBN behavior changes as the average connectivity is changed.
The first Boolean networks were proposed by Stuart A. Kauffman in 1969 as random models of genetic regulatory networks, but their mathematical understanding only started in the 2000s.
Attractors.
Since a Boolean network has only 2"N" possible states, a trajectory will sooner or later reach a previously visited state, and thus, since the dynamics are deterministic, the trajectory will fall into a steady state or cycle called an attractor (though in the broader field of dynamical systems a cycle is only an attractor if perturbations from it lead back to it). If the attractor has only a single state it is called a "point attractor", and if the attractor consists of more than one state it is called a "cycle attractor". The set of states that lead to an attractor is called the "basin" of the attractor. States which occur only at the beginning of trajectories (no trajectories lead "to" them), are called "garden-of-Eden" states and the dynamics of the network flow from these states towards attractors. The time it takes to reach an attractor is called "transient time".
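To make these notions concrete, the following Python sketch (illustrative only, reusing the synchronous `step` function from the sketch above) iterates a trajectory until a state repeats and reports the transient time and the attractor length (1 for a point attractor, greater than 1 for a cycle attractor).

```python
def find_attractor(initial_state, step):
    """Follow the trajectory from initial_state until a previously visited state recurs."""
    seen = {}                        # state -> time of first visit
    state, t = initial_state, 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    transient_time = seen[state]     # steps needed to enter the attractor
    cycle_length = t - seen[state]   # 1 => point attractor, >1 => cycle attractor
    return transient_time, cycle_length

# Using the 3-node example network defined in the previous sketch:
transient, length = find_attractor((True, False, True), step)
print("transient time:", transient, "attractor length:", length)
```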
With growing computer power and increasing understanding of the seemingly simple model, different authors gave different estimates for the mean number and length of the attractors; a brief summary of key publications follows.
Stability.
In dynamical systems theory, the structure and length of the attractors of a network correspond to the dynamic phase of the network. The stability of Boolean networks depends on the connections of their nodes. A Boolean network can exhibit stable, critical or chaotic behavior. This phenomenon is governed by a critical value of the average number of connections of nodes (formula_0), and can be characterized by the Hamming distance as distance measure. In the unstable regime, the distance between two initially close states on average grows exponentially in time, while in the stable regime it decreases exponentially. Here, "initially close states" means that the Hamming distance is small compared with the number of nodes (formula_1) in the network.
For the "N"-"K" model, the network is stable if formula_2, critical if formula_3, and unstable if formula_4.
The state of a given node formula_5 is updated according to its truth table, whose outputs are randomly populated. formula_6 denotes the probability of assigning an off output to a given series of input signals.
If formula_7 for every node, the transition between the stable and chaotic range depends on formula_8. According to Bernard Derrida and Yves Pomeau, the critical value of the average number of connections is formula_9.
If formula_10 is not constant, and there is no correlation between the in-degrees and out-degrees, the condition of stability is determined by formula_11. The network is stable if formula_12, critical if formula_13, and unstable if formula_14.
The conditions of stability are the same in the case of networks with scale-free topology where the in- and out-degree distribution is a power-law distribution: formula_15, and formula_16, since every out-link from a node is an in-link to another.
Sensitivity shows the probability that the output of the Boolean function of a given node changes if its input changes. For random Boolean networks, formula_17. In the general case, stability of the network is governed by the largest eigenvalue formula_18 of matrix formula_19, where formula_20, and formula_21 is the adjacency matrix of the network. The network is stable if formula_22, critical if formula_23, unstable if formula_24.
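A rough numerical sketch of these stability criteria (all parameter values below are arbitrary, illustrative choices): the critical connectivity for a constant bias p follows directly from the Derrida–Pomeau formula, and the eigenvalue criterion can be checked with NumPy.

```python
import numpy as np

def critical_connectivity(p):
    """Derrida-Pomeau critical connectivity K_c = 1 / (2 p (1 - p)) for constant bias p."""
    return 1.0 / (2.0 * p * (1.0 - p))

print(critical_connectivity(0.5))      # -> 2.0: unbiased functions give K_c = 2

# Eigenvalue criterion: Q_ij = q_i * A_ij with sensitivity q_i = 2 p_i (1 - p_i),
# A the adjacency matrix.  Hypothetical 3-node network and biases:
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 1, 0]])
p = np.array([0.5, 0.3, 0.8])
q = 2 * p * (1 - p)
Q = q[:, None] * A                     # scale row i of A by q_i
lam = max(abs(np.linalg.eigvals(Q)))
print("largest |eigenvalue| of Q:", round(float(lam), 3),
      "-> stable" if lam < 1 else "-> critical/unstable")
```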
Variations of the model.
Other topologies.
One theme is to study different underlying graph topologies.
Other updating schemes.
Classical Boolean networks (sometimes called CRBN, i.e. Classic Random Boolean Network) are synchronously updated. Motivated by the fact that genes don't usually change their state simultaneously, different alternatives have been introduced. A common classification is the following:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K_{c}"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "K<K_{c}"
},
{
"math_id": 3,
"text": "K=K_{c}"
},
{
"math_id": 4,
"text": "K>K_{c}"
},
{
"math_id": 5,
"text": " n_{i} "
},
{
"math_id": 6,
"text": " p_{i} "
},
{
"math_id": 7,
"text": " p_{i}=p=const. "
},
{
"math_id": 8,
"text": " p "
},
{
"math_id": 9,
"text": " K_{c}=1/[2p(1-p)] "
},
{
"math_id": 10,
"text": " K "
},
{
"math_id": 11,
"text": " \\langle K^{in}\\rangle "
},
{
"math_id": 12,
"text": "\\langle K^{in}\\rangle <K_{c}"
},
{
"math_id": 13,
"text": "\\langle K^{in}\\rangle =K_{c}"
},
{
"math_id": 14,
"text": "\\langle K^{in}\\rangle >K_{c}"
},
{
"math_id": 15,
"text": " P(K) \\propto K^{-\\gamma} "
},
{
"math_id": 16,
"text": "\\langle K^{in} \\rangle=\\langle K^{out} \\rangle "
},
{
"math_id": 17,
"text": " q_{i}=2p_{i}(1-p_{i}) "
},
{
"math_id": 18,
"text": " \\lambda_{Q} "
},
{
"math_id": 19,
"text": " Q "
},
{
"math_id": 20,
"text": " Q_{ij}=q_{i}A_{ij} "
},
{
"math_id": 21,
"text": " A "
},
{
"math_id": 22,
"text": "\\lambda_{Q}<1"
},
{
"math_id": 23,
"text": "\\lambda_{Q}=1"
},
{
"math_id": 24,
"text": "\\lambda_{Q}>1"
}
] | https://en.wikipedia.org/wiki?curid=9448193 |
9451796 | Disk covering problem | The disk covering problem asks for the smallest real number formula_0 such that formula_1 disks of radius formula_0 can be arranged in such a way as to cover the unit disk. Dually, for a given radius "ε", one wishes to find the smallest integer "n" such that "n" disks of radius "ε" can cover the unit disk.
The best solutions known to date are as follows.
Method.
The following picture shows an example of a dashed disk of radius 1 covered by six solid-line disks of radius ~0.6. One of the covering disks is placed central and the remaining five in a symmetrical way around it.
While this is not the best layout for r(6), similar arrangements of six, seven, eight, and nine disks around a central disk, all having the same radius, result in the best layout strategies for r(7), r(8), r(9), and r(10), respectively. The corresponding angles θ are written in the "Symmetry" column in the above table.
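The coverage claim for such a symmetric layout can be checked numerically. The sketch below (an illustration only; the ring radius and disk radius are ad hoc and not the optimal layouts referred to in the table) places one disk at the origin and k disks evenly spaced on a ring, then uses random sampling to test whether disks of radius r cover the unit disk.

```python
import math, random

def covers_unit_disk(centers, r, samples=100_000, seed=0):
    """Monte Carlo check: do disks of radius r at the given centers cover the unit disk?"""
    rng = random.Random(seed)
    for _ in range(samples):
        while True:                                   # sample a point uniformly in the unit disk
            x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
            if x * x + y * y <= 1:
                break
        if not any((x - cx) ** 2 + (y - cy) ** 2 <= r * r for cx, cy in centers):
            return False
    return True

def central_layout(k, ring):
    """One disk centred at the origin plus k disks centred on a ring of the given radius."""
    return [(0.0, 0.0)] + [(ring * math.cos(2 * math.pi * i / k),
                            ring * math.sin(2 * math.pi * i / k)) for i in range(k)]

# Six disks of radius 0.61 (one central, five on a ring of radius 0.7) do cover the unit disk,
# although this is not the optimal configuration for n = 6.
print(covers_unit_disk(central_layout(5, ring=0.7), r=0.61))   # True
```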
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r(n)"
},
{
"math_id": 1,
"text": "n"
}
] | https://en.wikipedia.org/wiki?curid=9451796 |
945225 | Isoperimetric dimension | Concept in topology
In mathematics, the isoperimetric dimension of a manifold is a notion of dimension that tries to capture how the "large-scale behavior" of the manifold resembles that of a Euclidean space (unlike the topological dimension or the Hausdorff dimension which compare different "local behaviors" against those of the Euclidean space).
In the Euclidean space, the isoperimetric inequality says that of all bodies with the same volume, the ball has the smallest surface area. In other manifolds it is usually very difficult to find the precise body minimizing the surface area, and this is not what the isoperimetric dimension is about. The question we will ask is, what is "approximately" the minimal surface area, whatever the body realizing it might be.
Formal definition.
We say about a differentiable manifold "M" that it satisfies a "d"-dimensional isoperimetric inequality if for any open set "D" in "M" with a smooth boundary one has
formula_0
The notations vol and area refer to the regular notions of volume and surface area on the manifold, or more precisely, if the manifold has "n" topological dimensions then vol refers to "n"-dimensional volume and area refers to ("n" − 1)-dimensional volume. "C" here refers to some constant, which does not depend on "D" (it may depend on the manifold and on "d").
The isoperimetric dimension of "M" is the supremum of all values of "d" such that "M" satisfies a "d"-dimensional isoperimetric inequality.
Examples.
A "d"-dimensional Euclidean space has isoperimetric dimension "d". This is the well known isoperimetric problem — as discussed above, for the Euclidean space the constant "C" is known precisely since the minimum is achieved for the ball.
An infinite cylinder (i.e. a product of the circle and the line) has topological dimension 2 but isoperimetric dimension 1. Indeed, multiplying any manifold with a compact manifold does not change the isoperimetric dimension (it only changes the value of the constant "C"). Any compact manifold has isoperimetric dimension 0.
It is also possible for the isoperimetric dimension to be larger than the topological dimension. The simplest example is the infinite jungle gym, which has topological dimension 2 and isoperimetric dimension 3. See for pictures and Mathematica code.
The hyperbolic plane has topological dimension 2 and isoperimetric dimension infinity. In fact the hyperbolic plane has positive Cheeger constant. This means that it satisfies the inequality
formula_1
which obviously implies infinite isoperimetric dimension.
Consequences of isoperimetry.
A simple integration over "r" (or sum in the case of graphs) shows that a "d"-dimensional isoperimetric inequality implies a "d"-dimensional volume growth, namely
formula_2
where "B"("x","r") denotes the ball of radius "r" around the point "x" in the Riemannian distance or in the graph distance. In general, the opposite is not true, i.e. even uniformly exponential volume growth does not imply any kind of isoperimetric inequality. A simple example can be had by taking the graph Z (i.e. all the integers with edges between "n" and "n" + 1) and connecting to the vertex "n" a complete binary tree of height |"n"|. Both properties (exponential growth and 0 isoperimetric dimension) are easy to verify.
An interesting exception is the case of groups. It turns out that a group with polynomial growth of order "d" has isoperimetric dimension "d". This holds both for the case of Lie groups and for the Cayley graph of a finitely generated group.
A theorem of Varopoulos connects the isoperimetric dimension of a graph to the rate of escape of random walk on the graph. The result states
"Varopoulos' theorem: If G is a graph satisfying a d-dimensional isoperimetric inequality then"
formula_3
"where" formula_4 "is the probability that a random walk on" "G" "starting from" "x" "will be in" "y" "after" "n" "steps, and" "C" "is some constant."
Discusses the topic in the context of manifolds, no mention of graphs.
This paper contains the result that on groups of polynomial growth, volume growth and isoperimetric inequalities are equivalent. In French.
This paper contains a precise definition of the isoperimetric dimension of a graph, and establishes many of its properties. | [
{
"math_id": 0,
"text": "\\operatorname{area}(\\partial D)\\geq C\\operatorname{vol}(D)^{(d-1)/d}."
},
{
"math_id": 1,
"text": "\\operatorname{area}(\\partial D)\\geq C\\operatorname{vol}(D),"
},
{
"math_id": 2,
"text": "\\operatorname{vol} B(x,r)\\geq Cr^d"
},
{
"math_id": 3,
"text": "p_n(x,y)\\leq Cn^{-d/2} "
},
{
"math_id": 4,
"text": " p_n(x,y)"
}
] | https://en.wikipedia.org/wiki?curid=945225 |
9453042 | Karp–Flatt metric | The Karp–Flatt metric is a measure of parallelization of code in parallel processor systems. This metric exists in addition to Amdahl's law and Gustafson's law as an indication of the extent to which a particular computer code is parallelized. It was proposed by Alan H. Karp and Horace P. Flatt in 1990.
Description.
Given a parallel computation exhibiting speedup formula_0 on formula_1 processors, where formula_1 > 1, the experimentally determined serial fraction formula_2 is defined to be the Karp–Flatt metric:
formula_3
The lower the value of formula_2, the better the parallelization.
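A minimal Python sketch of the computation (the speedup figures below are made up for illustration; they correspond to a program whose serial fraction is about 10%):

```python
def karp_flatt(speedup, p):
    """Experimentally determined serial fraction e from speedup psi measured on p > 1 processors."""
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# Hypothetical measurements: observed speedups on 2, 4, 8 and 16 processors.
measurements = {2: 1.82, 4: 3.08, 8: 4.71, 16: 6.40}
for p, psi in measurements.items():
    print(f"p = {p:2d}   speedup = {psi:5.2f}   e = {karp_flatt(psi, p):.3f}")
# e stays close to 0.10 for every p, suggesting the efficiency loss comes from a fixed
# serial fraction rather than from overhead that grows with the number of processors.
```

If, instead, e grew steadily with p, the metric would point to increasing parallel overhead (communication, load imbalance) rather than inherently serial work.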
Justification.
There are many ways to measure the performance of a parallel algorithm running on a parallel processor. The Karp–Flatt metric defines a metric which reveals aspects of the performance that are not easily discerned from other metrics. A pseudo-"derivation" of sorts follows from Amdahl's Law, which can be written as:
formula_4
where formula_5 is the total time taken to run the code on formula_1 processors, formula_6 is the time taken by the serial (non-parallelizable) part, and formula_7 is the time taken by the parallelizable part when run on a single processor.
Using the result obtained by substituting formula_1 = 1, viz. formula_8, and defining the serial fraction formula_2 = formula_9, the equation can be rewritten as
formula_10
In terms of the speedup formula_0 = formula_11:
formula_12
Solving for the serial fraction, we get the Karp–Flatt metric as above. Note that this is not a "derivation" from Amdahl's law as the left hand side represents a metric rather than a mathematically derived quantity. The treatment above merely shows that the Karp–Flatt metric is consistent with Amdahl's Law.
Use.
While the serial fraction e is often mentioned in computer science literature, it was rarely used as a diagnostic tool the way speedup and efficiency are. Karp and Flatt hoped to correct this by proposing this metric. This metric addresses the inadequacies of the other laws and quantities used to measure the parallelization of computer code. In particular, Amdahl's law does not take into account load balancing issues, nor does it take overhead into consideration. Using the serial fraction as a metric poses definite advantages over the others, particularly as the number of processors grows.
For a problem of fixed size, the efficiency of a parallel computation typically decreases as the number of processors increases. By using the serial fraction obtained experimentally using the Karp–Flatt metric, we can determine if the efficiency decrease is due to limited opportunities of parallelism or increases in algorithmic or architectural overhead. | [
{
"math_id": 0,
"text": "\\psi"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "e"
},
{
"math_id": 3,
"text": "e = \\frac{\\frac{1}{\\psi}-\\frac{1}{p}}{1-\\frac{1}{p}}"
},
{
"math_id": 4,
"text": "T(p) = T_s + \\frac{T_p}{p}"
},
{
"math_id": 5,
"text": "T(p)"
},
{
"math_id": 6,
"text": "T_s"
},
{
"math_id": 7,
"text": "T_p"
},
{
"math_id": 8,
"text": "T(1) = T_s + T_p"
},
{
"math_id": 9,
"text": "\\frac{T_s}{T(1)}"
},
{
"math_id": 10,
"text": "T(p) = T(1) e + \\frac{T(1) (1-e)}{p}"
},
{
"math_id": 11,
"text": "\\frac{T(1)}{T(p)}"
},
{
"math_id": 12,
"text": "\\frac{1}{\\psi} = e + \\frac{1-e}{p}"
}
] | https://en.wikipedia.org/wiki?curid=9453042 |
945330 | Donald McKay | American shipbuilder
Donald McKay (September 4, 1810 – September 20, 1880) was a Nova Scotian-born American designer and builder of sailing ships, famed for his record-setting extreme clippers.
Early life.
McKay was born in Jordan Falls, Shelburne County, on Nova Scotia's South Shore, the oldest son and one of eighteen children of Hugh McKay, a fisherman and a farmer, and Ann McPherson McKay. Both of his parents were of Scottish descent. He was named after his grandfather, Captain Donald McKay, a British officer, who after the Revolutionary war moved to Nova Scotia from the Scottish Highlands.
Early years as a shipbuilder.
In 1826 McKay moved to New York, where he served his apprenticeship under Isaac Webb in the Webb & Allen shipyard from 1827 to 1831. He then returned briefly to Nova Scotia and built a boat with his uncle, but after they were swindled out of the proceeds he returned to New York and took a job in the Brown & Bell shipyard, working for Jacob Bell. In 1840, following a recommendation from Bell, he was taken on as a supervisor at the Brooklyn Navy Yard, but stayed only briefly because of the anti-immigrant sentiment towards him (as a Canadian) from the men he was supervising. Bell came to the rescue and found him an assignment to work on a packet ship in a shipyard in Wiscasset, Maine. Returning south when that assignment was complete, he stopped in Newburyport and took a job as a foreman in the yard of John Currier, Jr., where he supervised the construction of the 427-ton "Delia Walker." Currier was very impressed with McKay and offered him a five-year contract, which McKay refused, driven by the desire to own his own business.
In 1841, William Currier (no relation to John) offered McKay the chance to become a partner in what would become the Currier & McKay shipyard in Newburyport. Two years later, with McKay now designing ships on his own, he and Currier parted ways and McKay went into business with a man named William Picket, building the packet ships "St. George" and "John R. Skiddy". The partnership with Picket was "pleasant and profitable", but after McKay built the "Joshua Bates" for Enoch Train's new packet line to Liverpool in 1844, Train persuaded him to move to East Boston and start his own shipyard there. Train not only provided the financing for McKay to do this but then became his biggest customer, commissioning seven more packet ships and four clipper ships between 1845 and 1853—including the legendary extreme clipper "Flying Cloud".
Ships built before 1845.
Sources:
East Boston shipyard.
In 1845 McKay, as a sole owner, established his own shipyard on Border Street, East Boston, where he built some of the finest American ships over a career of almost 25 years.
One of his first large orders was building five large packet ships for Enoch Train's White Diamond line between 1845 and 1850: "Washington Irving", "Anglo Saxon", "Anglo American", "Daniel Webster", and "Ocean Monarch". The "Ocean Monarch" was lost to fire on August 28, 1848, soon after leaving Liverpool and within sight of Wales; over 170 of the passengers and crew perished. The "Washington Irving" carried Patrick Kennedy, grandfather of Kennedy family patriarch Joseph P. Kennedy Sr., to Boston in 1849.
In the summer of 1851, McKay visited Liverpool and secured a contract to build four large ships for James Baines & Co.'s Australian trade: "Lightning" (1854), "Champion of the Seas" (1854), "James Baines" (1854), and "Donald McKay" (1855).
Ships built after 1845.
Sources:
Late life.
In 1869, under financial pressure from previous losses, McKay sold his shipyard and worked for some time in other shipyards. He retired to his farm near Hamilton, Massachusetts, spending the rest of his life there. He died in 1880 in relative poverty and was buried in Newburyport.
Design practices.
McKay's designs were characterized by a long fine bow with increasing hollow and waterlines. He was perhaps influenced by the writings of John W. Griffiths, designer of the China clipper "Rainbow" in 1845. The long hollow bow helped to penetrate rather than ride over the wave produced by the hull at high speeds, reducing resistance as hull speed is approached. Hull speed is the natural speed of a wave the same length as the ship, in knots, formula_0, where LWL = Length of Water Line in feet. His hulls had a shorter afterbody, putting the center of buoyancy farther aft than was typical of the period, as well as a full midsection with rather flat bottom. These characteristics led to lower drag at high speed compared to other ships of similar length, as well as great stability which translated into the ability to carry sail in high winds (more power in extreme conditions). His fishing schooner design was even more radical than his clippers, being a huge flat-bottomed dinghy similar in form to 20th century planing boats. These design changes were not favorable for light wind conditions such as were expected on the China trade, but were profitable in the California and Australian trades.
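As a rough worked example of the hull-speed rule (the waterline lengths below are illustrative only, not measurements of McKay's ships):

```python
import math

def hull_speed_knots(lwl_feet):
    """Natural hull speed in knots, 1.34 * sqrt(LWL), with LWL in feet."""
    return 1.34 * math.sqrt(lwl_feet)

for lwl in (120, 200, 250):    # illustrative waterline lengths in feet
    print(f"LWL = {lwl:3d} ft  ->  hull speed ~ {hull_speed_knots(lwl):.1f} knots")
```

A longer waterline thus raises the speed at which wave-making resistance climbs steeply, one reason long, fine-lined hulls could sustain higher speeds.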
Legacy and honors.
Pan Am named one of their Boeing 747s "Clipper Donald McKay" in his honor.
There is a monument to McKay in South Boston, near Fort Independence, overlooking the channel, that lists all his ships. There were more than thirty ships listed.
His house in East Boston was designated a Boston Landmark in 1977 and is also on the National Register of Historic Places.
A memorial pavilion to McKay, including a painting of his famous "Flying Cloud", can be found at Piers Park in East Boston.
McKay was inducted into the National Sailing Hall of Fame on November 9, 2019.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1.34 \\times \\sqrt{\\mbox{LWL}}"
}
] | https://en.wikipedia.org/wiki?curid=945330 |
945565 | Calkin algebra | In functional analysis, the Calkin algebra, named after John Williams Calkin, is the quotient of "B"("H"), the ring of bounded linear operators on a separable infinite-dimensional Hilbert space "H", by the ideal "K"("H") of compact operators. Here the addition in "B"("H") is addition of operators and the multiplication in "B"("H") is composition of operators; it is easy to verify that these operations make "B"("H") into a ring. When scalar multiplication is also included, "B"("H") becomes in fact an algebra over the same field over which "H" is a Hilbert space.
Since the compact operators form a closed two-sided ideal in "B"("H"), the quotient is again an algebra, and one has a short exact sequence
formula_0
which induces a six-term cyclic exact sequence in K-theory. Those operators in "B"("H") which are mapped to an invertible element of the Calkin algebra are called Fredholm operators, and their index can be described both using K-theory and directly. One can conclude, for instance, that the collection of unitary operators in the Calkin algebra consists of homotopy classes indexed by the integers Z. This is in contrast to "B"("H"), where the unitary operators are path connected. | [
{
"math_id": 0,
"text": "0 \\to K(H) \\to B(H) \\to B(H)/K(H) \\to 0"
}
] | https://en.wikipedia.org/wiki?curid=945565 |
945656 | Free-electron laser | Laser using electron beam in vacuum as gain medium
A free-electron laser (FEL) is a fourth generation light source producing extremely brilliant and short pulses of radiation. An FEL functions much as a laser but employs relativistic electrons as a gain medium instead of using stimulated emission from atomic or molecular excitations. In an FEL, a "bunch" of electrons passes through a magnetic structure called an undulator or wiggler to generate radiation, which re-interacts with the electrons to make them emit coherently, exponentially increasing its intensity.
As electron kinetic energy and undulator parameters can be adapted as desired, free-electron lasers are tunable and can be built for a wider frequency range than any other type of laser, currently ranging in wavelength from microwaves, through terahertz radiation and infrared, to the visible spectrum, ultraviolet, and X-ray.
The first free-electron laser was developed by John Madey in 1971 at Stanford University using technology developed by Hans Motz and his coworkers, who built an undulator at Stanford in 1953, using the wiggler magnetic configuration. Madey used a 43 MeV electron beam and 5 m long wiggler to amplify a signal.
<templatestyles src="Template:TOC limit/styles.css" />
Beam creation.
To create an FEL, an electron gun is used. A beam of electrons is generated by a short laser pulse illuminating a photocathode located inside a microwave cavity and accelerated to almost the speed of light in a device called a photoinjector. The beam is further accelerated to a design energy by a particle accelerator, usually a linear particle accelerator. Then the beam passes through a periodic arrangement of magnets with alternating poles across the beam path, which creates a side to side magnetic field. The direction of the beam is called the longitudinal direction, while the direction across the beam path is called transverse. This array of magnets is called an undulator or a wiggler, because the Lorentz force of the field forces the electrons in the beam to wiggle transversely, traveling along a sinusoidal path about the axis of the undulator.
The transverse acceleration of the electrons across this path results in the release of photons, which are monochromatic but still incoherent, because the electromagnetic waves from randomly distributed electrons interfere constructively and destructively in time. The resulting radiation power scales linearly with the number of electrons. Mirrors at each end of the undulator create an optical cavity, causing the radiation to form standing waves, or alternately an external excitation laser is provided.
The radiation becomes sufficiently strong that the transverse electric field of the radiation beam interacts with the transverse electron current created by the sinusoidal wiggling motion, causing some electrons to gain and others to lose energy to the optical field via the ponderomotive force.
This energy modulation evolves into electron density (current) modulations with a period of one optical wavelength. The electrons are thus longitudinally clumped into "microbunches", separated by one optical wavelength along the axis. Whereas an undulator alone would cause the electrons to radiate independently (incoherently), the radiation emitted by the bunched electrons is in phase, and the fields add together coherently.
The radiation intensity grows, causing additional microbunching of the electrons, which continue to radiate in phase with each other. This process continues until the electrons are completely microbunched and the radiation reaches a saturated power several orders of magnitude higher than that of the undulator radiation.
The wavelength of the radiation emitted can be readily tuned by adjusting the energy of the electron beam or the magnetic-field strength of the undulators.
FELs are relativistic machines. The wavelength of the emitted radiation, formula_0, is given by
formula_1
or when the wiggler strength parameter K, discussed below, is small
formula_2
where formula_3 is the undulator wavelength (the spatial period of the magnetic field), formula_4 is the relativistic Lorentz factor and the proportionality constant depends on the undulator geometry and is of the order of 1.
This formula can be understood as a combination of two relativistic effects. Imagine you are sitting on an electron passing through the undulator. Due to Lorentz contraction the undulator is shortened by a factor of formula_4, and the electron experiences a much shorter undulator wavelength formula_5. However, the radiation emitted at this wavelength is observed in the laboratory frame of reference, and the relativistic Doppler effect brings the second factor of formula_4 to the above formula. In an X-ray FEL the typical undulator wavelength of 1 cm is transformed to X-ray wavelengths on the order of 1 nm by formula_4 ≈ 2000, i.e. the electrons have to travel at a speed of 0.9999998"c".
Wiggler strength parameter K.
K, a dimensionless parameter, defines the wiggler strength as the relationship between the length of a period and the radius of bend,
formula_6
where formula_7 is the bending radius, formula_8 is the applied magnetic field, formula_9 is the electron mass, and formula_10 is the elementary charge.
Expressed in practical units, the dimensionless undulator parameter is
formula_11.
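A short numerical sketch of these two relations (the field strength, undulator period and Lorentz factor below are illustrative values only):

```python
def undulator_parameter(B0_tesla, lambda_u_cm):
    """Dimensionless undulator parameter K = 0.934 * B0[T] * lambda_u[cm]."""
    return 0.934 * B0_tesla * lambda_u_cm

def radiation_wavelength(lambda_u_m, gamma, K):
    """Resonant wavelength lambda_r = lambda_u / (2 gamma^2) * (1 + K^2 / 2)."""
    return lambda_u_m / (2.0 * gamma ** 2) * (1.0 + K ** 2 / 2.0)

K = undulator_parameter(1.0, 1.0)               # 1 T field, 1 cm period -> K ~ 0.93
lam = radiation_wavelength(0.01, 2000.0, K)     # gamma ~ 2000
print(f"K = {K:.2f}, radiation wavelength ~ {lam * 1e9:.2f} nm")
# A 1 cm undulator period is compressed to nanometre radiation for gamma ~ 2000,
# as described in the text.
```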
Quantum effects.
In most cases, the theory of classical electromagnetism adequately accounts for the behavior of free electron lasers. For sufficiently short wavelengths, quantum effects of electron recoil and shot noise may have to be considered.
Construction.
Free-electron lasers require the use of an electron accelerator with its associated shielding, as accelerated electrons can be a radiation hazard if not properly contained. These accelerators are typically powered by klystrons, which require a high-voltage supply. The electron beam must be maintained in a vacuum, which requires the use of numerous vacuum pumps along the beam path. While this equipment is bulky and expensive, free-electron lasers can achieve very high peak powers, and the tunability of FELs makes them highly desirable in many disciplines, including chemistry, structure determination of molecules in biology, medical diagnosis, and nondestructive testing.
Infrared and terahertz FELs.
The Fritz Haber Institute in Berlin completed a mid-infrared and terahertz FEL in 2013.
X-ray FELs.
The lack of mirror materials that can reflect extreme ultraviolet and x-rays means that X-ray free-electron lasers (XFELs) need to work without a resonant cavity. Consequently, in an XFEL the beam is produced by a single pass of radiation through the undulator. This requires that there be enough amplification over a single pass to produce an appropriate beam.
Hence, XFELs use long undulator sections that are tens or hundreds of meters long. This allows XFELs to produce the brightest X-ray pulses of any human-made x-ray source. The intensity of the pulses from the X-ray laser arises from the principle of self-amplified spontaneous emission (SASE), which leads to microbunching. Initially all electrons are distributed evenly and emit only incoherent spontaneous radiation. Through the interaction of this radiation and the electrons' oscillations, they drift into microbunches separated by a distance equal to one radiation wavelength. This interaction drives all electrons to begin emitting coherent radiation. Emitted radiation can reinforce itself perfectly, with wave crests and wave troughs optimally superimposed on one another. This results in an exponential increase of emitted radiation power, leading to high beam intensities and laser-like properties.
Examples of facilities operating on the SASE FEL principle include the:
In 2022, an upgrade to Stanford University's Linac Coherent Light Source (LCLS-II) used temperatures around −271 °C to produce 10^6 pulses per second of near light-speed electrons, using superconducting niobium cavities.
Seeding and Self-seeding.
One problem with SASE FELs is the lack of temporal coherence due to a noisy startup process. To avoid this, one can "seed" an FEL with a laser tuned to the resonance of the FEL. Such a temporally coherent seed can be produced by more conventional means, such as by high harmonic generation (HHG) using an optical laser pulse. This results in coherent amplification of the input signal; in effect, the output laser quality is characterized by the seed. While HHG seeds are available at wavelengths down to the extreme ultraviolet, seeding is not feasible at x-ray wavelengths due to the lack of conventional x-ray lasers.
In late 2010, in Italy, the seeded-FEL source FERMI@Elettra started commissioning, at the Trieste Synchrotron Laboratory. FERMI@Elettra is a single-pass FEL user-facility covering the wavelength range from 100 nm (12 eV) to 10 nm (124 eV), located next to the third-generation synchrotron radiation facility ELETTRA in Trieste, Italy.
In 2001, a seeding technique called "High-Gain Harmonic Generation", which works down to X-ray wavelengths, was developed at Brookhaven National Laboratory. The technique, which can be multiple-staged in an FEL to achieve increasingly shorter wavelengths, utilizes a longitudinal shift of the radiation relative to the electron bunch to avoid the reduced beam quality caused by a previous stage. This longitudinal staging along the beam is called "Fresh-Bunch". This technique was demonstrated at X-ray wavelengths at the Trieste Synchrotron Laboratory.
A similar staging approach, named "Fresh-Slice", was demonstrated at the Paul Scherrer Institut, also at X-ray wavelengths. In the Fresh-Slice scheme, the short X-ray pulse produced at the first stage is moved to a fresh part of the electron bunch by a transverse tilt of the bunch.
In 2012, scientists working on the LCLS found an alternative solution to the seeding limitation for x-ray wavelengths by self-seeding the laser with its own beam after being filtered through a diamond monochromator. The resulting intensity and monochromaticity of the beam were unprecedented and allowed new experiments to be conducted involving manipulating atoms and imaging molecules. Other labs around the world are incorporating the technique into their equipment.
Research.
Biomedical.
Basic research.
Researchers have explored X-ray free-electron lasers as an alternative to synchrotron light sources that have been the workhorses of protein crystallography and cell biology.
Exceptionally bright and fast X-rays can image proteins using x-ray crystallography. This technique allows first-time imaging of proteins that do not stack in a way that allows imaging by conventional techniques (about 25% of the total number of proteins). Resolutions of 0.8 nm have been achieved with pulse durations of 30 femtoseconds. To get a clear view, a resolution of 0.1–0.3 nm is required. The short pulse durations allow images of X-ray diffraction patterns to be recorded before the molecules are destroyed. The bright, fast X-rays were produced at the Linac Coherent Light Source at SLAC. As of 2014, LCLS was the world's most powerful X-ray FEL.
Due to the increased repetition rates of the next-generation X-ray FEL sources, such as the European XFEL, the expected number of diffraction patterns is also expected to increase by a substantial amount. The increase in the number of diffraction patterns will place a large strain on existing analysis methods. To combat this, several methods have been researched to sort the huge amount of data that typical X-ray FEL experiments will generate. While the various methods have been shown to be effective, it is clear that to pave the way towards single-particle X-ray FEL imaging at full repetition rates, several challenges have to be overcome before the next resolution revolution can be achieved.
New biomarkers for metabolic diseases: by taking advantage of the selectivity and sensitivity gained when combining infrared ion spectroscopy and mass spectrometry, scientists can provide a structural fingerprint of small molecules in biological samples, such as blood or urine. This methodology generates new possibilities to better understand metabolic diseases and to develop novel diagnostic and therapeutic strategies.
Surgery.
Research by Glenn Edwards and colleagues at Vanderbilt University's FEL Center in 1994 found that soft tissues including skin, cornea, and brain tissue could be cut, or ablated, using infrared FEL wavelengths around 6.45 micrometres with minimal collateral damage to adjacent tissue. This led to surgeries on humans, the first ever using a free-electron laser. Starting in 1999, Copeland and Konrad performed three surgeries in which they resected meningioma brain tumors. Beginning in 2000, Joos and Mawn performed five surgeries that cut a window in the sheath of the optic nerve, to test the efficacy for optic nerve sheath fenestration. These eight surgeries produced results consistent with the standard of care and with the added benefit of minimal collateral damage. A review of FELs for medical uses is given in the 1st edition of Tunable Laser Applications.
Fat removal.
Several small, clinical lasers tunable in the 6 to 7 micrometre range with pulse structure and energy to give minimal collateral damage in soft tissue have been created. At Vanderbilt, there exists a Raman shifted system pumped by an Alexandrite laser.
Rox Anderson proposed the medical application of the free-electron laser in melting fats without harming the overlying skin. At infrared wavelengths, water in tissue was heated by the laser, but at wavelengths corresponding to 915, 1210 and 1720 nm, subsurface lipids were differentially heated more strongly than water. The possible applications of this selective photothermolysis (heating tissues using light) include the selective destruction of sebum lipids to treat acne, as well as targeting other lipids associated with cellulite and body fat as well as fatty plaques that form in arteries which can help treat atherosclerosis and heart disease.
Military.
FEL technology is being evaluated by the US Navy as a candidate for an anti-aircraft and anti-missile directed-energy weapon. The Thomas Jefferson National Accelerator Facility's FEL has demonstrated over 14 kW power output. Compact multi-megawatt class FEL weapons are undergoing research. On June 9, 2009 the Office of Naval Research announced it had awarded Raytheon a contract to develop a 100 kW experimental FEL. On March 18, 2010 Boeing Directed Energy Systems announced the completion of an initial design for U.S. Naval use. A prototype FEL system was demonstrated, with a full-power prototype scheduled by 2018.
FEL prize winners.
The FEL prize is given to a person who has contributed significantly to the advancement of the field of free-electron lasers. In addition, it gives the international FEL community the opportunity to recognize its members for their outstanding achievements. The prize winners are announced at the FEL conference, which currently takes place every two years.
Young Scientist FEL Award.
The Young Scientist FEL Award (or "Young Investigator FEL Prize") is intended to honor outstanding contributions to FEL science and technology from a person who is less than 37 years of age at the time of the FEL conference.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\lambda_r"
},
{
"math_id": 1,
"text": "\\lambda_r = \\frac{\\lambda_u}{2 \\gamma^2}\\left(1+\\frac{K^2}{2}\\right)"
},
{
"math_id": 2,
"text": "\\lambda_r \\propto \\frac{\\lambda_u}{2 \\gamma^2}"
},
{
"math_id": 3,
"text": "\\lambda_u"
},
{
"math_id": 4,
"text": "\\gamma"
},
{
"math_id": 5,
"text": "\\lambda_u/\\gamma"
},
{
"math_id": 6,
"text": "K = \\frac{\\gamma \\lambda_u}{ 2 \\pi \\rho } = \\frac{e B_0 \\lambda_u}{2 \\pi m_e c}"
},
{
"math_id": 7,
"text": "\\rho"
},
{
"math_id": 8,
"text": "B_0 "
},
{
"math_id": 9,
"text": "m_e "
},
{
"math_id": 10,
"text": "e "
},
{
"math_id": 11,
"text": "K=0.934 \\cdot B_0\\,\\text{[T]} \\cdot \\lambda_u\\,\\text{[cm]}"
}
] | https://en.wikipedia.org/wiki?curid=945656 |
945957 | New Foundations | Axiomatic set theory devised by W.V.O. Quine
In mathematical logic, New Foundations (NF) is a non-well-founded, finitely axiomatizable set theory conceived by Willard Van Orman Quine as a simplification of the theory of types of "Principia Mathematica".
Definition.
The well-formed formulas of NF are the standard formulas of propositional calculus with two primitive predicates, equality (formula_0) and membership (formula_1). NF can be presented with only two axiom schemata: extensionality (two objects with the same elements are the same object), and a restricted axiom schema of comprehension, which asserts that the set formula_2 exists for every "stratified" formula formula_3.
A formula formula_3 is said to be "stratified" if there exists a function "f" from pieces of formula_3's syntax to the natural numbers, such that for any atomic subformula formula_4 of formula_3 we have "f"("y") = "f"("x") + 1, while for any atomic subformula formula_5 of formula_3, we have "f"("x") = "f"("y").
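Stratification is a purely syntactic condition, so it can be checked mechanically: collect the constraint "f"("y") = "f"("x") + 1 for each membership subformula and "f"("x") = "f"("y") for each equality subformula, and test whether the constraints are jointly satisfiable. The Python sketch below (an illustration, not part of the theory) does this by propagating type offsets over the constraint graph; consistency over the integers suffices, since a consistent assignment on finitely many variables can be shifted into the natural numbers.

```python
from collections import defaultdict, deque

def is_stratified(atomic_subformulas):
    """Decide whether a set of atomic subformulas admits a stratification.

    Each atomic subformula is ('in', x, y) for "x is a member of y"
    (requiring f(y) = f(x) + 1) or ('eq', x, y) for "x = y"
    (requiring f(x) = f(y))."""
    edges = defaultdict(list)                 # variable -> list of (variable, required offset)
    for kind, x, y in atomic_subformulas:
        offset = 1 if kind == 'in' else 0
        edges[x].append((y, offset))          # f(y) - f(x) = offset
        edges[y].append((x, -offset))
    level = {}
    for start in list(edges):
        if start in level:
            continue
        level[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, offset in edges[u]:
                if v not in level:
                    level[v] = level[u] + offset
                    queue.append(v)
                elif level[v] != level[u] + offset:
                    return False              # conflicting type requirements
    return True

print(is_stratified([('in', 'x', 'y'), ('in', 'y', 'z')]))   # True:  x ∈ y ∧ y ∈ z
print(is_stratified([('in', 'x', 'x')]))                     # False: x ∈ x (Russell-style)
```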
Finite axiomatization.
NF can be finitely axiomatized. One advantage of such a finite axiomatization is that it eliminates the notion of stratification. The axioms in a finite axiomatization correspond to natural basic constructions, whereas stratified comprehension is powerful but not necessarily intuitive. In his introductory book, Holmes opted to take the finite axiomatization as basic, and prove stratified comprehension as a theorem. The precise set of axioms can vary, but includes most of the following, with the others provable as theorems:
Typed Set Theory.
New Foundations is closely related to Russellian unramified typed set theory (TST), a streamlined version of the theory of types of "Principia Mathematica" with a linear hierarchy of types. In this many-sorted theory, each variable and set is assigned a type. It is customary to write the "type indices" as superscripts: formula_46 denotes a variable of type "n". Type 0 consists of individuals otherwise undescribed. For each (meta-) natural number "n", type "n"+1 objects are sets of type "n" objects; objects connected by identity have equal types and sets of type "n" have members of type "n"-1. The axioms of TST are extensionality, on sets of the same (positive) type, and comprehension, namely that if formula_47 is a formula, then the set formula_48 exists. In other words, given any formula formula_49, the formula formula_50 is an axiom where formula_51 represents the set formula_48 and is not free in formula_47. This type theory is much less complicated than the one first set out in the "Principia Mathematica", which included types for relations whose arguments were not necessarily all of the same types.
There is a correspondence between New Foundations and TST in terms of adding or erasing type annotations. In NF's comprehension schema, a formula is stratified exactly when the formula can be assigned types according to the rules of TST. This can be extended to map every NF formula to a set of corresponding TST formulas with various type index annotations. The mapping is one-to-many because TST has many similar formulas. For example, raising every type index in a TST formula by 1 results in a new, valid TST formula.
Tangled Type Theory.
Tangled Type Theory (TTT) is an extension of TST where each variable is typed by an ordinal rather than a natural number. The well-formed atomic formulas are formula_52 and formula_53 where formula_54. The axioms of TTT are those of TST where each variable of type formula_55 is mapped to a variable formula_56 where formula_57 is an increasing function.
TTT is considered a "weird" theory because each type is related to "each" lower type in the same way. For example, type 2 sets have both type 1 members and type 0 members, and extensionality axioms assert that a type 2 set is determined uniquely by "either" its type 1 members or its type 0 members. Whereas TST has natural models where each type formula_58 is the power set of type formula_55, in TTT each type is being interpreted as the power set of each lower type simultaneously. Regardless, a model of NF can be easily converted to a model of TTT, because in NF all the types are already one and the same. Conversely, with a more complicated argument, it can also be shown that the consistency of TTT implies the consistency of NF.
NFU and other variants.
NF with urelements (NFU) is an important variant of NF due to Jensen and clarified by Holmes. Urelements are objects that are not sets and do not contain any elements, but can be contained in sets. One of the simplest forms of axiomatization of NFU regards urelements as multiple, unequal empty sets, thus weakening the extensionality axiom of NF to:
formula_59
In this axiomatization, the comprehension schema is unchanged, although the set formula_60 will not be unique if it is empty (i.e. if formula_61 is unsatisfiable).
However, for ease of use, it is more convenient to have a unique, "canonical" empty set. This can be done by introducing a sethood predicate formula_62 to distinguish sets from atoms. The axioms are then:
NF3 is the fragment of NF with full extensionality (no urelements) and those instances of comprehension which can be stratified using at most three types. NF4 is the same theory as NF.
Mathematical Logic (ML) is an extension of NF that includes proper classes as well as sets. ML was proposed by Quine and revised by Hao Wang, who proved that NF and the revised ML are equiconsistent.
Constructions.
This section discusses some problematic constructions in NF. For a further development of mathematics in NFU, with a comparison to the development of the same in ZFC, see implementation of mathematics in set theory.
Ordered pairs.
Relations and functions are defined in TST (and in NF and NFU) as sets of ordered pairs in the usual way. For purposes of stratification, it is desirable that a relation or function is merely one type higher than the type of the members of its field. This requires defining the ordered pair so that its type is the same as that of its arguments (resulting in a type-level ordered pair). The usual definition of the ordered pair, namely formula_67, results in a type two higher than the type of its arguments "a" and "b". Hence for purposes of determining stratification, a function is three types higher than the members of its field. NF and related theories usually employ Quine's set-theoretic definition of the ordered pair, which yields a type-level ordered pair. However, Quine's definition relies on set operations on each of the elements "a" and "b", and therefore does not directly work in NFU.
As an alternative approach, Holmes takes the ordered pair "(a, b)" as a primitive notion, as well as its left and right projections formula_68 and formula_69, i.e., functions such that formula_70 and formula_71 (in Holmes' axiomatization of NFU, the comprehension schema that asserts the existence of formula_2 for any stratified formula formula_3 is considered a theorem and only proved later, so expressions like formula_72 are not considered proper definitions). Fortunately, whether the ordered pair is type-level by definition or by assumption (i.e., taken as primitive) usually does not matter.
Natural numbers and the axiom of infinity.
The usual form of the axiom of infinity is based on the von Neumann construction of the natural numbers, which is not suitable for NF, since the description of the successor operation (and many other aspects of von Neumann numerals) is necessarily unstratified. The usual form of natural numbers used in NF follows Frege's definition, i.e., the natural number "n" is represented by the set of all sets with "n" elements. Under this definition, 0 is easily defined as formula_73, and the successor operation can be defined in a stratified way: formula_74 Under this definition, one can write down a statement analogous to the usual form of the axiom of infinity. However, that statement would be trivially true, since the universal set formula_75 would be an inductive set.
Since inductive sets always exist, the set of natural numbers formula_76 can be defined as the intersection of all inductive sets. This definition enables mathematical induction for stratified statements formula_77, because the set formula_78 can be constructed, and when formula_77 satisfies the conditions for mathematical induction, this set is an inductive set.
Finite sets can then be defined as sets that belong to a natural number. However, it is not trivial to prove that formula_75 is not a "finite set", i.e., that the size of the universe formula_79 is not a natural number. Suppose that formula_80. Then formula_81 (it can be shown inductively that a finite set is not equinumerous with any of its proper subsets), formula_82, and each subsequent natural number would be formula_83 too, causing arithmetic to break down. To prevent this, one can introduce the axiom of infinity for NF: formula_84
It may intuitively seem that one should be able to prove "Infinity" in NF(U) by constructing any "externally" infinite sequence of sets, such as formula_85. However, such a sequence could only be constructed through unstratified constructions (evidenced by the fact that TST itself has finite models), so such a proof could not be carried out in NF(U). In fact, "Infinity" is logically independent of NFU: There exist models of NFU where formula_79 is a non-standard natural number. In such models, mathematical induction can prove statements about formula_79, making it impossible to "distinguish" formula_79 from standard natural numbers.
However, there are some cases where "Infinity" can be proven (in which cases it may be referred to as the theorem of infinity):
Stronger axioms of infinity exist, such as that the set of natural numbers is a strongly Cantorian set, or NFUM = NFU + "Infinity" + "Large Ordinals" + "Small Ordinals" which is equivalent to Morse–Kelley set theory plus a predicate on proper classes which is a "κ"-complete nonprincipal ultrafilter on the proper class ordinal "κ".
Large sets.
NF (and NFU + "Infinity" + "Choice", described below and known consistent) allow the construction of two kinds of sets that ZFC and its proper extensions disallow because they are "too large" (some set theories admit these entities under the heading of proper classes):
Cartesian closure.
The category whose objects are the sets of NF and whose arrows are the functions between those sets is not Cartesian closed. Since NF lacks Cartesian closure, not every function curries as one might intuitively expect, and NF is not a topos.
Resolution of set-theoretic paradoxes.
NF may seem to run afoul of problems similar to those in naive set theory, but this is not the case. For example, the existence of the impossible Russell class formula_89 is not an axiom of NF, because formula_90 cannot be stratified. NF steers clear of the three well-known paradoxes of set theory in drastically different ways than how those paradoxes are resolved in well-founded set theories such as ZFC. Many useful concepts that are unique to NF and its variants can be developed from the resolution of those paradoxes.
Russell's paradox.
The resolution of Russell's paradox is trivial: formula_91 is not a stratified formula, so the existence of formula_89 is not asserted by any instance of "Comprehension". Quine said that he constructed NF with this paradox uppermost in mind.
Cantor's paradox and Cantorian sets.
Cantor's paradox boils down to the question of whether there exists a largest cardinal number, or equivalently, whether there exists a set with the largest cardinality. In NF, the universal set formula_75 is obviously a set with the largest cardinality. However, Cantor's theorem says (given ZFC) that the power set formula_92 of any set formula_6 is larger than formula_6 (there can be no injection (one-to-one map) from formula_92 into formula_6), which seems to imply a contradiction when formula_93.
Of course there is an injection from formula_94 into formula_75 since formula_75 is the universal set, so it must be that Cantor's theorem (in its original form) does not hold in NF. Indeed, the proof of Cantor's theorem uses the diagonalization argument by considering the set formula_95. In NF, formula_8 and formula_96 should be assigned the same type, so the definition of formula_7 is not stratified. Indeed, if formula_97 is the trivial injection formula_98, then formula_7 is the same (ill-defined) set in Russell's paradox.
This failure is not surprising since formula_99 makes no sense in TST: the type of formula_92 is one higher than the type of formula_6. In NF, formula_99 is a syntactical sentence due to the conflation of all the types, but any general proof involving "Comprehension" is unlikely to work.
The usual way to correct such a type problem is to replace formula_6 with formula_100, the set of one-element subsets of formula_6. Indeed, the correctly typed version of Cantor's theorem formula_101 is a theorem in TST (thanks to the diagonalization argument), and thus also a theorem in NF. In particular, formula_102: there are fewer one-element sets than sets (and so fewer one-element sets than general objects, if we are in NFU). The "obvious" bijection formula_103 from the universe to the one-element sets is not a set; it is not a set because its definition is unstratified. Note that in all models of NFU + "Choice" it is the case that formula_104; "Choice" allows one not only to prove that there are urelements but that there are many cardinals between formula_105 and formula_79.
However, unlike in TST, formula_106 is a syntactical sentence in NF(U), and as shown above one can talk about its truth value for specific values of formula_6 (e.g. when formula_93 it is false). A set formula_6 which satisfies the intuitively appealing formula_106 is said to be Cantorian: a Cantorian set satisfies the usual form of Cantor's theorem. A set formula_6 which satisfies the further condition that formula_107, the restriction of the singleton map to "A", is a set, is not only Cantorian but strongly Cantorian.
Burali-Forti paradox and the T operation.
The "Burali-Forti paradox" of the largest ordinal number is resolved in the opposite way: In NF, having access to the set of ordinals does not allow one to construct a "largest ordinal number". One can construct the ordinal formula_108 that corresponds to the natural well-ordering of all ordinals, but that does not mean that formula_108 is larger than all those ordinals.
To formalize the Burali-Forti paradox in NF, it is necessary to first formalize the concept of ordinal numbers. In NF, ordinals are defined (in the same way as in naive set theory) as equivalence classes of well-orderings under isomorphism. This is a stratified definition, so the set of ordinals formula_109 can be defined with no problem. Transfinite induction works on stratified statements, which allows one to prove that the natural ordering of ordinals (formula_110 iff there exist well-orderings formula_111 such that formula_36 is a continuation of formula_14) is a well-ordering of formula_109. By definition of ordinals, this well-ordering also belongs to an ordinal formula_112. In naive set theory, one would go on to prove by transfinite induction that each ordinal formula_113 is the order type of the natural order on the ordinals less than formula_113, which would imply a contradiction since formula_108 by definition is the order type of "all" ordinals, not any proper initial segment of them.
However, the statement "formula_113 is the order type of the natural order on the ordinals less than formula_113" is not stratified, so the transfinite induction argument does not work in NF. In fact, "the order type formula_114 of the natural order formula_115 on the ordinals less than formula_113" is at least "two" types higher than formula_113: The order relation formula_116 is one type higher than formula_113 assuming that formula_117 is a type-level ordered pair, and the order type (equivalence class) formula_118 is one type higher than formula_115. If formula_117 is the usual Kuratowski ordered pair (two types higher than formula_8 and formula_119), then formula_114 would be "four" types higher than formula_113.
To correct such a type problem, one needs the T operation, formula_120, that "raises the type" of an ordinal formula_113, just like how formula_100 "raises the type" of the set formula_6. The T operation is defined as follows: If formula_121, then formula_120 is the order type of the order formula_122. Now the lemma on order types may be restated in a stratified manner:
The order type of the natural order on the ordinals formula_123 is formula_124 or formula_125, depending on which ordered pair is used.
Both versions of this statement can be proven by transfinite induction; we assume the type level pair hereinafter. This means that formula_124 is always less than formula_108, the order type of "all" ordinals. In particular, formula_126.
Another (stratified) statement that can be proven by transfinite induction is that T is a strictly monotone (order-preserving) operation on the ordinals, i.e., formula_127 iff formula_128. Hence the T operation is not a function: The collection of ordinals formula_129 cannot have a least member, and thus cannot be a set. More concretely, the monotonicity of T implies formula_130, a "descending sequence" in the ordinals which also cannot be a set.
One might assert that this result shows that no model of NF(U) is "standard", since the ordinals in any model of NFU are externally not well-ordered. This is a philosophical question, not a question of what can be proved within the formal theory. Note that even within NFU it can be proven that any set model of NFU has non-well-ordered "ordinals"; NFU does not conclude that the universe formula_75 is a model of NFU, despite formula_75 being a set, because the membership relation is not a set relation.
Consistency.
Some mathematicians have questioned the consistency of NF, partly because it is not clear why it avoids the known paradoxes. A key issue was that Specker proved NF combined with the Axiom of Choice is inconsistent. The proof is complex and involves T-operations. However, since 2010, Holmes has claimed to have shown that NF is consistent relative to the consistency of standard set theory (ZFC). In 2024, Sky Wilshaw confirmed Holmes' proof using the Lean proof assistant.
Although NFU resolves the paradoxes similarly to NF, it has a much simpler consistency proof. The proof can be formalized within Peano Arithmetic (PA), a theory weaker than ZF that most mathematicians accept without question. This does not conflict with Gödel's second incompleteness theorem because NFU does not include the Axiom of Infinity and therefore PA cannot be modeled in NFU, avoiding a contradiction. PA also proves that NFU with Infinity and NFU with both Infinity and Choice are equiconsistent with TST with Infinity and TST with both Infinity and Choice, respectively. Therefore, a stronger theory like ZFC, which proves the consistency of TST, will also prove the consistency of NFU with these additions. In simpler terms, NFU is generally seen as weaker than NF because, in NFU, the collection of all sets (the power set of the universe) can be smaller than the universe itself, especially when urelements are included, as required by NFU with Choice.
Models of NFU.
Jensen's proof gives a fairly simple method for producing models of NFU in bulk. Using well-known techniques of model theory, one can construct a nonstandard model of Zermelo set theory (nothing nearly as strong as full ZFC is needed for the basic technique) on which there is an external automorphism "j" (not a set of the model) which moves a rank formula_131 of the cumulative hierarchy of sets. We may suppose without loss of generality that formula_132.
The domain of the model of NFU will be the nonstandard rank formula_131. The basic idea is that the automorphism "j" codes the "power set" formula_133 of our "universe" formula_131 into its externally isomorphic copy formula_134 inside our "universe." The remaining objects not coding subsets of the universe are treated as urelements. Formally, the membership relation of the model of NFU will be formula_135
It may now be proved that this actually is a model of NFU. Let formula_3 be a stratified formula in the language of NFU. Choose an assignment of types to all variables in the formula which witnesses the fact that it is stratified. Choose a natural number "N" greater than all types assigned to variables by this stratification. Expand the formula formula_3 into a formula formula_136 in the language of the nonstandard model of Zermelo set theory with automorphism "j" using the definition of membership in the model of NFU. Application of any power of "j" to both sides of an equation or membership statement preserves its truth value because "j" is an automorphism. Make such an application to each atomic formula in formula_136 in such a way that each variable "x" assigned type "i" occurs with exactly formula_137 applications of "j". This is possible thanks to the form of the atomic membership statements derived from NFU membership statements, and to the formula being stratified. Each quantified sentence formula_138 can be converted to the form formula_139 (and similarly for existential quantifiers). Carry out this transformation everywhere and obtain a formula formula_140 in which "j" is never applied to a bound variable. Choose any free variable "y" in formula_3 assigned type "i". Apply formula_141 uniformly to the entire formula to obtain a formula formula_142 in which "y" appears without any application of "j". Now formula_143 exists (because "j" appears applied only to free variables and constants), belongs to formula_133, and contains exactly those "y" which satisfy the original formula
formula_3 in the model of NFU. formula_144 has this extension in the model of NFU (the application of "j" corrects for the different definition of membership in the model of NFU). This establishes that "Stratified Comprehension" holds in the model of NFU.
To see that weak "Extensionality" holds is straightforward: each nonempty element of formula_134 inherits a unique extension from the nonstandard model, the empty set inherits its usual extension as well, and all other objects are urelements.
If formula_113 is a natural number "n", one gets a model of NFU which claims that the universe is finite (it is externally infinite, of course). If formula_113 is infinite and "Choice" holds in the nonstandard model of ZFC, one obtains a model of NFU + "Infinity" + "Choice".
Self-sufficiency of mathematical foundations in NFU.
For philosophical reasons, it is important to note that it is not necessary to work in ZFC or any related system to carry out this proof. A common argument against the use of NFU as a foundation for mathematics is that the reasons for relying on it have to do with the intuition that ZFC is correct. It is sufficient to accept TST (in fact TSTU). In outline: take the type theory TSTU (allowing urelements in each positive type) as a metatheory and consider the theory of set models of TSTU in TSTU (these models will be sequences of sets formula_145 (all of the same type in the metatheory) with embeddings of each formula_146 into formula_147 coding embeddings of the power set of formula_145 into formula_148 in a type-respecting manner). Given an embedding of formula_149 into formula_150 (identifying elements of the base "type" with subsets of the base type), embeddings may be defined from each "type" into its successor in a natural way. This can be generalized to transfinite sequences formula_151 with care.
Note that the construction of such sequences of sets is limited by the size of the type in which they are being constructed; this prevents TSTU from proving its own consistency (TSTU + "Infinity" can prove the consistency of TSTU; to prove the consistency of TSTU+"Infinity" one needs a type containing a set of cardinality formula_152, which cannot be proved to exist in TSTU+"Infinity" without stronger assumptions). Now the same results of model theory can be used to build a model of NFU and verify that it is a model of NFU in much the same way, with the formula_151's being used in place of formula_131 in the usual construction. The final move is to observe that since NFU is consistent, we can drop the use of absolute types in our metatheory, bootstrapping the metatheory from TSTU to NFU.
Facts about the automorphism "j".
The automorphism "j" of a model of this kind is closely related to certain natural operations in NFU. For example, if "W" is a well-ordering in the nonstandard model (we suppose here that we use Kuratowski pairs so that the coding of functions in the two theories will agree to some extent) which is also a well-ordering in NFU (all well-orderings of NFU are well-orderings in the nonstandard model of Zermelo set theory, but not vice versa, due to the formation of urelements in the construction of the model), and "W" has type α in NFU, then "j"("W") will be a well-ordering of type "T"(α) in NFU.
In fact, "j" is coded by a function in the model of NFU. The function in the nonstandard model which sends the singleton of any element of formula_153 to its sole element, becomes in NFU a function which sends each singleton {"x"}, where "x" is any object in the universe, to "j"("x"). Call this function "Endo" and let it have the following properties: "Endo" is an injection from the set of singletons into the set of sets, with the property that "Endo"( {"x"} ) = {"Endo"( {"y"} ) | "y"∈"x"} for each set "x". This function can define a type level "membership" relation on the universe, one reproducing the membership relation of the original nonstandard model.
History.
In 1914, Norbert Wiener showed how to code the ordered pair as a set of sets, making it possible to eliminate the relation types of "Principia Mathematica" in favor of the linear hierarchy of sets in TST. The usual definition of the ordered pair was first proposed by Kuratowski in 1921. Willard Van Orman Quine first proposed NF as a way to avoid the "disagreeable consequences" of TST in a 1937 article titled "New Foundations for Mathematical Logic"; hence the name. Quine extended the theory in his book "Mathematical Logic", whose first edition was published in 1940. In the book, Quine introduced the system "Mathematical Logic" or "ML", an extension of NF that included proper classes as well as sets. The first edition's set theory married NF to the proper classes of NBG set theory and included an axiom schema of unrestricted comprehension for proper classes. However, J. Barkley Rosser proved that the system was subject to the Burali-Forti paradox. Hao Wang showed how to amend Quine's axioms for ML so as to avoid this problem. Quine included the resulting axiomatization in the second and final edition, published in 1951.
In 1944, Theodore Hailperin showed that "Comprehension" is equivalent to a finite conjunction of its instances. In 1953, Ernst Specker showed that the axiom of choice is false in NF (without urelements). In 1969, Jensen showed that adding urelements to NF yields a theory (NFU) that is provably consistent. That same year, Grishin proved NF3 consistent. Specker additionally showed that NF is equiconsistent with TST plus the axiom scheme of "typical ambiguity". NF is also equiconsistent with TST augmented with a "type shifting automorphism", an operation (external to the theory) which raises type by one, mapping each type onto the next higher type, and preserves equality and membership relations.
In 1983, Marcel Crabbé proved consistent a system he called NFI, whose axioms are unrestricted extensionality and those instances of comprehension in which no variable is assigned a type higher than that of the set asserted to exist. This is a predicativity restriction, though NFI is not a predicative theory: it admits enough impredicativity to define the set of natural numbers (defined as the intersection of all inductive sets; note that the inductive sets quantified over are of the same type as the set of natural numbers being defined). Crabbé also discussed a subtheory of NFI, in which only parameters (free variables) are allowed to have the type of the set asserted to exist by an instance of comprehension. He called the result "predicative NF" (NFP); it is, of course, doubtful whether any theory with a self-membered universe is truly predicative. Holmes has [] shown that NFP has the same consistency strength as the predicative theory of types of "Principia Mathematica" without the axiom of reducibility.
The Metamath database implemented Hailperin's finite axiomatization for New Foundations. Since 2015, several candidate proofs by Randall Holmes of the consistency of NF relative to ZF were available both on arXiv and on the logician's home page. His proofs were based on demonstrating the equiconsistency of a "weird" variant of TST, "tangled type theory with λ-types" (TTTλ), with NF, and then showing that TTTλ is consistent relative to ZF with atoms but without choice (ZFA) by constructing a class model of ZFA which includes "tangled webs of cardinals" in ZF with atoms and choice (ZFA+C). These proofs were "difficult to read, insanely involved, and involve the sort of elaborate bookkeeping which makes it easy to introduce errors". In 2024, Sky Wilshaw formalized a version of Holmes' proof using the proof assistant Lean, finally resolving the question of NF's consistency. Timothy Chow characterized Wilshaw's work as showing that the reluctance of peer reviewers to engage with a difficult to understand proof can be addressed with the help of proof assistants.
Notes.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "="
},
{
"math_id": 1,
"text": "\\in"
},
{
"math_id": 2,
"text": "\\{x \\mid \\phi \\}"
},
{
"math_id": 3,
"text": "\\phi"
},
{
"math_id": 4,
"text": "x \\in y"
},
{
"math_id": 5,
"text": "x=y"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "B"
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "A = B"
},
{
"math_id": 10,
"text": "\\iota(x) = \\{x\\} = \\{y | y = x\\}"
},
{
"math_id": 11,
"text": "A \\times B = \\{(a, b) | a \\in A \\text{ and } b \\in B\\}"
},
{
"math_id": 12,
"text": "A \\times V"
},
{
"math_id": 13,
"text": "V \\times B"
},
{
"math_id": 14,
"text": "R"
},
{
"math_id": 15,
"text": "R^{-1} = \\{(x, y) | (y,x) \\in R\\}"
},
{
"math_id": 16,
"text": "x R^{-1} y"
},
{
"math_id": 17,
"text": "y R x"
},
{
"math_id": 18,
"text": "R\\iota = \\{(\\{x\\}, \\{y\\}) | (x,y) \\in R\\}"
},
{
"math_id": 19,
"text": "\\text{dom}(R) = \\{x | \\exists y. (x,y) \\in R\\}"
},
{
"math_id": 20,
"text": "[\\subseteq] = \\{(x, y) | x \\subseteq y\\}"
},
{
"math_id": 21,
"text": "[\\in] = [\\subseteq] \\cap (1 \\times V) = \\{(\\{x\\}, y) | x \\in y\\}"
},
{
"math_id": 22,
"text": "A^c = \\{x | x \\notin A\\}"
},
{
"math_id": 23,
"text": "A \\cup B = \\{x | x \\in A \\text{ or } x \\in B \\text{ or both}\\}"
},
{
"math_id": 24,
"text": "V = \\{x | x = x\\}"
},
{
"math_id": 25,
"text": "x \\cup x^c = V"
},
{
"math_id": 26,
"text": "a"
},
{
"math_id": 27,
"text": "b"
},
{
"math_id": 28,
"text": "(a, b)"
},
{
"math_id": 29,
"text": "(a, b) = (c, d)"
},
{
"math_id": 30,
"text": "a = c"
},
{
"math_id": 31,
"text": "b = d"
},
{
"math_id": 32,
"text": "\\pi_1 = \\{((x, y), x) | x, y \\in V \\}"
},
{
"math_id": 33,
"text": "\\pi_2 = \\{((x, y), y) | x, y \\in V \\}"
},
{
"math_id": 34,
"text": "[=] = \\{(x, x) | x \\in V \\}"
},
{
"math_id": 35,
"text": "\\bigcup [A] = \\{x | \\text{for some } B, x \\in B \\text{ and } B \\in A\\}"
},
{
"math_id": 36,
"text": "S"
},
{
"math_id": 37,
"text": "(R|S) = \\{(x, y) | \\text{for some } z, x R z \\text{ and } z S y\\}"
},
{
"math_id": 38,
"text": "x|y = \\{z : \\neg(z \\in x \\land z \\in y) \\}"
},
{
"math_id": 39,
"text": "x^c = x|x"
},
{
"math_id": 40,
"text": "x \\cup y =x^c|y^c"
},
{
"math_id": 41,
"text": "1"
},
{
"math_id": 42,
"text": "\\{ x | \\exists y : (\\forall w : w \\in x \\leftrightarrow w = y) \\}"
},
{
"math_id": 43,
"text": "I_2(R) = \\{ (z, w, t) : (z, t) \\in R \\}"
},
{
"math_id": 44,
"text": "I_3(R) = \\{ (z, w, t) : (z, w) \\in R \\}"
},
{
"math_id": 45,
"text": "\\text{TL}(S) = \\{ z : \\forall w : (w, \\{z\\}) \\in S \\}"
},
{
"math_id": 46,
"text": "x^n"
},
{
"math_id": 47,
"text": "\\phi(x^n)"
},
{
"math_id": 48,
"text": "\\{x^n \\mid \\phi(x^n)\\}^{n+1}\\!"
},
{
"math_id": 49,
"text": "\\phi(x^n)\\!"
},
{
"math_id": 50,
"text": "\\exists A^{n+1} \\forall x^n [ x^n \\in A^{n+1} \\leftrightarrow \\phi(x^n) ]"
},
{
"math_id": 51,
"text": "A^{n+1}\\!"
},
{
"math_id": 52,
"text": "x^{n} = y^{n}"
},
{
"math_id": 53,
"text": "x^{m} \\in y^{n}"
},
{
"math_id": 54,
"text": "m<n"
},
{
"math_id": 55,
"text": "i"
},
{
"math_id": 56,
"text": "s(i)"
},
{
"math_id": 57,
"text": "s"
},
{
"math_id": 58,
"text": "i + 1"
},
{
"math_id": 59,
"text": "\\forall x y w. (w \\in x) \\to (x = y \\leftrightarrow (\\forall z. z \\in x \\leftrightarrow z \\in y)))"
},
{
"math_id": 60,
"text": "\\{x \\mid \\phi(x)\\}"
},
{
"math_id": 61,
"text": "\\phi(x)"
},
{
"math_id": 62,
"text": "\\mathrm{set}(x)"
},
{
"math_id": 63,
"text": "\\forall x y. x \\in y \\to \\mathrm{set}(y)."
},
{
"math_id": 64,
"text": "\\forall y z. (\\mathrm{set}(y) \\wedge \\mathrm{set}(z) \\wedge (\\forall x. x \\in y \\leftrightarrow x \\in z)) \\to y = z."
},
{
"math_id": 65,
"text": "\\{x \\mid \\phi(x) \\}"
},
{
"math_id": 66,
"text": "\\exists A. \\mathrm{set}(A) \\wedge (\\forall x. x \\in A \\leftrightarrow \\phi(x))."
},
{
"math_id": 67,
"text": "(a, \\ b)_K \\; := \\ \\{ \\{ a \\}, \\ \\{ a, \\ b \\} \\}"
},
{
"math_id": 68,
"text": "\\pi_1"
},
{
"math_id": 69,
"text": "\\pi_2"
},
{
"math_id": 70,
"text": "\\pi_1((a, b)) = a"
},
{
"math_id": 71,
"text": "\\pi_2((a, b)) = b"
},
{
"math_id": 72,
"text": "\\pi_1 = \\{((a, b), a) \\mid a, b \\in V\\}"
},
{
"math_id": 73,
"text": "\\{\\varnothing\\}"
},
{
"math_id": 74,
"text": "S(A) = \\{a \\cup \\{x\\} \\mid a \\in A \\wedge x \\notin a\\}."
},
{
"math_id": 75,
"text": "V"
},
{
"math_id": 76,
"text": "\\mathbf{N}"
},
{
"math_id": 77,
"text": "P(n)"
},
{
"math_id": 78,
"text": "\\{n \\in \\mathbf{N} \\mid P(n)\\}"
},
{
"math_id": 79,
"text": "|V|"
},
{
"math_id": 80,
"text": "|V| = n \\in \\mathbf{N}"
},
{
"math_id": 81,
"text": "n = \\{V\\}"
},
{
"math_id": 82,
"text": "n + 1 = S(n) = \\varnothing"
},
{
"math_id": 83,
"text": "\\varnothing"
},
{
"math_id": 84,
"text": "\\varnothing \\notin \\mathbf{N}."
},
{
"math_id": 85,
"text": "\\varnothing, \\{\\varnothing\\}, \\{\\{{\\varnothing}\\}\\}, \\ldots"
},
{
"math_id": 86,
"text": "V \\times \\{0\\}"
},
{
"math_id": 87,
"text": "x=x"
},
{
"math_id": 88,
"text": "A \\sim B"
},
{
"math_id": 89,
"text": "\\{x \\mid x \\not\\in x\\}"
},
{
"math_id": 90,
"text": " x \\not\\in x "
},
{
"math_id": 91,
"text": "x \\not\\in x"
},
{
"math_id": 92,
"text": "P(A)"
},
{
"math_id": 93,
"text": "A = V"
},
{
"math_id": 94,
"text": "P(V)"
},
{
"math_id": 95,
"text": "B = \\{x \\in A \\mid x \\notin f(x)\\}"
},
{
"math_id": 96,
"text": "f(x)"
},
{
"math_id": 97,
"text": "f: P(V) \\to V"
},
{
"math_id": 98,
"text": "x \\mapsto x"
},
{
"math_id": 99,
"text": "|A| < |P(A)|"
},
{
"math_id": 100,
"text": "P_1(A)"
},
{
"math_id": 101,
"text": "|P_1(A)| < |P(A)|"
},
{
"math_id": 102,
"text": "|P_1(V)| < |P(V)|"
},
{
"math_id": 103,
"text": "x \\mapsto \\{x\\}"
},
{
"math_id": 104,
"text": "|P_1(V)| < |P(V)| \\ll |V|"
},
{
"math_id": 105,
"text": "|P(V)|"
},
{
"math_id": 106,
"text": "|A| = |P_1(A)|"
},
{
"math_id": 107,
"text": "(x \\mapsto \\{x\\})\\lceil A"
},
{
"math_id": 108,
"text": "\\Omega"
},
{
"math_id": 109,
"text": "\\mathrm{Ord}"
},
{
"math_id": 110,
"text": "\\alpha \\le \\beta"
},
{
"math_id": 111,
"text": "R \\in \\alpha, S \\in \\beta"
},
{
"math_id": 112,
"text": "\\Omega \\in \\mathrm{Ord}"
},
{
"math_id": 113,
"text": "\\alpha"
},
{
"math_id": 114,
"text": "\\beta"
},
{
"math_id": 115,
"text": "R_\\alpha"
},
{
"math_id": 116,
"text": "R_\\alpha = \\{(x, y) \\mid x \\le y < \\alpha\\}"
},
{
"math_id": 117,
"text": "(x, y)"
},
{
"math_id": 118,
"text": "\\beta = \\{S \\mid S \\sim R_\\alpha\\}"
},
{
"math_id": 119,
"text": "y"
},
{
"math_id": 120,
"text": "T(\\alpha)"
},
{
"math_id": 121,
"text": "W \\in \\alpha"
},
{
"math_id": 122,
"text": "W^{\\iota} = \\{(\\{x\\},\\{y\\}) \\mid xWy\\}"
},
{
"math_id": 123,
"text": " < \\alpha"
},
{
"math_id": 124,
"text": "T^2(\\alpha)"
},
{
"math_id": 125,
"text": "T^4(\\alpha)"
},
{
"math_id": 126,
"text": "T^2(\\Omega)<\\Omega"
},
{
"math_id": 127,
"text": "T(\\alpha) < T(\\beta)"
},
{
"math_id": 128,
"text": "\\alpha < \\beta"
},
{
"math_id": 129,
"text": "\\{\\alpha \\mid T(\\alpha) < \\alpha\\}"
},
{
"math_id": 130,
"text": "\\Omega > T^2(\\Omega) > T^4(\\Omega)\\ldots"
},
{
"math_id": 131,
"text": "V_{\\alpha}"
},
{
"math_id": 132,
"text": "j(\\alpha)<\\alpha"
},
{
"math_id": 133,
"text": "V_{\\alpha+1}"
},
{
"math_id": 134,
"text": "V_{j(\\alpha)+1}"
},
{
"math_id": 135,
"text": "x \\in_{NFU} y \\equiv_{def} j(x) \\in y \\wedge y \\in V_{j(\\alpha)+1}."
},
{
"math_id": 136,
"text": "\\phi_1"
},
{
"math_id": 137,
"text": "N-i"
},
{
"math_id": 138,
"text": "(\\forall x \\in V_{\\alpha}.\\psi(j^{N-i}(x)))"
},
{
"math_id": 139,
"text": "(\\forall x \\in j^{N-i}(V_{\\alpha}).\\psi(x))"
},
{
"math_id": 140,
"text": "\\phi_2"
},
{
"math_id": 141,
"text": "j^{i-N}"
},
{
"math_id": 142,
"text": "\\phi_3"
},
{
"math_id": 143,
"text": "\\{y \\in V_{\\alpha} \\mid \\phi_3\\}"
},
{
"math_id": 144,
"text": "j(\\{y \\in V_{\\alpha} \\mid \\phi_3\\})"
},
{
"math_id": 145,
"text": "T_i"
},
{
"math_id": 146,
"text": "P(T_i)"
},
{
"math_id": 147,
"text": "P_1(T_{i+1})"
},
{
"math_id": 148,
"text": "T_{i+1}"
},
{
"math_id": 149,
"text": "T_0"
},
{
"math_id": 150,
"text": "T_1"
},
{
"math_id": 151,
"text": "T_{\\alpha}"
},
{
"math_id": 152,
"text": "\\beth_{\\omega}"
},
{
"math_id": 153,
"text": "V_{j(\\alpha)}"
}
] | https://en.wikipedia.org/wiki?curid=945957 |
9460040 | Lyman-alpha emitter | A Lyman-alpha emitter (LAE) is a type of distant galaxy that emits Lyman-alpha radiation from neutral hydrogen.
Most known LAEs are extremely distant, and because of the finite travel time of light they provide glimpses into the history of the universe. They are thought to be the progenitors of most modern Milky Way type galaxies. These galaxies can nowadays be found rather easily in narrow-band searches, through an excess of narrow-band flux at a wavelength that is related to their redshift by
formula_0
where z is the redshift, formula_1 is the observed wavelength, and 1215.67 Å is the wavelength of Lyman-alpha emission. The Lyman-alpha line in most LAEs is thought to be caused by recombination of interstellar hydrogen that is ionized by an ongoing burst of star formation. Such Lyman alpha emission was first suggested as a signature of young galaxies by Bruce Partridge and P. J. E. Peebles in 1967. Experimental observations of the redshift of LAEs are important in cosmology because they trace dark matter halos and subsequently the evolution of matter distribution in the universe.
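As a rough numerical illustration of this relation (the observed wavelength below is an arbitrary example rather than a real measurement), a few lines of C recover the redshift from a narrow-band detection:
#include <stdio.h>
// Redshift of a Lyman-alpha emitter from the observed wavelength of the
// Lyman-alpha line, whose rest wavelength is 1215.67 Angstrom.
int main(void)
{
    double lambda_rest = 1215.67;    /* Angstrom */
    double lambda_obs  = 8509.69;    /* Angstrom; hypothetical narrow-band detection */
    double z = lambda_obs / lambda_rest - 1.0;   /* from 1 + z = lambda / 1215.67 A */
    printf("z = %.2f\n", z);                     /* about 6.00 for this wavelength  */
    return 0;
}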
Properties.
Lyman-alpha emitters are typically low mass galaxies of 108 to 1010 solar masses. They are typically young galaxies that are 200 to 600 million years old, and they have the highest specific star formation rate of any galaxies known. All of these properties indicate that Lyman-alpha emitters are important clues as to the progenitors of modern Milky Way type galaxies.
Lyman-alpha emitters have many unknown properties. The Lyman-alpha photon escape fraction, the portion of the light emitted at the Lyman-alpha line wavelength inside the galaxy that actually escapes and is visible to distant observers, varies greatly among these galaxies. There is much evidence that the dust content of these galaxies could be significant and therefore obscures their brightness. It is also possible that an anisotropic distribution of hydrogen density and velocity plays a significant role in the varying escape fraction, because the photons continue to interact with the hydrogen gas (radiative transfer). Evidence now shows strong evolution in the Lyman-alpha escape fraction with redshift, most likely associated with the buildup of dust in the ISM. Dust is shown to be the main parameter setting the escape of Lyman-alpha photons. Additionally, the metallicity, outflows, and detailed evolution with redshift are unknown.
Importance in cosmology.
LAEs are important probes of reionization, cosmology (BAO), and they allow probing of the faint end of the luminosity function at high redshift.
The baryonic acoustic oscillation signal should be evident in the power spectrum of Lyman-alpha emitters at high redshift. Baryonic acoustic oscillations are imprints of sound waves on scales where radiation pressure stabilized the density perturbations against gravitational collapse in the early universe. The three-dimensional distribution of the characteristically homogeneous Lyman-alpha galaxy population will allow a robust probe of cosmology. They are a good tool because the Lyman-alpha bias, the propensity for galaxies to form in the highest overdensity of the underlying dark matter distribution, can be modeled and accounted for. Lyman-alpha emitters are overdense in clusters.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " 1+z=\\frac{\\lambda}{1215.67\\mathrm{\\AA}} "
},
{
"math_id": 1,
"text": "\\lambda"
}
] | https://en.wikipedia.org/wiki?curid=9460040 |
9460224 | Zeuthen strategy | The Zeuthen strategy in cognitive science is a negotiation strategy used by some artificial agents. Its purpose is to measure the "willingness to risk conflict". An agent will be more willing to risk conflict if it does not have much to lose in case that the negotiation fails. In contrast, an agent is less willing to risk conflict when it has more to lose. The value of a deal is expressed in its utility. An agent has much to lose when the difference between the utility of its current proposal and the conflict deal is high.
When both agents use the monotonic concession protocol, the Zeuthen strategy leads them to agree upon a deal in the negotiation set. This set consists of all conflict free deals, which are individually rational and Pareto optimal, and the conflict deal, which maximizes the Nash product.
The strategy was introduced in 1930 by the Danish economist Frederik Zeuthen.
Three key questions.
The Zeuthen strategy answers three open questions that arise when using the monotonic concession protocol, namely: which deal should be proposed at first, which agent should concede on a given round, and, in case of a concession, how much should the agent concede?
The answer to the first question is that any agent should start with its most preferred deal, because that deal has the highest utility for that agent. The second answer is that the agent with the smallest value of "Risk(i,t)" concedes, because the agent with the lowest utility for the conflict deal profits most from avoiding conflict. To the third question, the Zeuthen strategy suggests that the conceding agent should concede just enough to raise its value of "Risk(i,t)" just above that of the other agent. This prevents the conceding agent from having to concede again in the next round.
formula_0
Risk.
"Risk(i,t)" is a measurement of agent "i"'s willingness to risk conflict. The risk function formalizes the notion that an agent's willingness to risk conflict is the ratio of the utility that agent would lose by accepting the other agent's proposal to the utility that agent would lose by causing a conflict. Agent "i" is said to be using a rational negotiation strategy if, at any step "t + 1" at which agent "i" sticks to its last proposal, "Risk(i,t) > Risk(j,t)".
Sufficient concession.
If agent "i" makes a sufficient concession in the next step, then, assuming that agent "j" is using a rational negotiation strategy, if agent "j" does not concede in the next step, he must do so in the step after that. The set of all sufficient concessions of agent "i" at step "t" is denoted "SC(i, t)".
Minimal sufficient concession.
formula_1
is the minimal sufficient concession of agent A in step "t".
Agent A begins the negotiation by proposing
formula_2
and will make the minimal sufficient concession in step "t + 1" if and only if "Risk(A,t) ≤ Risk(B,t)".
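The comparison that drives the protocol can be illustrated with a few lines of C; the utility values below are hypothetical, and the conflict deal is taken to have utility zero for both agents, as in the risk formula above.
#include <stdio.h>
/* Risk(i,t) as defined above: the fraction of its own proposal's utility that
   agent i would give up by accepting the other agent's current proposal.      */
double risk(double u_own_proposal, double u_other_proposal)
{
    if (u_own_proposal == 0.0)
        return 1.0;
    return (u_own_proposal - u_other_proposal) / u_own_proposal;
}
int main(void)
{
    /* Hypothetical utilities at step t:
       U_A(delta_A) = 10, U_A(delta_B) = 4, U_B(delta_B) = 8, U_B(delta_A) = 5. */
    double risk_A = risk(10.0, 4.0);   /* 0.600 */
    double risk_B = risk(8.0, 5.0);    /* 0.375 */
    printf("Risk(A,t) = %.3f, Risk(B,t) = %.3f, so %s concedes next\n",
           risk_A, risk_B, (risk_A <= risk_B) ? "A" : "B");
    return 0;
}
With these numbers agent B is less willing to risk conflict, so it is agent B that is expected to concede.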
Theorem
If both agents are using Zeuthen strategies, then they will agree on
formula_3
that is, the deal which maximizes the Nash product.
Proof
Let δA = δ(A,t).
Let δB = δ(B,t).
According to the Zeuthen strategy, agent A will concede at step formula_4 if and only if
formula_5
That is, if and only if
formula_6
formula_7
formula_8
formula_9
formula_10
formula_11
Thus, Agent A will concede if and only if formula_12 does not yield the larger product of utilities.
Therefore, the Zeuthen strategy guarantees a final agreement that maximizes the Nash Product.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\text{Risk}(i,t)=\n\\begin{cases}\n 1 & U_{i}(\\delta(i,t))=0 \\\\\n \\frac{U_{i}(\\delta(i,t))-U_{i}(\\delta(j,t))}{U_{i}(\\delta(i,t))} & \\text{otherwise}\n\\end{cases}\n"
},
{
"math_id": 1,
"text": "\\delta'=\\arg\\max_{\\delta\\in{SC(A,t)}}\\{U_{A}(\\delta)\\}"
},
{
"math_id": 2,
"text": "\\delta(A,0)=\\arg\\max_{\\delta\\in{NS}}U_{A}(\\delta)"
},
{
"math_id": 3,
"text": "\\delta=\\arg\\max_{\\delta'\\in{NS}}\\{\\pi(\\delta')\\},"
},
{
"math_id": 4,
"text": "t"
},
{
"math_id": 5,
"text": "Risk(A,t)\\leq Risk(B,t)."
},
{
"math_id": 6,
"text": "\\frac{U_{A}(\\delta_{A})-U_{A}(\\delta_{B})}{U_{A}(\\delta_{A})}\\leq \\frac{U_{B}(\\delta_{B})-U_{B}(\\delta_{A})}{U_{B}(\\delta_{B})}"
},
{
"math_id": 7,
"text": "U_{B}(\\delta_{B})(U_{A}(\\delta_{A})-U_{A}(\\delta_{B}))\\leq\nU_{A}(\\delta_{A})(U_{B}(\\delta_{B})-U_{B}(\\delta_{A}))"
},
{
"math_id": 8,
"text": "U_{A}(\\delta_{A})U_{B}(\\delta_{B})-U_{A}(\\delta_{B})U_{B}(\\delta_{B})\\leq\nU_{A}(\\delta_{A})U_{B}(\\delta_{B})-U_{A}(\\delta_{A})U_{B}(\\delta_{A})"
},
{
"math_id": 9,
"text": "-U_{A}(\\delta_{B})U_{B}(\\delta_{B})\\leq -U_{A}(\\delta_{A})U_{B}(\\delta_{A})"
},
{
"math_id": 10,
"text": "U_{A}(\\delta_{A}) U_{B}(\\delta_{A})\\leq U_{A}(\\delta_{B}) U_{B}(\\delta_{B})"
},
{
"math_id": 11,
"text": "\\pi(\\delta_{A})\\leq \\pi(\\delta_{B})"
},
{
"math_id": 12,
"text": "\\delta_{A}"
}
] | https://en.wikipedia.org/wiki?curid=9460224 |
9461236 | Riemann problem | A Riemann problem, named after Bernhard Riemann, is a specific initial value problem composed of a conservation equation together with piecewise constant initial data which has a single discontinuity in the domain of interest. The Riemann problem is very useful for the understanding of equations like Euler conservation equations because all properties, such as shocks and rarefaction waves, appear as characteristics in the solution. It also gives an exact solution to some complex nonlinear equations, such as the Euler equations.
In numerical analysis, Riemann problems appear in a natural way in finite volume methods for the solution of conservation law equations due to the discreteness of the grid. For that it is widely used in computational fluid dynamics and in computational magnetohydrodynamics simulations. In these fields, Riemann problems are calculated using Riemann solvers.
The Riemann problem in linearized gas dynamics.
As a simple example, we investigate the properties of the one-dimensional Riemann problem
in gas dynamics
The initial conditions are given by
formula_0
where "x" = 0 separates two different states, together with the linearised gas dynamic equations (see gas dynamics for derivation).
formula_1
where we can assume without loss of generality formula_2.
We can now rewrite the above equations in a conservative form:
formula_3
where
formula_4
and the index denotes the partial derivative with respect to the corresponding variable (i.e. x or t).
The eigenvalues of the system are the characteristics of the system
formula_5. They give the propagation speeds of disturbances in the medium, including that of any discontinuity, which here is the speed of sound. The corresponding eigenvectors are
formula_6
By decomposing the left state formula_7 in terms of the eigenvectors, we get for some formula_8
formula_9
Now we can solve for formula_10 and formula_11:
formula_12
Analogously
formula_13
for
formula_14
Using this, in the domain in between the two characteristics formula_15,
we get the final constant solution:
formula_16
and the (piecewise constant) solution in the entire domain formula_17:
formula_18
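The construction above is straightforward to evaluate numerically. The following C sketch uses arbitrary illustrative left and right states and computes the constant intermediate state between the two characteristics:
#include <stdio.h>
/* Linearised gas-dynamics Riemann problem: compute the constant state U_*
   between the two characteristics x = -a t and x = a t.                    */
int main(void)
{
    double a = 1.0, rho0 = 1.0;              /* sound speed and reference density */
    double rhoL = 1.2, uL = 0.0;             /* left state  (illustrative values) */
    double rhoR = 1.0, uR = 0.0;             /* right state (illustrative values) */

    /* expansion coefficients of U_L and U_R in the eigenvector basis */
    double alpha2 = (a * rhoL + rho0 * uL) / (2.0 * a * rho0);
    double beta1  = (a * rhoR - rho0 * uR) / (2.0 * a * rho0);

    /* U_* = beta1*e1 + alpha2*e2 with e1 = (rho0, -a), e2 = (rho0, a) */
    double rho_star = beta1 * rho0 + alpha2 * rho0;
    double u_star   = -beta1 * a   + alpha2 * a;

    printf("rho_* = %.3f  u_* = %.3f\n", rho_star, u_star);
    return 0;
}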
Although this is a simple example, it still shows the basic properties. Most notably, the characteristics decompose the solution into three domains. The propagation speed
of these two equations is equivalent to the propagation speed of sound.
The fastest characteristic defines the Courant–Friedrichs–Lewy (CFL) condition, which sets the restriction for the maximum time step for which an explicit numerical method is stable. Generally as more conservation equations are used, more characteristics are involved.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \n\\begin{bmatrix} \\rho \\\\ u \\end{bmatrix} = \\begin{bmatrix} \\rho_L \\\\ u_L\\end{bmatrix} \\text{ for } x \\leq 0\n\\qquad \\text{and} \\qquad \\begin{bmatrix} \\rho \\\\ u \\end{bmatrix} = \\begin{bmatrix} \\rho_R \\\\ u_R \\end{bmatrix} \\text{ for } x > 0\n"
},
{
"math_id": 1,
"text": "\n\\begin{align}\n\\frac{\\partial\\rho}{\\partial t} + \\rho_0 \\frac{\\partial u}{\\partial x} & = 0 \\\\[8pt]\n \\frac{\\partial u}{\\partial t} + \\frac{a^2}{\\rho_0} \\frac{\\partial \\rho}{\\partial x} & = 0\n\\end{align}\n"
},
{
"math_id": 2,
"text": "a\\ge 0"
},
{
"math_id": 3,
"text": "\nU_t + A \\cdot U_x = 0\n"
},
{
"math_id": 4,
"text": "\nU = \\begin{bmatrix} \\rho \\\\ u \\end{bmatrix}, \\quad A = \\begin{bmatrix} 0 & \\rho_0 \\\\ \\frac{a^2}{\\rho_0} & 0 \\end{bmatrix}\n"
},
{
"math_id": 5,
"text": " \\lambda_1 = -a, \\lambda_2 = a "
},
{
"math_id": 6,
"text": "\n\\mathbf{e}^{(1)} = \\begin{bmatrix} \\rho_0 \\\\ -a \\end{bmatrix}, \\quad \n\\mathbf{e}^{(2)} = \\begin{bmatrix} \\rho_0 \\\\ a \\end{bmatrix}.\n"
},
{
"math_id": 7,
"text": "u_L"
},
{
"math_id": 8,
"text": "\\alpha_{1},\\alpha_{2}"
},
{
"math_id": 9,
"text": "\nU_L = \\begin{bmatrix} \\rho_L \\\\ u_L \\end{bmatrix} = \\alpha_1\\mathbf{e}^{(1)} + \\alpha_2 \\mathbf{e}^{(2)} .\n"
},
{
"math_id": 10,
"text": "\\alpha_1"
},
{
"math_id": 11,
"text": "\\alpha_2"
},
{
"math_id": 12,
"text": "\n\\begin{align}\n\\alpha_1 & = \\frac{a \\rho_L - \\rho_0 u_L}{2a\\rho_0} \\\\[8pt]\n\\alpha_2 & = \\frac{a \\rho_L + \\rho_0 u_L}{2a\\rho_0}\n\\end{align}\n"
},
{
"math_id": 13,
"text": "U_R = \\begin{bmatrix} \\rho_R \\\\ u_R \\end{bmatrix} = \\beta_1\\mathbf{e}^{(1)}+\\beta_2\\mathbf{e}^{(2)} "
},
{
"math_id": 14,
"text": "\n\\begin{align}\n\\beta_1 & = \\frac{a \\rho_R - \\rho_0 u_R}{2a\\rho_0} \\\\[8pt]\n\\beta_2 & = \\frac{a \\rho_R + \\rho_0 u_R}{2a\\rho_0}\n\\end{align}\n"
},
{
"math_id": 15,
"text": "t=|x|/a"
},
{
"math_id": 16,
"text": "\nU_* = \\begin{bmatrix} \\rho_* \\\\ u_* \\end{bmatrix} \n=\\beta_1\\mathbf{e}^{(1)}+\\alpha_2\\mathbf{e}^{(2)}\n= \\beta_1 \\begin{bmatrix} \\rho_0 \\\\ -a\\end{bmatrix} + \\alpha_2 \\begin{bmatrix} \\rho_0 \\\\ a \\end{bmatrix}\n"
},
{
"math_id": 17,
"text": "t>0"
},
{
"math_id": 18,
"text": " U(t,x)\n= \\begin{bmatrix} \\rho(t,x)\\\\ u(t,x)\\end{bmatrix}\n=\\begin{cases} \nU_L, & 0<t \\le -x/a \\\\\nU_* , & 0\\le |x|/a <t \\\\ \nU_R,& 0<t \\le x/a \n\\end{cases}\n"
}
] | https://en.wikipedia.org/wiki?curid=9461236 |
9462323 | Wilhelmy plate | Device used to measure surface tension
A Wilhelmy plate is a thin plate that is used to measure equilibrium surface or interfacial tension at an air–liquid or liquid–liquid interface. In this method, the plate is oriented perpendicular to the interface, and the force exerted on it is measured. Based on the work of Ludwig Wilhelmy, this method finds wide use in the preparation and monitoring of Langmuir films.
Detailed description.
The Wilhelmy plate consists of a thin plate usually on the order of a few square centimeters in area. The plate is often made from filter paper, glass or platinum which may be roughened to ensure complete wetting. In fact, the results of the experiment do not depend on the material used, as long as the material is wetted by the liquid. The plate is cleaned thoroughly and attached to a balance with a thin metal wire. The force on the plate due to wetting is measured using a tensiometer or microbalance and used to calculate the surface tension (formula_0) using the Wilhelmy equation:
formula_1
where formula_2 is the wetted perimeter (formula_3), formula_4 is the plate width, formula_5 is the plate thickness, and formula_6 is the contact angle between the liquid phase and the plate. In practice the contact angle is rarely measured; instead, either literature values are used or complete wetting (formula_7) is assumed.
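As a numerical illustration (the force and plate dimensions below are made-up but typical values, and complete wetting is assumed), the Wilhelmy equation gives:
#include <stdio.h>
#include <math.h>
/* Surface tension from the Wilhelmy equation, gamma = F / (l cos(theta)),
   with l = 2w + 2d the wetted perimeter. Complete wetting (theta = 0) assumed. */
int main(void)
{
    double F = 2.93e-3;          /* measured force in newtons (illustrative)    */
    double w = 0.0199;           /* plate width in metres (19.9 mm)             */
    double d = 0.0002;           /* plate thickness in metres (0.2 mm)          */
    double theta = 0.0;          /* contact angle in radians                    */

    double l = 2.0 * w + 2.0 * d;             /* wetted perimeter                */
    double gamma = F / (l * cos(theta));      /* surface tension in N/m          */

    /* about 72.9 mN/m here, close to clean water near room temperature */
    printf("gamma = %.1f mN/m\n", gamma * 1000.0);
    return 0;
}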
In general, surface tension may be measured with high sensitivity using very thin plates ranging in thickness from 0.1 to 0.002 mm. The device is calibrated with pure liquids such as water and ethanol. The buoyancy correction is minimized by using a thin plate and dipping it as little as feasible. Complete wetting by water is achieved by using commercially available platinum plates that have been roughened to improve wettability.
Advantages and short brief.
If complete wetting is assumed (contact angle = 0), no correction factors are required to calculate surface tensions when using the Wilhelmy plate, unlike for a du Noüy ring. In addition, because the plate is not moved during measurements, the Wilhelmy plate allows accurate determination of surface kinetics on a wide range of timescales, and it displays low operator variance. In a typical plate experiment, the plate is lowered to the surface being analyzed until a meniscus is formed, and then raised so that the bottom edge of the plate lies on the plane of the undisturbed surface. If measuring a buried interface, the second (less dense) phase is then added on top of the undisturbed primary (denser) phase in such a way as to not disturb the meniscus. The force at equilibrium can then be used to determine the absolute surface or interfacial tension. Due to the large wetted area of the plate, the measurement is less susceptible to measurement errors than when a smaller probe is used. Also, the method has been described in several international measurement standards.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\gamma"
},
{
"math_id": 1,
"text": "\\gamma = \\frac{F}{l\\cos(\\theta)}"
},
{
"math_id": 2,
"text": "l"
},
{
"math_id": 3,
"text": "2w + 2d"
},
{
"math_id": 4,
"text": "w"
},
{
"math_id": 5,
"text": "d"
},
{
"math_id": 6,
"text": "\\theta"
},
{
"math_id": 7,
"text": "\\theta=0"
}
] | https://en.wikipedia.org/wiki?curid=9462323 |
9463925 | Wilson–Bappu effect | Correlation among statistics of a star
The Ca II K line in cool stars is among the strongest emission lines originating in a star's chromosphere. In 1957, Olin C. Wilson and M. K. Vainu Bappu reported on the remarkable correlation between the measured width of this emission line and the absolute visual magnitude of the star. This is known as the Wilson–Bappu effect. The correlation is independent of spectral type and applies to main-sequence stars of types G and K and to red giants of type M. The wider the emission feature, the intrinsically brighter the star, and this empirical correlation provides a way to estimate distances.
The main interest of the Wilson–Bappu effect is in its use for determining the distance of stars too remote for direct measurements. It can be studied using nearby stars, for which independent distance measurements are possible, and it can be expressed in a simple analytical form. In other words, the Wilson–Bappu effect can be calibrated with stars within 100 parsecs from the Sun. The width of the emission core of the K line (W0) can be measured in distant stars, so, knowing W0 and the analytical form expressing the Wilson–Bappu effect, we can determine the absolute magnitude of a star. The distance of a star follows immediately from the knowledge of both absolute and apparent magnitude, provided that the interstellar reddening of the star is either negligible or well known.
The first calibration of the Wilson–Bappu effect using distances from Hipparcos parallaxes was made in 1999 by Wallerstein et al. A later work also used W0 measurements on high-resolution spectra taken with CCDs, but with a smaller sample.
According to the latest calibration, the relation between absolute visual magnitude (Mv) expressed in magnitudes and W0, transformed in km/s, is the following:
formula_0
The data error, however, is quite large: about 0.5 mag, rendering the effect too imprecise to significantly improve the cosmic distance ladder. Another limitation comes from the fact that measuring W0 in distant stars is very challenging, requiring long observations at large telescopes. Sometimes the emission feature in the core of the K line is affected by interstellar extinction; in these cases an accurate measurement of W0 is not possible.
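As a sketch of how the calibration is used in practice, the following C fragment combines the relation above with the standard distance modulus; the input values are made up and interstellar extinction is neglected:
#include <stdio.h>
#include <math.h>
/* Distance estimate from the Wilson-Bappu calibration
   M_V = 33.2 - 18.0*log10(W0), with W0 in km/s, followed by the
   distance modulus m - M = 5*log10(d / 10 pc).                    */
int main(void)
{
    double W0  = 70.0;     /* emission-core width in km/s (illustrative)       */
    double m_V = 9.0;      /* apparent visual magnitude (illustrative)         */

    double M_V  = 33.2 - 18.0 * log10(W0);               /* absolute magnitude */
    double d_pc = 10.0 * pow(10.0, (m_V - M_V) / 5.0);   /* distance in parsec */

    printf("M_V = %.2f  d = %.0f pc\n", M_V, d_pc);
    return 0;
}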
The Wilson–Bappu effect is also valid for the Mg II k line. However, the Mg II k line is at 2796.34 Å in the ultraviolet, and since the radiation at this wavelength does not reach the Earth's surface it can only be observed with satellites such as the International Ultraviolet Explorer.
In 1977, Stencel published a spectroscopic survey that showed that the wing emission features seen in the broad wings of the K line among higher luminosity late type stars, share a correlation of line width and Mv similar to the Wilson–Bappu effect.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M_V=33.2-18.0 \\log(W_0)"
}
] | https://en.wikipedia.org/wiki?curid=9463925 |
946426 | Amplitude-shift keying | Digital modulation scheme
Amplitude-shift keying (ASK) is a form of amplitude modulation that represents digital data as variations in the amplitude of a carrier wave.
In an ASK system, a symbol, representing one or more bits, is sent by transmitting a fixed-amplitude carrier wave at a fixed frequency for a specific time duration. For example, if each symbol represents a single bit, then the carrier signal could be transmitted at nominal amplitude when the input value is 1, but transmitted at reduced amplitude or not at all when the input value is 0.
Any digital modulation scheme uses a finite number of distinct signals to represent digital data. ASK uses a finite number of amplitudes, each assigned a unique pattern of binary digits. Usually, each amplitude encodes an equal number of bits. Each pattern of bits forms the symbol that is represented by the particular amplitude. The demodulator, which is designed specifically for the symbol-set used by the modulator, determines the amplitude of the received signal and maps it back to the symbol it represents, thus recovering the original data. Frequency and phase of the carrier are kept constant.
Like AM, ASK is linear and sensitive to atmospheric noise, distortions, propagation conditions on different routes in PSTN, etc. Both ASK modulation and demodulation processes are relatively inexpensive. The ASK technique is also commonly used to transmit digital data over optical fiber. For LED transmitters, binary 1 is represented by a short pulse of light and binary 0 by the absence of light. Laser transmitters normally have a fixed "bias" current that causes the device to emit a low light level. This low level represents binary 0, while a higher-amplitude lightwave represents binary 1.
The simplest and most common form of ASK operates as a switch, using the presence of a carrier wave to indicate a binary one and its absence to indicate a binary zero. This type of modulation is called on-off keying (OOK), and is used at radio frequencies to transmit Morse code (referred to as continuous wave operation).
More sophisticated encoding schemes have been developed which represent data in groups using additional amplitude levels. For instance, a four-level encoding scheme can represent two bits with each shift in amplitude; an eight-level scheme can represent three bits; and so on. These forms of amplitude-shift keying require a high signal-to-noise ratio for their recovery, as by their nature much of the signal is transmitted at reduced power.
An ASK system can be divided into three blocks. The first one represents the transmitter, the second one is a linear model of the effects of the channel, and the third one shows the structure of the receiver. The following notation is used: ht(t) is the carrier signal for the transmission, hc(t) is the impulse response of the channel, n(t) is the noise introduced by the channel, hr(t) is the filter at the receiver, L is the number of voltage levels used for the transmission, and Ts is the time between the generation of two symbols.
Different symbols are represented with different voltages. If the maximum allowed value for the voltage is A, then all the possible values are in the range [−A, A] and they are given by:
formula_0
the difference between one voltage and the other is:
formula_1
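A short C sketch of this level spacing, with illustrative values for L and A:
#include <stdio.h>
/* The L allowed symbol voltages, equally spaced in [-A, A]:
   v_i = (2A/(L-1))*i - A for i = 0 .. L-1.                     */
int main(void)
{
    int L = 4;                 /* number of levels (2 bits per symbol) */
    double A = 3.0;            /* maximum allowed amplitude            */
    for (int i = 0; i < L; i++) {
        double v = (2.0 * A / (L - 1)) * i - A;
        printf("v[%d] = %+.1f\n", i, v);   /* -3.0, -1.0, +1.0, +3.0 */
    }
    return 0;
}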
Considering the picture, the symbols v[n] are generated randomly by the source S, then the impulse generator creates impulses with an area of v[n]. These impulses are sent to the filter ht to be sent through the channel. In other words, for each symbol a different carrier wave is sent with the relative amplitude.
Out of the transmitter, the signal s(t) can be expressed in the form:
formula_2
In the receiver, after the filtering through hr (t) the signal is:
formula_3
where we use the notation:
formula_4
where * indicates the convolution between two signals. After the A/D conversion the signal z[k] can be expressed in the form:
formula_5
In this relationship, the second term represents the symbol to be extracted. The others are unwanted: the first one is the effect of noise, the third one is due to the intersymbol interference.
If the filters are chosen so that g(t) will satisfy the Nyquist ISI criterion, then there will be no intersymbol interference and the value of the sum will be zero, so:
formula_6
the transmission will be affected only by noise.
Probability of error.
The probability density function of having an error of a given size can be modelled by a Gaussian function; the mean value will be the relative sent value, and its variance will be given by:
formula_7
where formula_8 is the spectral density of the noise within the band and Hr (f) is the continuous Fourier transform of the impulse response of the filter hr (t).
The probability of making an error is given by:
formula_9
where, for example, formula_10 is the conditional probability of making an error given that a symbol v0 has been sent and formula_11 is the probability of sending a symbol v0.
If the probability of sending any symbol is the same, then:
formula_12
If we represent all the probability density functions on the same plot against the possible value of the voltage to be transmitted, we get a picture like this (the particular case of formula_13 is shown):
The probability of making an error after a single symbol has been sent is the area of the Gaussian function falling under the functions for the other symbols. It is shown in cyan for just one of them. If we call formula_14 the area under one side of the Gaussian, the sum of all the areas will be: formula_15. The total probability of making an error can be expressed in the form:
formula_16
We now have to calculate the value of formula_14. In order to do that, we can move the origin of the reference wherever we want: the area below the function will not change. We are in a situation like the one shown in the following picture:
it does not matter which Gaussian function we are considering, the area we want to calculate will be the same. The value we are looking for will be given by the following integral:
formula_17
where formula_18 is the complementary error function. Putting all these results together, the probability to make an error is:
formula_19
From this formula we can easily see that the probability of making an error decreases if the maximum amplitude of the transmitted signal or the amplification of the system becomes greater; on the other hand, it increases if the number of levels or the power of the noise becomes greater.
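A small numerical check of this behaviour is sketched below in C; the values chosen for the received amplitude A g(0) and the noise standard deviation are arbitrary.
#include <stdio.h>
#include <math.h>
/* Symbol error probability of L-level ASK without intersymbol interference:
   P_e = (1 - 1/L) * erfc( A*g(0) / (sqrt(2) * (L-1) * sigma_N) ).            */
double ask_error_probability(int L, double A_g0, double sigma_N)
{
    double arg = A_g0 / (sqrt(2.0) * (L - 1) * sigma_N);
    return (1.0 - 1.0 / L) * erfc(arg);
}
int main(void)
{
    /* Arbitrary received amplitude A*g(0) = 3 and noise sigma_N = 0.5:
       at fixed amplitude the error probability grows quickly with L. */
    printf("L = 2: P_e = %.3e\n", ask_error_probability(2, 3.0, 0.5));
    printf("L = 4: P_e = %.3e\n", ask_error_probability(4, 3.0, 0.5));
    printf("L = 8: P_e = %.3e\n", ask_error_probability(8, 3.0, 0.5));
    return 0;
}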
This relationship is valid when there is no intersymbol interference, i.e. formula_20 is a Nyquist function. | [
{
"math_id": 0,
"text": "v_i = \\frac{2 A}{L-1} i - A; \\quad i = 0,1,\\dots, L-1"
},
{
"math_id": 1,
"text": "\\Delta = \\frac{2 A}{L - 1} "
},
{
"math_id": 2,
"text": "s (t) = \\sum_{n = -\\infty}^\\infty v[n] \\cdot h_t (t - n T_s)"
},
{
"math_id": 3,
"text": "z(t) = n_r (t) + \\sum_{n = -\\infty}^\\infty v[n] \\cdot g (t - n T_s)"
},
{
"math_id": 4,
"text": "\\begin{align}\n n_r (t) &= n (t) * h_r (t) \\\\\n g (t) &= h_t (t) * h_c (t) * h_r (t)\n\\end{align}"
},
{
"math_id": 5,
"text": "z[k] = n_r [k] + v[k] g[0] + \\sum_{n \\neq k} v[n] g[k-n]"
},
{
"math_id": 6,
"text": "z[k] = n_r [k] + v[k] g[0]"
},
{
"math_id": 7,
"text": "\\sigma_N^2 = \\int_{-\\infty}^{+\\infty} \\Phi_N (f) \\cdot |H_r (f)|^2 df"
},
{
"math_id": 8,
"text": "\\Phi_N (f)"
},
{
"math_id": 9,
"text": "P_e = P_{e|H_0} \\cdot P_{H_0} + P_{e|H_1} \\cdot P_{H_1} + \\cdots + P_{e|H_{L-1}} \\cdot P_{H_{L-1}} = \\sum^{L-1}_{k=0} P_{e|H_{k}} \\cdot P_{H_{k}}"
},
{
"math_id": 10,
"text": "P_{e|H_0}"
},
{
"math_id": 11,
"text": "P_{H_0}"
},
{
"math_id": 12,
"text": "P_{H_i} = \\frac{1}{L}"
},
{
"math_id": 13,
"text": "L = 4"
},
{
"math_id": 14,
"text": "P^+"
},
{
"math_id": 15,
"text": "2 L P^+ - 2 P^+"
},
{
"math_id": 16,
"text": "P_e = 2 \\left( 1 - \\frac{1}{L} \\right) P^+"
},
{
"math_id": 17,
"text": "P^+ = \\int_{\\frac{A g(0)}{L-1}}^{\\infty} \\frac{1}{\\sqrt{2 \\pi} \\sigma_N} e^{-\\frac{x^2}{2 \\sigma_N^2}} d x = \\frac{1}{2} \\operatorname{erfc} \\left( \\frac{A g(0)}{\\sqrt{2} (L-1) \\sigma_N} \\right) "
},
{
"math_id": 18,
"text": "\\operatorname{erfc}(x)"
},
{
"math_id": 19,
"text": "P_e = \\left( 1 - \\frac{1}{L} \\right) \\operatorname{erfc} \\left( \\frac{A g(0)}{\\sqrt{2} (L-1) \\sigma_N} \\right) "
},
{
"math_id": 20,
"text": "g(t)"
}
] | https://en.wikipedia.org/wiki?curid=946426 |
946510 | Multivariate gamma function | Multivariate generalization of the gamma function
In mathematics, the multivariate gamma function Γ"p" is a generalization of the gamma function. It is useful in multivariate statistics, appearing in the probability density function of the Wishart and inverse Wishart distributions, and the matrix variate beta distribution.
It has two equivalent definitions. One is given as the following integral over the formula_0 positive-definite real matrices:
formula_1
where formula_2 denotes the determinant of formula_3. The other one, which is more useful for obtaining a numerical result, is:
formula_4
In both definitions, formula_5 is a complex number whose real part satisfies formula_6. Note that formula_7 reduces to the ordinary gamma function. The second of the above definitions allows one to obtain directly the recursive relationships for formula_8:
formula_9
Thus formula_10, formula_11, and so on.
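The product form is straightforward to evaluate in code. The following C sketch works with logarithms (via the standard lgamma function) to avoid overflow; it is only an illustration and assumes a real argument satisfying formula_6.
#include <stdio.h>
#include <math.h>
/* log Gamma_p(a) = (p(p-1)/4) * log(pi) + sum_{j=1..p} log Gamma(a + (1-j)/2),
   for real a > (p-1)/2.                                                        */
double lmvgamma(int p, double a)
{
    const double PI = 3.141592653589793;
    double result = 0.25 * p * (p - 1) * log(PI);
    for (int j = 1; j <= p; j++)
        result += lgamma(a + (1.0 - j) / 2.0);
    return result;
}
int main(void)
{
    /* For p = 1 this reduces to the ordinary log-gamma function. */
    printf("log Gamma_1(5) = %.6f, lgamma(5) = %.6f\n", lmvgamma(1, 5.0), lgamma(5.0));
    printf("log Gamma_3(5) = %.6f\n", lmvgamma(3, 5.0));
    return 0;
}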
This can also be extended to non-integer values of formula_12 with the expression:
formula_13
where G is the Barnes G-function, the indefinite product of the gamma function.
The function was derived by Anderson from first principles; he also cites earlier work by Wishart, Mahalanobis, and others.
There also exists a version of the multivariate gamma function which instead of a single complex number takes a formula_12-dimensional vector of complex numbers as its argument. It generalizes the above defined multivariate gamma function insofar as the latter is obtained by a particular choice of multivariate argument of the former.
Derivatives.
We may define the multivariate digamma function as
formula_14
and the general polygamma function as
formula_15
Since formula_16
it follows that
formula_17
and since formula_18
it follows that
formula_19
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p \\times p"
},
{
"math_id": 1,
"text": "\n\\Gamma_p(a)=\n\\int_{S>0} \\exp\\left(\n-{\\rm tr}(S)\\right)\\,\n\\left|S\\right|^{a-\\frac{p+1}{2}}\ndS, \n"
},
{
"math_id": 2,
"text": "|S|"
},
{
"math_id": 3,
"text": "S"
},
{
"math_id": 4,
"text": "\n\\Gamma_p(a)=\n\\pi^{p(p-1)/4}\\prod_{j=1}^p\n\\Gamma(a+(1-j)/2).\n"
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": "\\Re(a) > (p-1)/2"
},
{
"math_id": 7,
"text": "\\Gamma_1(a)"
},
{
"math_id": 8,
"text": "p\\ge 2"
},
{
"math_id": 9,
"text": "\n\\Gamma_p(a) = \\pi^{(p-1)/2} \\Gamma(a) \\Gamma_{p-1}(a-\\tfrac{1}{2}) = \\pi^{(p-1)/2} \\Gamma_{p-1}(a) \\Gamma(a+(1-p)/2).\n"
},
{
"math_id": 10,
"text": "\\Gamma_2(a)=\\pi^{1/2}\\Gamma(a)\\Gamma(a-1/2)"
},
{
"math_id": 11,
"text": "\\Gamma_3(a)=\\pi^{3/2}\\Gamma(a)\\Gamma(a-1/2)\\Gamma(a-1)"
},
{
"math_id": 12,
"text": "p"
},
{
"math_id": 13,
"text": "\\Gamma_p(a)=\\pi^{p(p-1)/4} \\frac{G(a+\\frac{1}2)G(a+1)}{G(a+\\frac{1-p}2)G(a+1-\\frac{p}2)}"
},
{
"math_id": 14,
"text": "\\psi_p(a) = \\frac{\\partial \\log\\Gamma_p(a)}{\\partial a} = \\sum_{i=1}^p \\psi(a+(1-i)/2) ,"
},
{
"math_id": 15,
"text": "\\psi_p^{(n)}(a) = \\frac{\\partial^n \\log\\Gamma_p(a)}{\\partial a^n} = \\sum_{i=1}^p \\psi^{(n)}(a+(1-i)/2)."
},
{
"math_id": 16,
"text": "\\Gamma_p(a) = \\pi^{p(p-1)/4}\\prod_{j=1}^p \\Gamma\\left(a+\\frac{1-j}{2}\\right),"
},
{
"math_id": 17,
"text": "\\frac{\\partial \\Gamma_p(a)}{\\partial a} = \\pi^{p(p-1)/4}\\sum_{i=1}^p \\frac{\\partial\\Gamma\\left(a+\\frac{1-i}{2}\\right)}{\\partial a}\\prod_{j=1, j\\neq i}^p\\Gamma\\left(a+\\frac{1-j}{2}\\right)."
},
{
"math_id": 18,
"text": "\\frac{\\partial\\Gamma(a+(1-i)/2)}{\\partial a} = \\psi(a+(i-1)/2)\\Gamma(a+(i-1)/2)"
},
{
"math_id": 19,
"text": "\n\\begin{align}\n\\frac{\\partial \\Gamma_p(a)}{\\partial a} & = \\pi^{p(p-1)/4}\\prod_{j=1}^p \\Gamma(a+(1-j)/2) \\sum_{i=1}^p \\psi(a+(1-i)/2) \\\\[4pt]\n& = \\Gamma_p(a)\\sum_{i=1}^p \\psi(a+(1-i)/2).\n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=946510 |
9467383 | Cardiac index | Haemodynamic parameter
The cardiac index (CI) is a hemodynamic measure that represents the cardiac output (CO) of an individual divided by their body surface area (BSA), expressed in liters per minute per square meter (L/min/m²). This parameter provides a more accurate assessment of heart function relative to the size of the individual, as opposed to absolute cardiac output alone. Cardiac index is crucial in assessing patients with heart failure and other cardiovascular conditions, providing insight into the adequacy of cardiac function in relation to the individual's metabolic needs.
Calculation.
The index is usually calculated using the following formula:
formula_0
where CI is the cardiac index, CO is the cardiac output, SV is the stroke volume, HR is the heart rate, and BSA is the body surface area.
Body surface area calculation.
The cardiac index is adjusted for body surface area (BSA), typically calculated using the Mosteller formula. This adjustment allows for standardized comparison across individuals with different body sizes, improving the accuracy of CI measurements.
formula_1
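Putting the two formulas together, a short C sketch with purely illustrative patient values might read:
#include <stdio.h>
#include <math.h>
/* Cardiac index from cardiac output and the Mosteller body surface area. */
int main(void)
{
    double weight_kg = 70.0;     /* illustrative patient values */
    double height_cm = 175.0;
    double co_l_min  = 5.0;      /* cardiac output in L/min     */

    double bsa = sqrt(weight_kg * height_cm / 3600.0);   /* Mosteller, m^2 */
    double ci  = co_l_min / bsa;                         /* L/min/m^2      */

    /* prints roughly BSA = 1.84 m^2 and CI = 2.7, within the normal 2.6-4.2 range */
    printf("BSA = %.2f m^2, CI = %.2f L/min/m^2\n", bsa, ci);
    return 0;
}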
Clinical significance.
Cardiac index is a critical parameter in evaluating cardiac performance and the adequacy of tissue perfusion. In healthy adults, the normal range of cardiac index is generally between 2.6 to 4.2 L/min/m². Values below this range may indicate hypoperfusion and are often seen in conditions such as heart failure, hypovolemia, and cardiogenic shock. Conversely, elevated cardiac index values may be observed in hyperdynamic states, such as systemic inflammatory response syndrome (SIRS) or in patients with anemia. The cardiac index is thus a valuable tool in guiding therapeutic interventions in various clinical settings, including intensive care units.
In clinical practice, CI helps tailor therapies such as the administration of vasopressors in septic shock based on real-time assessments from tools like bedside echocardiograms. This metric is essential for evaluating heart performance relative to the body’s needs rather than in isolation, making it a key factor in managing various forms of shock.
There are four main types of shock where CI plays a crucial role: cardiogenic, hypovolemic, obstructive, and distributive shock.
CI is not only important in acute care settings but also in long-term health outcomes. Research, including the Framingham Heart Study, has linked low CI with an increased risk of dementia and Alzheimer’s disease. Additionally, higher CI in organ donors has been associated with improved survival rates in heart transplant recipients.
Measurement Techniques.
Cardiac index can be assessed using a variety of methods, which can be broadly categorized into noninvasive imaging and invasive techniques. The choice of method depends on the patient's condition, the specific clinical requirements, and the desired balance between accuracy and procedural risk.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{CI} = \\frac{\\text{CO}}{\\text{BSA}} = \\frac{\\text{SV}\\times\\text{HR}}{\\text{BSA}}"
},
{
"math_id": 1,
"text": "BSA (m^2) = \\sqrt \\left ( \\frac{Weight (kg) \\times Height (cm) }{3600} \\right )"
}
] | https://en.wikipedia.org/wiki?curid=9467383 |
9467420 | Data parallelism | Parallelization across multiple processors in parallel computing environments
Data parallelism is parallelization across multiple processors in parallel computing environments. It focuses on distributing the data across different nodes, which operate on the data in parallel. It can be applied on regular data structures like arrays and matrices by working on each element in parallel. It contrasts to task parallelism as another form of parallelism.
A data parallel job on an array of "n" elements can be divided equally among all the processors. Let us assume we want to sum all the elements of the given array and the time for a single addition operation is Ta time units. In the case of sequential execution, the time taken by the process will be "n"×Ta time units as it sums up all the elements of an array. On the other hand, if we execute this job as a data parallel job on 4 processors the time taken would reduce to ("n"/4)×Ta + merging overhead time units. Parallel execution results in a speedup of 4 over sequential execution. One important thing to note is that the locality of data references plays an important part in evaluating the performance of a data parallel programming model. Locality of data depends on the memory accesses performed by the program as well as the size of the cache.
History.
Exploitation of the concept of data parallelism started in 1960s with the development of the Solomon machine. The Solomon machine, also called a vector processor, was developed to expedite the performance of mathematical operations by working on a large data array (operating on multiple data in consecutive time steps). Concurrency of data operations was also exploited by operating on multiple data at the same time using a single instruction. These processors were called 'array processors'. In the 1980s, the term was introduced to describe this programming style, which was widely used to program Connection Machines in data parallel languages like C*. Today, data parallelism is best exemplified in graphics processing units (GPUs), which use both the techniques of operating on multiple data in space and time using a single instruction.
Most data parallel hardware supports only a fixed number of parallel levels, often only one. This means that within a parallel operation it is not possible to launch more parallel operations recursively, and means that programmers cannot make use of nested hardware parallelism. The programming language NESL was an early effort at implementing a nested data-parallel programming model on flat parallel machines, and in particular introduced the flattening transformation that transforms nested data parallelism to flat data parallelism. This work was continued by other languages such as Data Parallel Haskell and Futhark, although arbitrary nested data parallelism is not widely available in current data-parallel programming languages.
Description.
In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different distributed data. In some situations, a single execution thread controls operations on all the data. In others, different threads control the operation, but they execute the same code.
For instance, consider matrix multiplication and addition in a sequential manner as discussed in the example.
Example.
Below is the sequential pseudo-code for multiplication and addition of two matrices, where the result is stored in the matrix C. The pseudo-code for multiplication computes the dot product of each row of A with each column of B and stores the result in the output matrix C.
If the following programs were executed sequentially, the time taken to calculate the result would be of the order of formula_0 (assuming row lengths and column lengths of both matrices are n) and formula_1 for multiplication and addition respectively.
// Matrix multiplication
for (i = 0; i < row_length_A; i++) {
    for (k = 0; k < column_length_B; k++) {
        sum = 0;
        for (j = 0; j < column_length_A; j++) {
            sum += A[i][j] * B[j][k];
        }
        C[i][k] = sum;
    }
}

// Array addition
for (i = 0; i < n; i++) {
    c[i] = a[i] + b[i];
}
We can exploit data parallelism in the preceding code to execute it faster, since the arithmetic is loop independent. Parallelization of the matrix multiplication code is achieved by using OpenMP. An OpenMP directive, "omp parallel for", instructs the compiler to execute the code in the for loop in parallel. For multiplication, we can divide matrices A and B into blocks along rows and columns respectively. This allows us to calculate every element in matrix C individually, thereby making the task parallel. For example: "A[m x n] dot B [n x k]" can be finished in formula_1 instead of formula_2 when executed in parallel using "m*k" processors.
// Matrix multiplication in parallel
// The private clause gives each thread its own copies of the inner loop variables and sum.
#pragma omp parallel for private(j, k, sum)
for (i = 0; i < row_length_A; i++) {
    for (k = 0; k < column_length_B; k++) {
        sum = 0;
        for (j = 0; j < column_length_A; j++) {
            sum += A[i][j] * B[j][k];
        }
        C[i][k] = sum;
    }
}
As the example shows, the number of processors required grows as the matrix sizes increase. Keeping the execution time low is the priority, but as the matrix size increases we face other constraints, such as the complexity of such a system and its associated costs. Therefore, even with a limited number of processors in the system, we can still apply the same principle and divide the data into bigger chunks to calculate the product of two matrices.
For addition of arrays in a data parallel implementation, assume a more modest system with two central processing units (CPUs), A and B. CPU A could add all elements from the top half of the arrays, while CPU B could add all elements from the bottom half of the arrays. Since the two processors work in parallel, the job of performing array addition would take one half the time of performing the same operation in serial using one CPU alone.
The program expressed in pseudocode below—which applies some arbitrary operation, codice_0, on every element in the array codice_1—illustrates data parallelism:
if CPU = "a" then
lower_limit := 1
upper_limit := round(d.length / 2)
else if CPU = "b" then
lower_limit := round(d.length / 2) + 1
upper_limit := d.length
for i from lower_limit to upper_limit by 1 do
foo(d[i])
In an SPMD system executed on a 2-processor system, both CPUs will execute the code.
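A C rendering of the same split using OpenMP threads is sketched below; the array d, its length n, and the function foo are assumed to be defined elsewhere, mirroring the pseudocode above.

#include <omp.h>

void process_all(double d[], int n, void (*foo)(double *))
{
    #pragma omp parallel num_threads(2)
    {
        int id = omp_get_thread_num();      // thread 0 plays the role of CPU "a", thread 1 of CPU "b"
        int lower = (id == 0) ? 0 : n / 2;
        int upper = (id == 0) ? n / 2 : n;
        for (int i = lower; i < upper; i++)
            foo(&d[i]);                     // apply the arbitrary operation to each element in this half
    }
}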
Data parallelism emphasizes the distributed (parallel) nature of the data, as opposed to the processing (task parallelism). Most real programs fall somewhere on a continuum between task parallelism and data parallelism.
Steps to parallelization.
The process of parallelizing a sequential program can be broken down into four discrete steps.
Mixed data and task parallelism.
Data and task parallelism can be implemented simultaneously by combining them for the same application. This is called mixed data and task parallelism. Mixed parallelism requires sophisticated scheduling algorithms and software support. It is the best kind of parallelism when communication is slow and the number of processors is large.
Mixed data and task parallelism has many applications. It is particularly used in the following applications:
Data parallel programming environments.
A variety of data parallel programming environments are available today, most widely used of which are:
Applications.
Data parallelism finds applications in a variety of fields ranging from physics, chemistry, biology and materials science to signal processing. The sciences employ data parallelism for simulating models such as molecular dynamics, for sequence analysis of genome data, and for other physical phenomena. Driving forces for data parallelism in signal processing include video encoding, image and graphics processing, and wireless communications, to name a few.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(n^3)"
},
{
"math_id": 1,
"text": "O(n)"
},
{
"math_id": 2,
"text": "O(m*n*k)"
}
] | https://en.wikipedia.org/wiki?curid=9467420 |
946812 | Systematic risk | Vulnerability to significant events that affect aggregate outcomes
In finance and economics, systematic risk (in economics often called aggregate risk or undiversifiable risk) is vulnerability to events which affect aggregate outcomes such as broad market returns, total economy-wide resource holdings, or aggregate income. In many contexts, events like earthquakes, epidemics and major weather catastrophes pose aggregate risks that affect not only the distribution but also the total amount of resources. That is why it is also known as contingent risk, unplanned risk or risk events. If every possible outcome of a stochastic economic process is characterized by the same aggregate result (but potentially different distributional outcomes), the process then has no aggregate risk.
Properties.
Systematic or aggregate risk arises from market structure or dynamics which produce shocks or uncertainty faced by all agents in the market; such shocks could arise from government policy, international economic forces, or acts of nature. In contrast, specific risk (sometimes called residual risk, unsystematic risk, or idiosyncratic risk) is risk to which only specific agents or industries are vulnerable (and is uncorrelated with broad market returns). Due to the idiosyncratic nature of unsystematic risk, it can be reduced or eliminated through diversification; but since all market actors are vulnerable to systematic risk, it cannot be limited through diversification (but it may be insurable). As a result, assets whose returns are negatively correlated with broader market returns command higher prices than assets not possessing this property.
In some cases, aggregate risk exists due to institutional or other constraints on market completeness. For countries or regions lacking access to broad hedging markets, events like earthquakes and adverse weather shocks can also act as costly aggregate risks. Robert Shiller has found that, despite the globalization progress of recent decades, country-level aggregate income risks are still significant and could potentially be reduced through the creation of better global hedging markets (thereby potentially becoming idiosyncratic, rather than aggregate, risks). Specifically, Shiller advocated for the creation of macro futures markets. The benefits of such a mechanism would depend on the degree to which macro conditions are correlated across countries.
In finance.
Systematic risk plays an important role in portfolio allocation. Risk which cannot be eliminated through diversification commands returns in excess of the risk-free rate (while idiosyncratic risk does not command such returns since it can be diversified). Over the long run, a well-diversified portfolio provides returns which correspond with its exposure to systematic risk; investors face a trade-off between expected returns and systematic risk. Therefore, an investor's desired returns correspond with their desired exposure to systematic risk and corresponding asset selection. Investors can only reduce a portfolio's exposure to systematic risk by sacrificing expected returns.
An important concept for evaluating an asset's exposure to systematic risk is beta. Since beta indicates the degree to which an asset's return is correlated with broader market outcomes, it is simply an indicator of an asset's vulnerability to systematic risk. Hence, the capital asset pricing model (CAPM) directly ties an asset's equilibrium price to its exposure to systematic risk.
A simple example.
Consider an investor who purchases stock in many firms from most global industries. This investor is vulnerable to systematic risk but has diversified away the effects of idiosyncratic risks on his portfolio value; further reduction in risk would require him to acquire risk-free assets with lower returns (such as U.S. Treasury securities). On the other hand, an investor who invests all of his money in one industry whose returns are typically uncorrelated with broad market outcomes (beta close to zero) has limited his exposure to systematic risk but, due to lack of diversification, is highly vulnerable to idiosyncratic risk.
In economics.
Aggregate risk can be generated by a variety of sources. Fiscal, monetary, and regulatory policy can all be sources of aggregate risk. In some cases, shocks from phenomena like weather and natural disaster can pose aggregate risks. Small economies can also be subject to aggregate risks generated by international conditions such as terms of trade shocks.
Aggregate risk has potentially large implications for economic growth. For example, in the presence of credit rationing, aggregate risk can cause bank failures and hinder capital accumulation. Banks may respond to increases in profitability-threatening aggregate risk by raising standards for both quality-based and quantity-based credit rationing in order to reduce monitoring costs; but the practice of lending to small numbers of borrowers reduces the diversification of bank portfolios (concentration risk) while also denying credit to some potentially productive firms or industries. As a result, capital accumulation and the overall productivity level of the economy can decline.
In economic modeling, model outcomes depend heavily on the nature of risk. Modelers often incorporate aggregate risk through shocks to endowments (budget constraints), productivity, monetary policy, or external factors like terms of trade. Idiosyncratic risks can be introduced through mechanisms like individual labor productivity shocks; if agents possess the ability to trade assets and lack borrowing constraints, the welfare effects of idiosyncratic risks are minor. The welfare costs of aggregate risk, though, can be significant.
Under some conditions, aggregate risk can arise from the aggregation of micro shocks to individual agents. This can be the case in models with many agents and strategic complementarities; situations with such characteristics include: innovation, search and trading, production in the presence of input complementarities, and information sharing. Such situations can generate aggregate data which are empirically indistinguishable from a data-generating process with aggregate shocks.
Example: Arrow–Debreu equilibrium.
The following example is from Mas-Colell, Whinston, and Green (1995). Consider a simple exchange economy with two identical agents, one (divisible) good, and two potential states of the world (which occur with some probability). Each agent has expected utility in the form formula_0 where formula_1 and formula_2 are the probabilities of states 1 and 2 occurring, respectively. In state 1, agent 1 is endowed with one unit of the good while agent 2 is endowed with nothing. In state 2, agent 2 is endowed with one unit of the good while agent 1 is endowed with nothing. That is, denoting the vector of endowments in state "i" as formula_3 we have formula_4, formula_5. Then the aggregate endowment of this economy is one good regardless of which state is realized; that is, the economy has no aggregate risk. It can be shown that, if agents are allowed to make trades, the ratio of the price of a claim on the good in state 1 to the price of a claim on the good in state 2 is equal to the ratios of their respective probabilities of occurrence (and, hence, the marginal rates of substitution of each agent are also equal to this ratio). That is, formula_6. If allowed to do so, agents make trades such that their consumption is equal in either state of the world.
Now consider an example with aggregate risk. The economy is the same as that described above except for endowments: in state 1, agent 1 is endowed two units of the good while agent 2 still receives zero units; and in state 2, agent 2 still receives one unit of the good while agent 1 receives nothing. That is, formula_7, formula_5. Now, if state 1 is realized, the aggregate endowment is 2 units; but if state 2 is realized, the aggregate endowment is only 1 unit; this economy is subject to aggregate risk. Agents cannot fully insure and guarantee the same consumption in either state. It can be shown that, in this case, the price ratio will be less than the ratio of probabilities of the two states: formula_8, so formula_9. Thus, for example, if the two states occur with equal probabilities, then formula_10. This is the well-known finance result that the contingent claim that delivers more resources in the state of low market returns has a higher price.
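A small numerical check of this inequality can be made by assuming, purely for illustration, logarithmic utility and equal state probabilities (neither assumption comes from the example above): in the symmetric equilibrium each agent consumes half of the aggregate endowment in every state, so consumption is 1 in state 1 and 1/2 in state 2, and the state-price ratio equals the ratio of probability-weighted marginal utilities.

#include <stdio.h>

int main(void)
{
    double pi1 = 0.5, pi2 = 0.5;            // equal state probabilities (illustrative assumption)
    double c1 = 2.0 / 2.0, c2 = 1.0 / 2.0;  // each agent consumes half of the aggregate endowment (2 or 1)
    // With u(x) = ln(x), marginal utility u'(x) = 1/x, so p1/p2 = pi1*u'(c1) / (pi2*u'(c2)).
    double price_ratio = (pi1 / c1) / (pi2 / c2);
    printf("p1/p2 = %.3f, pi1/pi2 = %.3f\n", price_ratio, pi1 / pi2);  // prints 0.500 and 1.000
    return 0;
}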
In heterogeneous agent models.
While the inclusion of aggregate risk is common in macroeconomic models, considerable challenges arise when researchers attempt to incorporate aggregate uncertainty into models with heterogeneous agents. In this case, the entire distribution of allocational outcomes is a state variable which must be carried across periods. This gives rise to the well-known curse of dimensionality. One approach to the dilemma is to let agents ignore attributes of the aggregate distribution, justifying this assumption by referring to bounded rationality. Den Haan (2010) evaluates several algorithms which have been applied to solving the Krusell and Smith (1998) model, showing that solution accuracy can depend heavily on solution method. Researchers should carefully consider the results of accuracy tests while choosing solution methods and pay particular attention to grid selection.
In projects.
Systematic risk exists in projects, where it is called the overall project risk, bred by the combined effect of uncertainty in external environmental factors such as PESTLE and VUCA. It is also called contingent or unplanned risk, or simply uncertainty, because it is of unknown likelihood and unknown impact. In contrast, systemic risk is known as the individual project risk, caused by internal factors or attributes of the project system or culture. This is also known as inherent, planned, event or condition risk, caused by known unknowns such as variability or ambiguity of impact but with 100% probability of occurrence. Both systemic and systematic risks are residual risks.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\pi_1*u_i(x_{1i})+\\pi_2*u_i(x_{2i})"
},
{
"math_id": 1,
"text": "\\pi_1"
},
{
"math_id": 2,
"text": "\\pi_2"
},
{
"math_id": 3,
"text": "\\omega_i,"
},
{
"math_id": 4,
"text": "\\omega_1=(1,0)"
},
{
"math_id": 5,
"text": "\\omega_2=(0,1)"
},
{
"math_id": 6,
"text": "p_1/p_2=\\pi_1/\\pi_2"
},
{
"math_id": 7,
"text": "\\omega_1=(2,0)"
},
{
"math_id": 8,
"text": "p_1/p_2<\\pi_1/\\pi_2"
},
{
"math_id": 9,
"text": "p_1/\\pi_1<p_2/\\pi_2"
},
{
"math_id": 10,
"text": "p_1<p_2"
}
] | https://en.wikipedia.org/wiki?curid=946812 |
9469328 | Abhyankar–Moh theorem | Every embedding of a complex line into the complex affine plane extends to an automorphism
In mathematics, the Abhyankar–Moh theorem states that if formula_0 is a complex line in the complex affine plane formula_1, then every embedding of formula_0 into formula_1 extends to an automorphism of the plane. It is named after Shreeram Shankar Abhyankar and Tzuong-Tsieng Moh, who published it in 1975. More generally, the same theorem applies to lines and planes over any algebraically closed field of characteristic zero, and to certain well-behaved subsets of higher-dimensional complex affine spaces. | [
{
"math_id": 0,
"text": "L"
},
{
"math_id": 1,
"text": "\\mathbb{C}^2"
}
] | https://en.wikipedia.org/wiki?curid=9469328 |
946975 | Line (geometry) | Straight figure with zero width and depth
In geometry, a straight line, usually abbreviated line, is an infinitely long object with no width, depth, or curvature, an idealization of such physical objects as a straightedge, a taut string, or a ray of light. Lines are spaces of dimension one, which may be embedded in spaces of dimension two, three, or higher. The word "line" may also refer, in everyday life, to a line segment, which is a part of a line delimited by two points (its "endpoints").
Euclid's "Elements" defines a straight line as a "breadthless length" that "lies evenly with respect to the points on itself", and introduced several postulates as basic unprovable properties on which the rest of geometry was established. Euclidean line and "Euclidean geometry" are terms introduced to avoid confusion with generalizations introduced since the end of the 19th century, such as non-Euclidean, projective, and affine geometry.
Properties.
In the Greek deductive geometry of Euclid's "Elements", a general "line" (now called a "curve") is defined as a "breadthless length", and a "straight line" (now called a line segment) was defined as a line "which lies evenly with the points on itself".291 These definitions appeal to readers' physical experience, relying on terms that are not themselves defined, and the definitions are never explicitly referenced in the remainder of the text. In modern geometry, a line is usually either taken as a primitive notion with properties given by axioms,95 or else defined as a set of points obeying a linear relationship, for instance when real numbers are taken to be primitive and geometry is established analytically in terms of numerical coordinates.
In an axiomatic formulation of Euclidean geometry, such as that of Hilbert (modern mathematicians added to Euclid's original axioms to fill perceived logical gaps),108 a line is stated to have certain properties that relate it to other lines and points. For example, for any two distinct points, there is a unique line containing them, and any two distinct lines intersect at most at one point.300 In two dimensions (i.e., the Euclidean plane), two lines that do not intersect are called parallel. In higher dimensions, two lines that do not intersect are parallel if they are contained in a plane, or skew if they are not.
On a Euclidean plane, a line can be represented as a boundary between two regions.104 Any collection of finitely many lines partitions the plane into convex polygons (possibly unbounded); this partition is known as an arrangement of lines.
In higher dimensions.
In three-dimensional space, a first degree equation in the variables "x", "y", and "z" defines a plane, so two such equations, provided the planes they give rise to are not parallel, define a line which is the intersection of the planes. More generally, in "n"-dimensional space "n"−1 first-degree equations in the "n" coordinate variables define a line under suitable conditions.
In more general Euclidean space, R"n" (and analogously in every other affine space), the line "L" passing through two different points "a" and "b" is the subset
formula_0
The direction of the line is from a reference point "a" ("t" = 0) to another point "b" ("t" = 1), or in other words, in the direction of the vector "b" − "a". Different choices of "a" and "b" can yield the same line.
Collinear points.
Three points are said to be "collinear" if they lie on the same line. Three points "usually" determine a plane, but in the case of three collinear points this does "not" happen.
In affine coordinates, in "n"-dimensional space the points "X" = ("x"1, "x"2, ..., "x""n"), "Y" = ("y"1, "y"2, ..., "y""n"), and "Z" = ("z"1, "z"2, ..., "z""n") are collinear if the matrix
formula_1
has a rank less than 3. In particular, for three points in the plane ("n" = 2), the above matrix is square and the points are collinear if and only if its determinant is zero.
Equivalently for three points in a plane, the points are collinear if and only if the slope between one pair of points equals the slope between any other pair of points (in which case the slope between the remaining pair of points will equal the other slopes). By extension, "k" points in a plane are collinear if and only if any ("k"–1) pairs of points have the same pairwise slopes.
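As a small illustration of the planar case, the following C routine (the name and tolerance are arbitrary choices) tests collinearity of three points by checking that the 3 × 3 determinant above vanishes.

#include <math.h>
#include <stdbool.h>

// True if (x1,y1), (x2,y2), (x3,y3) are collinear, i.e. the determinant of the
// matrix with rows (1, xi, yi) is zero up to a small tolerance.
bool collinear(double x1, double y1, double x2, double y2, double x3, double y3)
{
    double det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1);
    return fabs(det) < 1e-12;
}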
In Euclidean geometry, the Euclidean distance "d"("a","b") between two points "a" and "b" may be used to express the collinearity between three points by:
The points "a", "b" and "c" are collinear if and only if "d"("x","a") = "d"("c","a") and "d"("x","b") = "d"("c","b") implies "x" = "c".
However, there are other notions of distance (such as the Manhattan distance) for which this property is not true.
In the geometries where the concept of a line is a primitive notion, as may be the case in some synthetic geometries, other methods of determining collinearity are needed.
Types.
In a sense, all lines in Euclidean geometry are equal, in that, without coordinates, one can not tell them apart from one another. However, lines may play special roles with respect to other objects in the geometry and be divided into types according to that relationship. For instance, with respect to a conic (a circle, ellipse, parabola, or hyperbola), lines can be:
In the context of determining parallelism in Euclidean geometry, a transversal is a line that intersects two other lines that may or not be parallel to each other.
For more general algebraic curves, lines could also be:
With respect to triangles we have:
For a convex quadrilateral with at most two parallel sides, the Newton line is the line that connects the midpoints of the two diagonals.
For a hexagon with vertices lying on a conic we have the Pascal line and, in the special case where the conic is a pair of lines, we have the Pappus line.
Parallel lines are lines in the same plane that never cross. Intersecting lines share a single point in common. Coincidental lines coincide with each other—every point that is on either one of them is also on the other.
Perpendicular lines are lines that intersect at right angles.
In three-dimensional space, skew lines are lines that are not in the same plane and thus do not intersect each other.
In axiomatic systems.
The concept of line is often considered in geometry as a primitive notion in axiomatic systems,95 meaning it is not being defined by other concepts. In those situations where a line is a defined concept, as in coordinate geometry, some other fundamental ideas are taken as primitives. When the line concept is a primitive, the properties of lines are dictated by the axioms which they must satisfy.
In a non-axiomatic or simplified axiomatic treatment of geometry, the concept of a primitive notion may be too abstract to be dealt with. In this circumstance, it is possible to provide a "description" or "mental image" of a primitive notion, to give a foundation to build the notion on which would formally be based on the (unstated) axioms. Descriptions of this type may be referred to, by some authors, as definitions in this informal style of presentation. These are not true definitions, and could not be used in formal proofs of statements. The "definition" of line in Euclid's Elements falls into this category.95 Even in the case where a specific geometry is being considered (for example, Euclidean geometry), there is no generally accepted agreement among authors as to what an informal description of a line should be when the subject is not being treated formally.
Definition.
Linear equation.
Lines in a Cartesian plane or, more generally, in affine coordinates, are characterized by linear equations. More precisely, every line formula_2 (including vertical lines) is the set of all points whose coordinates ("x", "y") satisfy a linear equation; that is,
formula_3
where "a", "b" and "c" are fixed real numbers (called coefficients) such that "a" and "b" are not both zero. Using this form, vertical lines correspond to equations with "b" = 0.
One can further suppose either "c" = 1 or "c" = 0, by dividing everything by c if it is not zero.
There are many variant ways to write the equation of a line which can all be converted from one to another by algebraic manipulation. The above form is sometimes called the "standard form". If the constant term is put on the left, the equation becomes
formula_4
and this is sometimes called the "general form" of the equation. However, this terminology is not universally accepted, and many authors do not distinguish these two forms.
These forms are generally named by the type of information (data) about the line that is needed to write down the form. Some of the important data of a line is its slope, x-intercept, known points on the line and y-intercept.
The equation of the line passing through two different points formula_5 and formula_6 may be written as
formula_7
If "x"0 ≠ "x"1, this equation may be rewritten as
formula_8
or
formula_9
In two dimensions, the equation for non-vertical lines is often given in the "slope–intercept form":
formula_10
where "m" is the slope or gradient of the line and "b" is the "y"-intercept of the line.
The slope of the line through points formula_11 and formula_12, when formula_13, is given by formula_14 and the equation of this line can be written formula_15.
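A small C helper corresponding to these formulas (the name is arbitrary): it computes the slope and y-intercept of the line through two points and reports failure for vertical lines, which have no slope–intercept form.

#include <stdbool.h>

bool slope_intercept(double xa, double ya, double xb, double yb, double *m, double *b)
{
    if (xa == xb)
        return false;               // vertical line: the slope is undefined
    *m = (yb - ya) / (xb - xa);     // slope m = (yb - ya) / (xb - xa)
    *b = ya - *m * xa;              // y-intercept, from y = m(x - xa) + ya evaluated at x = 0
    return true;
}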
As a note, lines in three dimensions may also be described as the simultaneous solutions of two linear equations
formula_16
formula_17
such that formula_18 and formula_19 are not proportional (the relations formula_20 imply formula_21). This follows since in three dimensions a single linear equation typically describes a plane and a line is what is common to two distinct intersecting planes.
Parametric equation.
Parametric equations are also used to specify lines, particularly in those in three dimensions or more because in more than two dimensions lines "cannot" be described by a single linear equation.
In three dimensions lines are frequently described by parametric equations:
formula_22
where "x", "y", and "z" are the coordinates of a point on the line as functions of the parameter "t"; "x"0, "y"0, and "z"0 are the coordinates of a known point on the line; and "a", "b", and "c" are the components of a direction vector of the line.
Parametric equations for lines in higher dimensions are similar in that they are based on the specification of one point on the line and a direction vector.
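A minimal C sketch of the parametric description (the struct and function names are arbitrary): given a point on the line and a direction vector, it returns the point reached at parameter value t.

typedef struct { double x, y, z; } Point3;

// Evaluates x = x0 + a*t, y = y0 + b*t, z = z0 + c*t for the given parameter t.
Point3 point_on_line(Point3 p0, double a, double b, double c, double t)
{
    Point3 p = { p0.x + a * t, p0.y + b * t, p0.z + c * t };
    return p;
}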
Hesse normal form.
The "normal form" (also called the "Hesse normal form", after the German mathematician Ludwig Otto Hesse), is based on the "normal segment" for a given line, which is defined to be the line segment drawn from the origin perpendicular to the line. This segment joins the origin with the closest point on the line to the origin. The normal form of the equation of a straight line on the plane is given by:
formula_23
where formula_24 is the angle of inclination of the normal segment (the oriented angle from the unit vector of the "x"-axis to this segment), and "p" is the (positive) length of the normal segment. The normal form can be derived from the standard form formula_25 by dividing all of the coefficients by
formula_26
Unlike the slope-intercept and intercept forms, this form can represent any line but also requires only two finite parameters, formula_24 and "p", to be specified. If "p" > 0, then formula_24 is uniquely defined modulo 2"π". On the other hand, if the line is through the origin ("c" = "p" = 0), one drops the "c"/|"c"| term to compute formula_27 and formula_28, and it follows that formula_24 is only defined modulo π.
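A C sketch of this conversion (the function name is arbitrary): given a line in the standard form ax + by = c with c ≠ 0, it returns the angle φ and the distance p of the Hesse normal form.

#include <math.h>

// Converts ax + by = c (c != 0) to x*cos(phi) + y*sin(phi) - p = 0 by dividing
// the coefficients by (c/|c|) * sqrt(a^2 + b^2), so that p comes out positive.
void hesse_normal_form(double a, double b, double c, double *phi, double *p)
{
    double s = copysign(sqrt(a * a + b * b), c);
    *phi = atan2(b / s, a / s);   // angle of the normal segment measured from the x-axis
    *p = c / s;                   // positive distance from the origin to the line
}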
Other representations.
Vectors.
The vector equation of the line through points A and B is given by formula_29 (where λ is a scalar).
If a is vector OA and b is vector OB, then the equation of the line can be written: formula_30.
A ray starting at point "A" is described by limiting λ. One ray is obtained if λ ≥ 0, and the opposite ray comes from λ ≤ 0.
Polar coordinates.
In a Cartesian plane, polar coordinates ("r", "θ") are related to Cartesian coordinates by the parametric equations:
formula_31
In polar coordinates, the equation of a line not passing through the origin—the point with coordinates (0, 0)—can be written
formula_32
with "r" > 0 and formula_33 Here, p is the (positive) length of the line segment perpendicular to the line and delimited by the origin and the line, and formula_24 is the (oriented) angle from the x-axis to this segment.
It may be useful to express the equation in terms of the angle formula_34 between the x-axis and the line. In this case, the equation becomes
formula_35
with "r" > 0 and formula_36
These equations can be derived from the normal form of the line equation by setting formula_37 and formula_38 and then applying the angle difference identity for sine or cosine.
These equations can also be proven geometrically by applying right triangle definitions of sine and cosine to the right triangle that has a point of the line and the origin as vertices, and the line and its perpendicular through the origin as sides.
The previous forms do not apply for a line passing through the origin, but a simpler formula can be written: the polar coordinates formula_39 of the points of a line passing through the origin and making an angle of formula_40 with the x-axis, are the pairs formula_39 such that
formula_41
Generalizations of the Euclidean line.
In modern mathematics, given the multitude of geometries, the concept of a line is closely tied to the way the geometry is described. For instance, in analytic geometry, a line in the plane is often defined as the set of points whose coordinates satisfy a given linear equation, but in a more abstract setting, such as incidence geometry, a line may be an independent object, distinct from the set of points which lie on it.
When a geometry is described by a set of axioms, the notion of a line is usually left undefined (a so-called primitive object). The properties of lines are then determined by the axioms which refer to them. One advantage to this approach is the flexibility it gives to users of the geometry. Thus in differential geometry, a line may be interpreted as a geodesic (shortest path between points), while in some projective geometries, a line is a 2-dimensional vector space (all linear combinations of two independent vectors). This flexibility also extends beyond mathematics and, for example, permits physicists to think of the path of a light ray as being a line.
Projective geometry.
In many models of projective geometry, the representation of a line rarely conforms to the notion of the "straight curve" as it is visualised in Euclidean geometry. In elliptic geometry we see a typical example of this.108 In the spherical representation of elliptic geometry, lines are represented by great circles of a sphere with diametrically opposite points identified. In a different model of elliptic geometry, lines are represented by Euclidean planes passing through the origin. Even though these representations are visually distinct, they satisfy all the properties (such as, two points determining a unique line) that make them suitable representations for lines in this geometry.
The "shortness" and "straightness" of a line, interpreted as the property that the distance along the line between any two of its points is minimized (see triangle inequality), can be generalized and leads to the concept of geodesics in metric spaces.
Extensions.
Ray.
Given a line and any point "A" on it, we may consider "A" as decomposing this line into two parts.
Each such part is called a ray and the point "A" is called its "initial point". It is also known as half-line, a one-dimensional half-space. The point A is considered to be a member of the ray. Intuitively, a ray consists of those points on a line passing through "A" and proceeding indefinitely, starting at "A", in one direction only along the line. However, in order to use this concept of a ray in proofs a more precise definition is required.
Given distinct points "A" and "B", they determine a unique ray with initial point "A". As two points define a unique line, this ray consists of all the points between "A" and "B" (including "A" and "B") and all the points "C" on the line through "A" and "B" such that "B" is between "A" and "C". This is, at times, also expressed as the set of all points "C" on the line determined by "A" and "B" such that "A" is not between "B" and "C". A point "D", on the line determined by "A" and "B" but not in the ray with initial point "A" determined by "B", will determine another ray with initial point "A". With respect to the "AB" ray, the "AD" ray is called the "opposite ray".
Thus, we would say that two different points, "A" and "B", define a line and a decomposition of this line into the disjoint union of an open segment ("A", "B") and two rays, "BC" and "AD" (the point "D" is not drawn in the diagram, but is to the left of "A" on the line "AB"). These are not opposite rays since they have different initial points.
In Euclidean geometry two rays with a common endpoint form an angle.
The definition of a ray depends upon the notion of betweenness for points on a line. It follows that rays exist only for geometries for which this notion exists, typically Euclidean geometry or affine geometry over an ordered field. On the other hand, rays do not exist in projective geometry nor in a geometry over a non-ordered field, like the complex numbers or any finite field.
Line segment.
A line segment is a part of a line that is bounded by two distinct end points and contains every point on the line between its end points. Depending on how the line segment is defined, either of the two end points may or may not be part of the line segment. Two or more line segments may have some of the same relationships as lines, such as being parallel, intersecting, or skew, but unlike lines they may be none of these, if they are coplanar and either do not intersect or are collinear.
Number line.
A point on the number line corresponds to a real number and vice versa. Usually, integers are evenly spaced on the line, with positive numbers on the right and negative numbers on the left. As an extension of the concept, an imaginary line representing imaginary numbers can be drawn perpendicular to the number line at zero. The two lines form the complex plane, a geometrical representation of the set of complex numbers.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L = \\left\\{ (1 - t) \\, a + t b \\mid t\\in\\mathbb{R}\\right\\}."
},
{
"math_id": 1,
"text": "\\begin{bmatrix}\n 1 & x_1 & x_2 & \\cdots & x_n \\\\\n 1 & y_1 & y_2 & \\cdots & y_n \\\\\n 1 & z_1 & z_2 & \\cdots & z_n\n\\end{bmatrix}"
},
{
"math_id": 2,
"text": "L"
},
{
"math_id": 3,
"text": "L = \\{(x,y)\\mid ax+by=c\\}, "
},
{
"math_id": 4,
"text": "ax + by - c = 0,"
},
{
"math_id": 5,
"text": "P_0( x_0, y_0 )"
},
{
"math_id": 6,
"text": "P_1(x_1, y_1)"
},
{
"math_id": 7,
"text": "(y - y_0)(x_1 - x_0) = (y_1 - y_0)(x - x_0)."
},
{
"math_id": 8,
"text": "y=(x-x_0)\\,\\frac{y_1-y_0}{x_1-x_0}+y_0"
},
{
"math_id": 9,
"text": "y=x\\,\\frac{y_1-y_0}{x_1-x_0}+\\frac{x_1y_0-x_0y_1}{x_1-x_0}\\,."
},
{
"math_id": 10,
"text": " y = mx + b "
},
{
"math_id": 11,
"text": "A(x_a, y_a)"
},
{
"math_id": 12,
"text": "B(x_b, y_b)"
},
{
"math_id": 13,
"text": "x_a \\neq x_b"
},
{
"math_id": 14,
"text": "m = (y_b - y_a)/(x_b - x_a)"
},
{
"math_id": 15,
"text": "y = m (x - x_a) + y_a"
},
{
"math_id": 16,
"text": " a_1 x + b_1 y + c_1 z - d_1 = 0 "
},
{
"math_id": 17,
"text": " a_2 x + b_2 y + c_2 z - d_2 = 0 "
},
{
"math_id": 18,
"text": " (a_1,b_1,c_1)"
},
{
"math_id": 19,
"text": " (a_2,b_2,c_2)"
},
{
"math_id": 20,
"text": " a_1 = t a_2, b_1 = t b_2, c_1 = t c_2 "
},
{
"math_id": 21,
"text": "t = 0"
},
{
"math_id": 22,
"text": "\\begin{align}\nx &= x_0 + at \\\\\ny &= y_0 + bt \\\\\nz &= z_0 + ct\n\\end{align}"
},
{
"math_id": 23,
"text": " x \\cos \\varphi + y \\sin \\varphi - p = 0 ,"
},
{
"math_id": 24,
"text": "\\varphi"
},
{
"math_id": 25,
"text": "ax + by = c"
},
{
"math_id": 26,
"text": "\\frac{c}{|c|}\\sqrt{a^2 + b^2}."
},
{
"math_id": 27,
"text": "\\sin\\varphi"
},
{
"math_id": 28,
"text": "\\cos\\varphi"
},
{
"math_id": 29,
"text": "\\mathbf{r} = \\mathbf{OA} + \\lambda\\, \\mathbf{AB}"
},
{
"math_id": 30,
"text": "\\mathbf{r} = \\mathbf{a} + \\lambda (\\mathbf{b} - \\mathbf{a})"
},
{
"math_id": 31,
"text": "x=r\\cos\\theta, \\quad y=r\\sin\\theta."
},
{
"math_id": 32,
"text": "r = \\frac p {\\cos (\\theta-\\varphi)},"
},
{
"math_id": 33,
"text": "\\varphi-\\pi/2 < \\theta < \\varphi + \\pi/2."
},
{
"math_id": 34,
"text": "\\alpha=\\varphi+\\pi/2"
},
{
"math_id": 35,
"text": "r=\\frac p {\\sin (\\theta-\\alpha)},"
},
{
"math_id": 36,
"text": "0 < \\theta < \\alpha + \\pi."
},
{
"math_id": 37,
"text": "x = r \\cos\\theta,"
},
{
"math_id": 38,
"text": "y = r \\sin\\theta,"
},
{
"math_id": 39,
"text": "(r, \\theta)"
},
{
"math_id": 40,
"text": "\\alpha"
},
{
"math_id": 41,
"text": "r\\ge 0,\\qquad \\text{and} \\quad \\theta=\\alpha \\quad\\text{or}\\quad \\theta=\\alpha +\\pi."
}
] | https://en.wikipedia.org/wiki?curid=946975 |
94707 | Enharmonic equivalence | Distinct pitch classes sounding the same
In music, two written notes have enharmonic equivalence if they produce the same pitch but are notated differently. Similarly, written intervals, chords, or key signatures are considered enharmonic if they represent identical pitches that are notated differently. The term derives from Latin , in turn from Late Latin , from Ancient Greek (), from ('in') and ('harmony').
Definition.
<score>{ \magnifyStaff #5/4 \omit Score.TimeSignature \clef F \time 2/1 fis2 s ges s }</score>The notes F♯ and G♭ are enharmonic equivalents in 12 TET.
<score>\relative c' { \magnifyStaff #5/4 \omit Score.TimeSignature \clef C \time 2/1 gisis2 s beses s}</score>G and B are enharmonic equivalents in 12 TET; both are the same as A♮.
The predominant tuning system in Western music is twelve-tone equal temperament (12 TET), where each octave is divided into twelve equivalent half steps or semitones. The notes F and G are a whole step apart, so the note one semitone above F (F♯) and the note one semitone below G (G♭) indicate the same pitch. These written notes are "enharmonic", or "enharmonically equivalent". The choice of notation for a pitch can depend on its role in harmony; this notation keeps modern music compatible with earlier tuning systems, such as meantone temperaments. The choice can also depend on the note's readability in the context of the surrounding pitches. Multiple accidentals can produce other enharmonic equivalents; for example, F (double-sharp) is enharmonically equivalent to G♮. Prior to this modern use of the term, "enharmonic" referred to notes that were "very close" in pitch — closer than the smallest step of a diatonic scale — but not quite identical. In a tuning system without equivalent half steps, F♯ and G♭ would not indicate the same pitch.
<score>\relative c' { \magnifyStaff #5/4 \omit Score.TimeSignature \time 2/1 <c fis>1 <c ges'>}</score>Enharmonic tritones: Augmented 4th = diminished 5th on C.
Sets of notes that involve pitch relationships — scales, key signatures, or intervals,
for example — can also be referred to as "enharmonic" (e.g., the keys of C♯ major and D♭ major contain identical pitches and are therefore enharmonic). Identical intervals notated with different (enharmonically equivalent) written pitches are also referred to as enharmonic. The interval of a tritone above C may be written as a diminished fifth from C to G♭, or as an augmented fourth (C to F♯). Representing the C as a B♯ leads to other enharmonically equivalent options for notation.
Enharmonic equivalents can be used to improve the readability of music, as when a sequence of notes is more easily read using sharps or flats. This may also reduce the number of accidentals required.
Examples.
At the end of the bridge section of Jerome Kern's "All the Things You Are", a G♯ (the sharp 5 of an augmented C chord) becomes an enharmonically equivalent A♭ (the third of an F minor chord) at the beginning of the returning "A" section.
Beethoven's Piano Sonata in E Minor, Op. 90, contains a passage where a B♭ becomes an A♯, altering its musical function. The first two bars of the following passage unfold a descending B♭ major scale. Immediately following this, the B♭s become A♯s, the leading tone of B minor:
Chopin's Prelude No. 15, known as the "Raindrop Prelude", features a pedal point on the note A♭ throughout its opening section.
In the middle section, these are changed to G♯s as the key changes to C-sharp minor. This is primarily a notational convenience, since D-flat minor would require many double-flats and be difficult to read:
The concluding passage of the slow movement of Schubert's final piano sonata in B♭ (D960) contains a dramatic enharmonic change. In bars 102–3, a B♯, the third of a G♯ major triad, transforms into C♮ as the prevailing harmony changes to C major:
<score>\relative c" { \magnifyStaff #5/4 \omit Score.TimeSignature \set doubleSlurs = ##t <bis dis gis>1 (<c e g!>)}</score>G-sharp to C progression.
Other tuning conventions.
The standard tuning system used in Western music is twelve-tone equal temperament tuning, where the octave is divided into 12 equal semitones. In this system, written notes that produce the same pitch, such as C♯ and D♭, are called "enharmonic". In other tuning systems, such pairs of written notes do not produce an identical pitch, but can still be called "enharmonic" using the older, original sense of the word.
Pythagorean.
In Pythagorean tuning, all pitches are generated from a series of justly tuned perfect fifths, each with a frequency ratio of 3 to 2. If the first note in the series is an A♭, the thirteenth note in the series, G♯, is "higher" than the seventh octave (1 octave = a frequency ratio of 2 to 1, so 7 octaves = a ratio of 128 to 1) of the A♭ by a small interval called a Pythagorean comma. This interval is expressed mathematically as:
formula_0
Meantone.
In quarter-comma meantone, there will be a discrepancy between, for example, G♯ and A♭. If middle C's frequency is f, the next highest C has a frequency of 2f. The quarter-comma meantone has perfectly tuned ("just") major thirds, which means major thirds with a frequency ratio of exactly 5 to 4. To form a just major third with the C above it, A♭ and the C above it must be in the ratio 5 to 4, so A♭ needs to have the frequency
formula_1
To form a just major third above E, however, G♯ needs to form the ratio 5 to 4 with E, which, in turn, needs to form the ratio 5 to 4 with C, making the frequency of G♯
formula_2
This leads to G♯ and A♭ being different pitches; G♯ is, in fact, 41 cents (41% of a semitone) lower in pitch. The difference is the interval called the enharmonic diesis, or a frequency ratio of 128 to 125. On a piano tuned in equal temperament, both G♯ and A♭ are played by striking the same key, so both have a frequency
formula_3
Such small differences in pitch can escape notice when presented as melodic intervals; however, when they are sounded as chords, especially as long-duration chords, the difference between meantone intonation and equal-tempered intonation can be quite noticeable.
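The sizes quoted above can be checked numerically; the short C program below (the helper name is arbitrary) converts the frequency ratios to cents, giving about 23.46 cents for the Pythagorean comma and about 41.06 cents for the meantone gap between G♯ and A♭ (the ratio 1.6/1.5625 = 128/125).

#include <math.h>
#include <stdio.h>

static double cents(double ratio) { return 1200.0 * log2(ratio); }

int main(void)
{
    printf("Pythagorean comma: %.2f cents\n", cents(pow(1.5, 12) / pow(2.0, 7)));  // 3^12 / 2^19
    printf("G#/Ab gap in quarter-comma meantone: %.2f cents\n", cents(1.6 / 1.5625));
    return 0;
}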
Enharmonically equivalent pitches can be referred to with a single name in many situations, such as the numbers of integer notation used in serialism and musical set theory and as employed by MIDI.
Enharmonic genus.
In ancient Greek music the enharmonic was one of the three Greek genera in music in which the tetrachords are divided (descending) as a ditone plus two microtones. The ditone can be anywhere from roughly 3.55 to 4.35 semitones, and the microtones can be anything smaller than 1 semitone. Some examples of enharmonic genera are
Enharmonic key.
Some key signatures have an enharmonic equivalent that contains the same pitches, albeit spelled differently. In twelve-tone equal temperament, there are three pairs each of major and minor enharmonically equivalent keys: B major/C♭ major, G♯ minor/A♭ minor, F♯ major/G♭ major, D♯ minor/E♭ minor, C♯ major/D♭ major and A♯ minor/B♭ minor.
Theoretical key.
Keys that require more than 7 sharps or flats are called theoretical keys. They have enharmonically equivalent keys with simpler key signatures, so are rarely seen.
F flat major - (E major)
G sharp major - (A flat major)
D flat minor - (C sharp minor)
E sharp minor - (F minor)
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\ \\hbox{twelve fifths}\\ }{\\ \\hbox{seven octaves}\\ } \n~=~ \\frac{ 1 }{\\ 2^7}\\left(\\frac{ 3 }{\\ 2\\ }\\right)^{12}\n~=~ \\frac{\\ 3^{12} }{\\ 2^{19} }\n~=~ \\frac{\\ 531\\ 441\\ }{\\ 524\\ 288\\ }\n~=~ 1.013\\ 643\\ 264\\ \\ldots\n~\\approx~ 23.460\\ 010 \\hbox{ cents} \n~."
},
{
"math_id": 1,
"text": "\\frac{\\ 4\\ }{ 5 }\\ (2 f) = \\frac{\\ 8\\ }{ 5 }\\ f = 1.6\\ f ~~."
},
{
"math_id": 2,
"text": " \\left( \\frac{\\ 5\\ }{ 4 } \\right)^2\\ f ~=~ \\frac{\\ 25\\ }{ 16 }\\ f ~=~ 1.5625\\ f ~."
},
{
"math_id": 3,
"text": "\\ 2^{\\left(\\ 8\\ /\\ 12\\ \\right)}\\ f ~=~ 2^{\\left(\\ 2\\ /\\ 3\\ \\right)}\\ f ~\\approx~ 1.5874\\ f ~."
}
] | https://en.wikipedia.org/wiki?curid=94707 |
947167 | Schreier refinement theorem | Statement in group theory
In mathematics, the Schreier refinement theorem of group theory states that any two subnormal series of subgroups of a given group have equivalent refinements, where two series are equivalent if there is a bijection between their factor groups that sends each factor group to an isomorphic one.
The theorem is named after the Austrian mathematician Otto Schreier, who proved it in 1928. It provides an elegant proof of the Jordan–Hölder theorem. It is often proved using the Zassenhaus lemma; a short proof can also be given by intersecting the terms in one subnormal series with those in the other series.
Example.
Consider formula_0, where formula_1 is the symmetric group of degree 3. The alternating group formula_2 is a normal subgroup of formula_1, so we have the two subnormal series
formula_3
formula_4
with respective factor groups formula_5 and formula_6.<br>
The two subnormal series are not equivalent, but they have equivalent refinements:
formula_7
with factor groups isomorphic to formula_8 and
formula_9
with factor groups isomorphic to formula_10. | [
{
"math_id": 0,
"text": "\\mathbb{Z}_2 \\times S_3"
},
{
"math_id": 1,
"text": "S_3"
},
{
"math_id": 2,
"text": "A_3"
},
{
"math_id": 3,
"text": "\\{0\\} \\times \\{(1)\\} \\; \\triangleleft \\; \\mathbb{Z}_2 \\times \\{(1)\\} \\; \\triangleleft \\; \\mathbb{Z}_2 \\times S_3,"
},
{
"math_id": 4,
"text": "\\{0\\} \\times \\{(1)\\} \\; \\triangleleft \\; \\{0\\} \\times A_3 \\; \\triangleleft \\; \\mathbb{Z}_2 \\times S_3,"
},
{
"math_id": 5,
"text": "(\\mathbb{Z}_2,S_3)"
},
{
"math_id": 6,
"text": "(A_3,\\mathbb{Z}_2\\times\\mathbb{Z}_2)"
},
{
"math_id": 7,
"text": "\\{0\\} \\times \\{(1)\\} \\; \\triangleleft \\; \\mathbb{Z}_2 \\times \\{(1)\\} \\; \\triangleleft \\; \\mathbb{Z}_2 \\times A_3 \\; \\triangleleft \\; \\mathbb{Z}_2 \\times S_3"
},
{
"math_id": 8,
"text": "(\\mathbb{Z}_2, A_3, \\mathbb{Z}_2)"
},
{
"math_id": 9,
"text": "\\{0\\} \\times \\{(1)\\} \\; \\triangleleft \\; \\{0\\} \\times A_3 \\; \\triangleleft \\; \\{0\\} \\times S_3 \\; \\triangleleft \\; \\mathbb{Z}_2 \\times S_3"
},
{
"math_id": 10,
"text": "(A_3, \\mathbb{Z}_2, \\mathbb{Z}_2)"
}
] | https://en.wikipedia.org/wiki?curid=947167 |
947234 | Dunkl operator | Mathematical operator
In mathematics, particularly the study of Lie groups, a Dunkl operator is a certain kind of mathematical operator, involving differential operators but also reflections in an underlying space.
Formally, let "G" be a Coxeter group with reduced root system "R" and "k""v" an arbitrary "multiplicity" function on "R" (so "k""u" = "k""v" whenever the reflections σ"u" and σ"v" corresponding to the roots "u" and "v" are conjugate in "G"). Then, the Dunkl operator is defined by:
formula_0
where formula_1 is the "i"-th component of "v", 1 ≤ "i" ≤ "N", "x" in "R""N", and "f" a smooth function on "R""N".
Dunkl operators were introduced by Charles Dunkl (1989). One of Dunkl's major results was that Dunkl operators "commute," that is, they satisfy formula_2 just as partial derivatives do. Thus Dunkl operators represent a meaningful generalization of partial derivatives. | [
{
"math_id": 0,
"text": "T_i f(x) = \\frac{\\partial}{\\partial x_i} f(x) + \\sum_{v\\in R_+} k_v \\frac{f(x) - f(x \\sigma_v)}{\\left\\langle x, v\\right\\rangle} v_i"
},
{
"math_id": 1,
"text": "v_i "
},
{
"math_id": 2,
"text": "T_i (T_j f(x)) = T_j (T_i f(x))"
}
] | https://en.wikipedia.org/wiki?curid=947234 |
947383 | Tensile structure | Structure whose members are only in tension
In structural engineering, a tensile structure is a construction of elements carrying only tension and no compression or bending. The term "tensile" should not be confused with tensegrity, which is a structural form with both tension and compression elements. Tensile structures are the most common type of thin-shell structures.
Most tensile structures are supported by some form of compression or bending elements, such as masts (as in The O2, formerly the Millennium Dome), compression rings or beams.
A tensile membrane structure is most often used as a roof, as they can economically and attractively span large distances. Tensile membrane structures may also be used as complete buildings, with a few common applications being sports facilities, warehousing and storage buildings, and exhibition venues.
History.
This form of construction has only become more rigorously analyzed and widespread in large structures in the latter part of the twentieth century. Tensile structures have long been used in tents, where the guy ropes and tent poles provide pre-tension to the fabric and allow it to withstand loads.
Russian engineer Vladimir Shukhov was one of the first to develop practical calculations of stresses and deformations of tensile structures, shells and membranes. Shukhov designed eight tensile-structure and thin-shell exhibition pavilions for the Nizhny Novgorod Fair of 1896, covering an area of 27,000 square meters. A more recent large-scale use of a membrane-covered tensile structure is the Sidney Myer Music Bowl, constructed in 1958.
Antonio Gaudi used the concept in reverse to create a compression-only structure for the Colonia Guell Church. He created a hanging tensile model of the church to calculate the compression forces and to experimentally determine the column and vault geometries.
The concept was later championed by German architect and engineer Frei Otto, whose first use of the idea was in the construction of the West German pavilion at Expo 67 in Montreal. Otto next used the idea for the roof of the Olympic Stadium for the 1972 Summer Olympics in Munich.
Since the 1960s, tensile structures have been promoted by designers and engineers such as Ove Arup, Buro Happold, Frei Otto, Mahmoud Bodo Rasch, Eero Saarinen, Horst Berger, Matthew Nowicki, Jörg Schlaich, and David Geiger.
Steady technological progress has increased the popularity of fabric-roofed structures. The low weight of the materials makes construction easier and cheaper than standard designs, especially when vast open spaces have to be covered.
Cable and membrane structures.
Membrane materials.
Common materials for doubly curved fabric structures are PTFE-coated fiberglass and PVC-coated polyester. These are woven materials with different strengths in different directions. The warp fibers (those fibers which are originally straight—equivalent to the starting fibers on a loom) can carry greater load than the weft or fill fibers, which are woven between the warp fibers.
Other structures make use of ETFE film, either as single layer or in cushion form (which can be inflated, to provide good insulation properties or for aesthetic effect—as on the Allianz Arena in Munich). ETFE cushions can also be etched with patterns in order to let different levels of light through when inflated to different levels.
In daylight, fabric membrane translucency offers soft diffused naturally lit spaces, while at night, artificial lighting can be used to create an ambient exterior luminescence. They are most often supported by a structural frame as they cannot derive their strength from double curvature.
Cables.
Cables can be of mild steel, high strength steel (drawn carbon steel), stainless steel, polyester or aramid fibres. Structural cables are made of a series of small strands twisted or bound together to form a much larger cable. Steel cables are either spiral strand, where circular rods are twisted together and "glued" using a polymer, or locked coil strand, where individual interlocking steel strands form the cable (often with a spiral strand core).
Spiral strand is slightly weaker than locked coil strand. Steel spiral strand cables have a Young's modulus, "E" of 150±10 kN/mm2 (or 150±10 GPa) and come in sizes from 3 to 90 mm diameter. Spiral strand suffers from construction stretch, where the strands compact when the cable is loaded. This is normally removed by pre-stretching the cable and cycling the load up and down to 45% of the ultimate tensile load.
Locked coil strand typically has a Young's Modulus of 160±10 kN/mm2 and comes in sizes from 20 mm to 160 mm diameter.
The properties of the individual strands of different materials are shown in the table below, where UTS is ultimate tensile strength, or the breaking load:
Structural forms.
Air-supported structures are a form of tensile structures where the fabric envelope is supported by pressurised air only.
The majority of fabric structures derive their strength from their doubly curved shape. By forcing the fabric to take on double-curvature the fabric gains sufficient stiffness to withstand the loads it is subjected to (for example wind and snow loads). In order to induce an adequately doubly curved form it is most often necessary to pretension or prestress the fabric or its supporting structure.
Form-finding.
The behaviour of structures which depend upon prestress to attain their strength is non-linear, so anything other than a very simple cable has, until the 1990s, been very difficult to design. The most common way to design doubly curved fabric structures was to construct scale models of the final buildings in order to understand their behaviour and to conduct form-finding exercises. Such scale models often employed stocking material or tights, or soap film, as they behave in a very similar way to structural fabrics (they cannot carry shear).
Soap films have uniform stress in every direction and require a closed boundary to form. They naturally form a minimal surface—the form with minimal area and embodying minimal energy. They are however very difficult to measure. For a large film, its weight can seriously affect its form.
For a membrane with curvature in two directions, the basic equation of equilibrium is:
formula_0
where "t"1 and "t"2 are the tensions in the two principal directions, "R"1 and "R"2 are the corresponding radii of curvature, and "w" is the applied load per unit area of the membrane.
Lines of principal curvature have no twist and intersect other lines of principal curvature at right angles.
A geodesic or geodetic line is usually the shortest line between two points on the surface. These lines are typically used when defining the cutting pattern seam-lines. This is due to their relative straightness after the planar cloths have been generated, resulting in lower cloth wastage and closer alignment with the fabric weave.
In a pre-stressed but unloaded surface "w" = 0, so formula_1.
In a soap film surface tensions are uniform in both directions, so "R"1 = −"R"2.
It is now possible to use powerful non-linear numerical analysis programs (or finite element analysis) to formfind and design fabric and cable structures. The programs must allow for large deflections.
The final shape, or form, of a fabric structure depends upon:
It is important that the final form will not allow ponding of water, as this can deform the membrane and lead to local failure or progressive failure of the entire structure.
Snow loading can be a serious problem for membrane structure, as the snow often will not flow off the structure as water will. For example, this has in the past caused the (temporary) collapse of the Hubert H. Humphrey Metrodome, an air-inflated structure in Minneapolis, Minnesota. Some structures prone to ponding use heating to melt snow which settles on them.
There are many different doubly curved forms, many of which have special mathematical properties. The most basic doubly curved form is the saddle shape, which can be a hyperbolic paraboloid (not all saddle shapes are hyperbolic paraboloids). This is a doubly ruled surface and is often used in lightweight shell structures (see hyperboloid structures). True ruled surfaces are rarely found in tensile structures. Other forms are anticlastic saddles, various radial and conical tent forms, and any combination of them.
Pretension.
Pretension is tension artificially induced in the structural elements in addition to any self-weight or imposed loads they may carry. It is used to ensure that the normally very flexible structural elements remain stiff under all possible loads.
A day-to-day example of pretension is a shelving unit supported by wires running from floor to ceiling. The wires hold the shelves in place because they are tensioned – if the wires were slack the system would not work.
Pretension can be applied to a membrane by stretching it from its edges or by pretensioning cables which support it and hence changing its shape. The level of pretension applied determines the shape of a membrane structure.
Alternative form-finding approach.
The alternative approximated approach to the form-finding problem solution is based on the total energy balance of a grid-nodal system. Due to its physical meaning this approach is called the stretched grid method (SGM).
Simple mathematics of cables.
Transversely and uniformly loaded cable.
A uniformly loaded cable spanning between two supports forms a curve intermediate between a catenary curve and a parabola. The simplifying assumption can be made that it approximates a circular arc (of radius "R").
By equilibrium:
The horizontal and vertical reactions:
formula_2
formula_3
By geometry:
The length of the cable:
formula_4
The tension in the cable:
formula_5
By substitution:
formula_6
The tension is also equal to:
formula_7
The extension of the cable upon being loaded is (from Hooke's Law, where the axial stiffness, "k," is equal to formula_8):
formula_9
where "E" is the Young's modulus of the cable and "A" is its cross-sectional area.
If an initial pretension, formula_10 is added to the cable, the extension becomes:
formula_11
Combining the above equations gives:
formula_12
By plotting the left hand side of this equation against "T," and plotting the right hand side on the same axes, also against "T," the intersection will give the actual equilibrium tension in the cable for a given loading "w" and a given pretension formula_10.
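Instead of plotting the two sides, the same equilibrium tension can be found numerically. The following is a minimal Python sketch (not part of the original text) that solves the combined equation by bisection; the function name and the input values at the bottom are illustrative assumptions, given in consistent units (kN and metres):

```python
from math import asin

def equilibrium_tension(w, S, L0, T0, EA, T_hi=1e6, tol=1e-9):
    """Solve  L0*(T - T0)/EA + L0 = (2*T/w)*asin(w*S/(2*T))  for T by bisection.
    w: load per unit length, S: span, L0: unstressed length,
    T0: pretension, EA: axial stiffness.  Assumes the bracket contains a root."""
    def residual(T):
        return (2.0 * T / w) * asin(w * S / (2.0 * T)) - (L0 + L0 * (T - T0) / EA)

    lo = 0.5 * w * S * (1.0 + 1e-12)   # asin argument must stay <= 1
    hi = T_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid                    # root lies in [lo, mid]
        else:
            lo = mid                    # root lies in [mid, hi]
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Example numbers: 10 m span, taut 10 m cable, 1 kN/m load, 10 kN pretension.
print(equilibrium_tension(w=1.0, S=10.0, L0=10.0, T0=10.0, EA=2.0e4))
```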
Cable with central point load.
A similar solution to that above can be derived where:
By equilibrium:
formula_13
formula_14
By geometry:
formula_15
This gives the following relationship:
formula_16
As before, plotting the left hand side and right hand side of the equation against the tension, "T," will give the equilibrium tension for a given pretension, formula_10 and load, "W".
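The same bisection idea works for the central point load, treating the stretched length on the left-hand side as a function of "T". A minimal sketch with hypothetical input values (kN and metres), again not part of the original text:

```python
from math import sqrt

def equilibrium_tension_point_load(W, S, L0, T0, EA, T_hi=1e6, tol=1e-9):
    """Solve L0 + L0*(T - T0)/EA = sqrt(S**2 + 4*(W*L/(4*T))**2) by bisection,
    where L is the stretched length given by the left-hand side."""
    def residual(T):
        L = L0 * (1.0 + (T - T0) / EA)               # stretched length at tension T
        return sqrt(S**2 + (W * L / (2.0 * T))**2) - L

    lo, hi = 0.5 * W * (1.0 + 1e-12), T_hi           # geometry requires T > W/2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(equilibrium_tension_point_load(W=5.0, S=10.0, L0=10.0, T0=10.0, EA=2.0e4))
```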
Tensioned cable oscillations.
The fundamental natural frequency, "f"1 of tensioned cables is given by:
formula_17
where "T" = tension in newtons, "m" = mass in kilograms and "L" = span length.
Notable structures.
<templatestyles src="Stack/styles.css"/>
Classification numbers.
The Construction Specifications Institute (CSI) and Construction Specifications Canada (CSC), MasterFormat 2018 Edition, Division 05 and 13:
CSI/CSC MasterFormat 1995 Edition:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "w = \\frac{t_1}{R_1} + \\frac{t_2}{R_2}"
},
{
"math_id": 1,
"text": "\\frac{t_1}{R_1} = -\\frac{t_2}{R_2}"
},
{
"math_id": 2,
"text": "H = \\frac{wS^2}{8d} "
},
{
"math_id": 3,
"text": "V = \\frac{wS}{2}"
},
{
"math_id": 4,
"text": "L = 2R\\arcsin\\frac{S}{2R}"
},
{
"math_id": 5,
"text": "T = \\sqrt{H^2+V^2}"
},
{
"math_id": 6,
"text": "T = \\sqrt{\\left(\\frac{wS^2}{8d}\\right)^2 + \\left(\\frac{wS}{2}\\right)^2}"
},
{
"math_id": 7,
"text": "T = wR"
},
{
"math_id": 8,
"text": "k = \\frac{{EA}}{{L}}"
},
{
"math_id": 9,
"text": "e = \\frac{TL}{EA}"
},
{
"math_id": 10,
"text": "T_0"
},
{
"math_id": 11,
"text": "e = L - L_0 = \\frac{L_0(T-T_0)}{EA}"
},
{
"math_id": 12,
"text": "\\frac{L_0(T-T_0)}{EA} + L_0 = \\frac{2T\\arcsin\\left(\\frac{wS}{2T}\\right)}{w}"
},
{
"math_id": 13,
"text": "W = \\frac{4Td}{L}"
},
{
"math_id": 14,
"text": "d = \\frac{WL}{4T}"
},
{
"math_id": 15,
"text": "L = \\sqrt{S^2 + 4d^2} = \\sqrt{S^2 + 4\\left(\\frac{WL}{4T}\\right)^2}"
},
{
"math_id": 16,
"text": "L_0 + \\frac{L_0(T-T_0)}{EA} = \\sqrt{S^2 + 4\\left(\\frac{W(L_0+\\frac{L_0(T-T_0)}{EA})}{4T}\\right)^2}"
},
{
"math_id": 17,
"text": "f_1=\\frac{\\sqrt{\\left(\\frac{T}{m}\\right)}}{2L}"
}
] | https://en.wikipedia.org/wiki?curid=947383 |
9476 | Electron | Elementary particle with negative charge
The electron (symbol e−, or β− in nuclear reactions) is a subatomic particle whose electric charge is negative one elementary charge. Electrons belong to the first generation of the lepton particle family, and are generally thought to be elementary particles because they have no known components or substructure. The electron's mass is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. Being fermions, no two electrons can occupy the same quantum state, per the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy.
Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism, chemistry, and thermal conductivity; they also participate in gravitational, electromagnetic, and weak interactions. Since an electron has charge, it has a surrounding electric field; if that electron is moving relative to an observer, the observer will observe it to generate a magnetic field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated.
Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications, such as tribology or frictional charging, electrolysis, electrochemistry, battery technologies, electronics, welding, cathode-ray tubes, photoelectricity, photovoltaic solar panels, electron microscopes, radiation therapy, lasers, gaseous ionization detectors, and particle accelerators.
Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons without allows the composition of the two known as atoms. Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding.
In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms. Irish physicist George Johnstone Stoney named this charge "electron" in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897 during the cathode-ray tube experiment.
Electrons participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance, when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron, except that it carries electrical charge of the opposite sign. When an electron collides with a positron, both particles can be annihilated, producing gamma ray photons.
History.
Discovery of effect of electric force.
The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity. In his 1600 treatise "De Magnete", the English scientist William Gilbert coined the Neo-Latin term "electrica", to refer to those substances with property similar to that of amber which attract small objects after being rubbed. Both "electric" and "electricity" are derived from the Latin "electrum" (also the root of the alloy of the same name), which came from the Greek word for amber, ἤλεκτρον ("ēlektron").
Discovery of two kinds of charges.
In the early 1700s, French chemist Charles François du Fay found that if a charged gold-leaf is repulsed by glass rubbed with silk, then the same charged gold-leaf is attracted by amber rubbed with wool. From this and other results of similar types of experiments, du Fay concluded that electricity consists of two electrical fluids, "vitreous" fluid from glass rubbed with silk and "resinous" fluid from amber rubbed with wool. These two fluids can neutralize each other when combined. American scientist Ebenezer Kinnersley later also independently reached the same conclusion. A decade later Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but a single electrical fluid showing an excess (+) or deficit (−). He gave them the modern charge nomenclature of positive and negative respectively. Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier, and which situation was a deficit.
Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges. Beginning in 1846, German physicist Wilhelm Eduard Weber theorized that electricity was composed of positively and negatively charged fluids, and their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion. He was able to estimate the value of this elementary charge "e" by means of Faraday's laws of electrolysis. However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity".
Stoney initially coined the term "electrolion" in 1881. Ten years later, he switched to "electron" to describe these elementary charges, writing in 1894: "... an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name "electron"". A 1906 proposal to change to "electrion" failed because Hendrik Lorentz preferred to keep "electron". The word "electron" is a combination of the words "electric" and "ion". The suffix -"on" which is now used to designate other subatomic particles, such as a proton or neutron, is in turn derived from electron.
Discovery of free electrons outside matter.
While studying electrical conductivity in rarefied gases in 1859, the German physicist Julius Plücker observed the radiation emitted from the cathode caused phosphorescent light to appear on the tube wall near the cathode; and the region of the phosphorescent light could be moved by application of a magnetic field. In 1869, Plücker's student Johann Wilhelm Hittorf found that a solid body placed in between the cathode and the phosphorescence would cast a shadow upon the phosphorescent region of the tube. Hittorf inferred that there are straight rays emitted from the cathode and that the phosphorescence was caused by the rays striking the tube walls. In 1876, the German physicist Eugen Goldstein showed that the rays were emitted perpendicular to the cathode surface, which distinguished between the rays that were emitted from the cathode and the incandescent light. Goldstein dubbed the rays cathode rays. Decades of experimental and theoretical research involving cathode rays were important in J. J. Thomson's eventual discovery of electrons.
During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode-ray tube to have a high vacuum inside. He then showed in 1874 that the cathode rays can turn a small paddle wheel when placed in their path. Therefore, he concluded that the rays carried momentum. Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged. In 1879, he proposed that these properties could be explained by regarding cathode rays as composed of negatively charged gaseous molecules in a fourth state of matter in which the mean free path of the particles is so long that collisions may be ignored.
The German-born British physicist Arthur Schuster expanded upon Crookes's experiments by placing metal plates parallel to the cathode rays and applying an electric potential between the plates. The field deflected the rays toward the positively charged plate, providing further evidence that the rays carried negative charge. By measuring the amount of deflection for a given electric and magnetic field, in 1890 Schuster was able to estimate the charge-to-mass ratio of the ray components. However, this produced a value that was more than a thousand times greater than what was expected, so little credence was given to his calculations at the time. This is because it was assumed that the charge carriers were much heavier hydrogen or nitrogen atoms. Schuster's estimates would subsequently turn out to be largely correct.
In 1892 Hendrik Lorentz suggested that the mass of these particles (electrons) could be a consequence of their electric charge.
While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest by scientists, including the New Zealand physicist Ernest Rutherford who discovered they emitted particles. He designated these particles alpha and beta, on the basis of their ability to penetrate matter. In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electric field, and that their mass-to-charge ratio was the same as for cathode rays. This evidence strengthened the view that electrons existed as components of atoms.
In 1897, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A. Wilson, performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier. By 1899 he showed that their charge-to-mass ratio, "e"/"m", was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal. Thomson measured "m"/"e" for cathode ray "corpuscles", and made good estimates of the charge "e", leading to a value for the mass "m", finding a value 1400 times less massive than the least massive ion known: hydrogen. In the same year Emil Wiechert and Walter Kaufmann also calculated the "e"/"m" ratio but did not take the step of interpreting their results as showing a new particle, while J. J. Thomson would subsequently in 1899 give estimates for the electron charge and mass as well: "e" ~ and "m" ~
The name "electron" was adopted for these particles by the scientific community, mainly due to the advocation by G. F. FitzGerald, J. Larmor, and H. A. Lorentz. The term was originally coined by George Johnstone Stoney in 1891 as a tentative name for the basic unit of electrical charge (which had then yet to be discovered).
The electron's charge was more carefully measured by the American physicists Robert Millikan and Harvey Fletcher in their oil-drop experiment of 1909, the results of which were published in 1911. This experiment used an electric field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electric charge from as few as 1–150 ions with an error margin of less than 0.3%. Comparable experiments had been done earlier by Thomson's team, using clouds of charged water droplets generated by electrolysis, and in 1911 by Abram Ioffe, who independently obtained the same result as Millikan using charged microparticles of metals, then published his results in 1913. However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time.
Around the beginning of the twentieth century, it was found that under certain conditions a fast-moving charged particle caused a condensation of supersaturated water vapor along its path. In 1911, Charles Wilson used this principle to devise his cloud chamber so he could photograph the tracks of charged particles, such as fast-moving electrons.
Atomic theory.
By 1914, experiments by physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons. In 1913, Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with their energies determined by the angular momentum of the electron's orbit about the nucleus. The electrons could move between those states, or orbits, by the emission or absorption of photons of specific frequencies. By means of these quantized orbits, he accurately explained the spectral lines of the hydrogen atom. However, Bohr's model failed to account for the relative intensities of the spectral lines and it was unsuccessful in explaining the spectra of more complex atoms.
Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them. Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics. In 1919, the American chemist Irving Langmuir elaborated on Lewis's static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness". In turn, he divided the shells into a number of cells each of which contained one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table, which were known to largely repeat themselves according to the periodic law.
In 1924, Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained by a set of four parameters that defined every quantum energy state, as long as each state was occupied by no more than a single electron. This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle. The physical mechanism to explain the fourth parameter, which had two distinct possible values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, they suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment. This is analogous to the rotation of the Earth on its axis as it orbits the Sun. The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting.
Quantum mechanics.
In his 1924 dissertation "Recherches sur la théorie des quanta" (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter can be represented as a de Broglie wave in the manner of light. That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment. The wave-like nature of light is displayed, for example, when a beam of light is passed through parallel slits thereby creating interference patterns. In 1927, George Paget Thomson and Alexander Reid discovered that an interference effect was produced when a beam of electrons was passed through thin celluloid foils and later metal films, while the American physicists Clinton Davisson and Lester Germer observed the effect in the reflection of electrons from a crystal of nickel. Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident and is rarely mentioned.
De Broglie's prediction of a wave nature for electrons led Erwin Schrödinger to postulate a wave equation for electrons moving under the influence of the nucleus in the atom. In 1926, this equation, the Schrödinger equation, successfully described how electron waves propagated. Rather than yielding a solution that determined the location of an electron over time, this wave equation also could be used to predict the probability of finding an electron near a position, especially a position near where the electron was bound in space, for which the electron wave equations did not change in time. This approach led to a second formulation of quantum mechanics (the first by Heisenberg in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those that had been derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum. Once spin and the interaction between multiple electrons were describable, quantum mechanics made it possible to predict the configuration of electrons in atoms with atomic numbers greater than hydrogen.
In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron – the Dirac equation, consistent with relativity theory – by applying relativistic and symmetry considerations to the Hamiltonian formulation of the quantum mechanics of the electromagnetic field. In order to resolve some problems within his relativistic equation, Dirac developed in 1930 a model of the vacuum as an infinite sea of particles with negative energy, later dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron. This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons "negatrons" and using "electron" as a generic term to describe both the positively and negatively charged variants.
In 1947, Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference came to be called the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called anomalous magnetic dipole moment of the electron. This difference was later explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s.
Particle accelerators.
With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles. The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst. His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric. This radiation was caused by the acceleration of electrons through a magnetic field as they moved near the speed of light.
With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968. This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron. The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics.
Confinement of individual electrons.
Individual electrons can now be easily confined in ultra small ("L" = 20 nm, "W" = 20 nm) CMOS transistors operated at cryogenic temperature over a range of −269 °C (4 K) to about −258 °C (15 K). The electron wavefunction spreads in a semiconductor lattice and negligibly interacts with the valence band electrons, so it can be treated in the single particle formalism, by replacing its mass with the effective mass tensor.
Characteristics.
Classification.
In the Standard Model of particle physics, electrons belong to the group of subatomic particles called leptons, which are believed to be fundamental or elementary particles. Electrons have the lowest mass of any charged lepton (or electrically charged particle of any type) and belong to the first generation of fundamental particles. The second and third generation contain charged leptons, the muon and the tau, which are identical to the electron in charge, spin and interactions, but are more massive. Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. All members of the lepton group are fermions because they all have half-odd integer spin; the electron has spin 1/2.
Fundamental properties.
The invariant mass of an electron is approximately , or . Due to mass–energy equivalence, this corresponds to a rest energy of . The ratio between the mass of a proton and that of an electron is about 1836. Astronomical measurements show that the proton-to-electron mass ratio has held the same value, as is predicted by the Standard Model, for at least half the age of the universe.
Electrons have an electric charge of coulombs, which is used as a standard unit of charge for subatomic particles, and is also called the elementary charge. Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign. The electron is commonly symbolized by , and the positron is symbolized by .
The electron has an intrinsic angular momentum or spin of . This property is usually stated by referring to the electron as a spin-1/2 particle. For such particles the spin magnitude is , while the result of the measurement of a projection of the spin on any axis can only be ±. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis. It is approximately equal to one Bohr magneton, which is a physical constant that is equal to . The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity.
The electron has no known substructure. Nevertheless, in condensed matter physics, spin–charge separation can occur in some materials. In such cases, electrons 'split' into three independent particles, the spinon, the orbiton and the holon (or chargon). The electron can always be theoretically considered as a bound state of the three, with the spinon carrying the spin of the electron, the orbiton carrying the orbital degree of freedom and the chargon carrying the charge, but in certain conditions they can behave as independent quasiparticles.
The issue of the radius of the electron is a challenging problem of modern theoretical physics. The admission of the hypothesis of a finite radius of the electron is incompatible with the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity. Observation of a single electron in a Penning trap suggests the upper limit of the particle's radius to be 10−22 meters.
The upper bound of the electron radius of 10−18 meters can be derived using the uncertainty relation in energy. There "is" also a physical constant called the "classical electron radius", with the much larger value of , greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron.
There are elementary particles that spontaneously decay into less massive particles. An example is the muon, with a mean lifetime of seconds, which decays into an electron, a muon neutrino and an electron antineutrino. The electron, on the other hand, is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation. The experimental lower bound for the electron's mean lifetime is years, at a 90% confidence level.
Quantum properties.
As with all particles, electrons can act as waves. This is called the wave–particle duality and can be demonstrated using the double-slit experiment.
The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi ("ψ"). When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location—a probability density.
Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to the state of the system. The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, "ψ"("r"1, "r"2) = −"ψ"("r"2, "r"1), where the variables "r"1 and "r"2 correspond to the first and second electrons, respectively. Since the absolute value is not changed by a sign swap, this corresponds to equal probabilities. Bosons, such as the photon, have symmetric wave functions instead.
In the case of antisymmetry, solutions of the wave equation for interacting electrons result in a zero probability that each pair will occupy the same location or state. This is responsible for the Pauli exclusion principle, which precludes any two electrons from occupying the same quantum state. This principle explains many of the properties of electrons. For example, it causes groups of bound electrons to occupy different orbitals in an atom, rather than all overlapping each other in the same orbit.
Virtual particles.
In a simplified picture, which often tends to give the wrong idea but may serve to illustrate some aspects, every photon spends some time as a combination of a virtual electron plus its antiparticle, the virtual positron, which rapidly annihilate each other shortly thereafter. The combination of the energy variation needed to create these particles, and the time during which they exist, fall under the threshold of detectability expressed by the Heisenberg uncertainty relation, Δ"E" · Δ"t" ≥ "ħ". In effect, the energy needed to create these virtual particles, Δ"E", can be "borrowed" from the vacuum for a period of time, Δ"t", so that their product is no more than the reduced Planck constant, "ħ" ≈. Thus, for a virtual electron, Δ"t" is at most .
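As a rough numerical illustration (not from the original text), the sketch below estimates this time scale by taking Δ"E" to be the electron rest energy, which is an assumption of the example; the constants are approximate CODATA values:

```python
# Order-of-magnitude estimate of the lifetime of a virtual electron from the
# uncertainty relation dE * dt ~ hbar, taking dE to be the electron rest energy.
HBAR = 1.054571817e-34      # reduced Planck constant, J s (approximate)
M_E  = 9.1093837015e-31     # electron mass, kg (approximate)
C    = 2.99792458e8         # speed of light, m/s

delta_E = M_E * C**2        # electron rest energy, ~8.19e-14 J
delta_t = HBAR / delta_E    # ~1.3e-21 s
print(f"dE = {delta_E:.3e} J, dt ~ {delta_t:.3e} s")
```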
While an electron–positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity more than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron. This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator. Virtual particles cause a comparable shielding effect for the mass of the electron.
The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment). The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics.
The apparent paradox in classical physics of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons can heuristically be thought of as causing the electron to shift about in a jittery fashion (known as zitterbewegung), which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron. In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines. The Compton Wavelength shows that near elementary particles such as the electron, the uncertainty of the energy allows for the creation of virtual particles near the electron. This wavelength explains the "static" of virtual particles around elementary particles at a close distance.
Interaction.
An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force in nonrelativistic approximation is determined by Coulomb's inverse square law. When an electron is in motion, it generates a magnetic field. The Ampère–Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. This property of induction supplies the magnetic field that drives an electric motor. The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic).
When an electron is moving through a magnetic field, it is subject to the Lorentz force that acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation. The energy emission in turn causes a recoil of the electron, known as the Abraham–Lorentz–Dirac Force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself.
Photons mediate electromagnetic interactions between particles in quantum electrodynamics. An isolated electron at a constant velocity cannot emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. This exchange of virtual photons, for example, generates the Coulomb force. Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The deceleration of the electron results in the emission of Bremsstrahlung radiation.
An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift. The maximum magnitude of this wavelength shift is "h"/"m"e"c", which is known as the Compton wavelength. For an electron, it has a value of . When the wavelength of the light is long (for instance, the wavelength of the visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. Such interaction between the light and free electrons is called Thomson scattering or linear Thomson scattering.
The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine-structure constant. This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of attraction (or repulsion) at a separation of one Compton wavelength, and the rest energy of the charge. It is given by "α" ≈ , which is approximately equal to .
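Both quantities discussed above can be reproduced from tabulated constants. The following minimal Python sketch (an illustration, not part of the original text, using approximate CODATA values) computes the Compton wavelength "h"/"m"e"c" and the fine-structure constant:

```python
from math import pi

# Approximate CODATA values
H    = 6.62607015e-34      # Planck constant, J s
HBAR = 1.054571817e-34     # reduced Planck constant, J s
M_E  = 9.1093837015e-31    # electron mass, kg
C    = 2.99792458e8        # speed of light, m/s
E    = 1.602176634e-19     # elementary charge, C
EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m

compton_wavelength = H / (M_E * C)            # ~2.43e-12 m
alpha = E**2 / (4 * pi * EPS0 * HBAR * C)     # ~1/137
print(f"Compton wavelength: {compton_wavelength:.4e} m")
print(f"fine-structure constant: {alpha:.6f} (1/alpha = {1/alpha:.2f})")
```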
When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV. On the other hand, a high-energy photon can transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus.
In the theory of electroweak interaction, the left-handed component of electron's wavefunction forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a W boson and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. Both the electron and electron neutrino can undergo a neutral current interaction via a Z boson exchange, and this is responsible for neutrino–electron elastic scattering.
Atoms and molecules.
An electron can be "bound" to the nucleus of an atom by the attractive Coulomb force. A system of one or more electrons bound to a nucleus is called an atom. If the number of electrons is different from the nucleus's electrical charge, such an atom is called an ion. The wave-like behavior of a bound electron is described by a function called an atomic orbital. Each orbital has its own set of quantum numbers such as energy, angular momentum and projection of angular momentum, and only a discrete set of these orbitals exist around the nucleus. According to the Pauli exclusion principle each orbital can be occupied by up to two electrons, which must differ in their spin quantum number.
Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential. Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect. To escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron.
The orbital angular momentum of electrons is quantized. Because the electron is charged, it produces an orbital magnetic moment that is proportional to the angular momentum. The net magnetic moment of an atom is equal to the vector sum of orbital and spin magnetic moments of all electrons and the nucleus. The magnetic moment of the nucleus is negligible compared with that of the electrons. The magnetic moments of the electrons that occupy the same orbital, called paired electrons, cancel each other out.
The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics. The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules. Within a molecule, electrons move under the influence of several nuclei, and occupy molecular orbitals; much as they can occupy atomic orbitals in isolated atoms. A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distribution of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. By contrast, in non-bonded pairs electrons are distributed in a large volume around nuclei.
Conductivity.
If a body has more or fewer electrons than are required to balance the positive charge of the nuclei, then that object has a net electric charge. When there is an excess of electrons, the object is said to be negatively charged. When there are fewer electrons than the number of protons in nuclei, the object is said to be positively charged. When the number of electrons and the number of protons are equal, their charges cancel each other and the object is said to be electrically neutral. A macroscopic body can develop an electric charge through rubbing, by the triboelectric effect.
Independent electrons moving in vacuum are termed "free" electrons. Electrons in metals also behave as if they were free. In reality the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasiparticles, which have the same electrical charge, spin, and magnetic moment as real electrons but might have a different mass. When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell's equations.
At a given temperature, each material has an electrical conductivity that determines the value of electric current when an electric potential is applied. Examples of good conductors include metals such as copper and gold, whereas glass and Teflon are poor conductors. In any dielectric material, the electrons remain bound to their respective atoms and the material behaves as an insulator. Most semiconductors have a variable level of conductivity that lies between the extremes of conduction and insulation. On the other hand, metals have an electronic band structure containing partially filled electronic bands. The presence of such bands allows electrons in metals to behave as if they were free or delocalized electrons. These electrons are not associated with specific atoms, so when an electric field is applied, they are free to move like a gas (called Fermi gas) through the material much like free electrons.
Because of collisions between electrons and atoms, the drift velocity of electrons in a conductor is on the order of millimeters per second. However, the speed at which a change of current at one point in the material causes changes in currents in other parts of the material, the velocity of propagation, is typically about 75% of light speed. This occurs because electrical signals propagate as a wave, with the velocity dependent on the dielectric constant of the material.
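As an order-of-magnitude illustration of the drift velocity (not part of the original text), the sketch below evaluates "v" = "I"/("nAe") for a hypothetical copper wire; the current, the cross-section and the approximate free-electron density of copper are assumed example values:

```python
# Illustrative drift-velocity estimate v = I / (n * A * e) for a copper wire.
E_CHARGE = 1.602176634e-19   # elementary charge, C
N_COPPER = 8.5e28            # free electrons per m^3 in copper (approximate)

I = 10.0                     # current, A (example value)
A = 0.5e-6                   # cross-sectional area, m^2 (0.5 mm^2, example value)

v_drift = I / (N_COPPER * A * E_CHARGE)
print(f"drift velocity ~ {v_drift*1e3:.2f} mm/s")   # on the order of mm/s
```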
Metals make relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann–Franz law, which states that the ratio of thermal conductivity to the electrical conductivity is proportional to the temperature. The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for electric current.
When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electric current, in a process known as superconductivity. In BCS theory, pairs of electrons called Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance. (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.) However, the mechanism by which higher temperature superconductors operate remains uncertain.
Electrons inside conducting solids, which are quasi-particles themselves, when tightly confined at temperatures close to absolute zero, behave as though they had split into three other quasiparticles: spinons, orbitons and holons. The first carries the spin and magnetic moment, the second carries the orbital location and the third the electrical charge.
Motion and energy.
According to Einstein's theory of special relativity, as an electron's speed approaches the speed of light, from an observer's point of view its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The speed of an electron can approach, but never reach, the speed of light in vacuum, "c". However, when relativistic electrons—that is, electrons moving at a speed close to "c"—are injected into a dielectric medium such as water, where the local speed of light is significantly less than "c", the electrons temporarily travel faster than light in the medium. As they interact with the medium, they generate a faint light called Cherenkov radiation.
The effects of special relativity are based on a quantity known as the Lorentz factor, defined as formula_0 where "v" is the speed of the particle. The kinetic energy "K"e of an electron moving with velocity "v" is:
formula_1
where "m"e is the mass of electron. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV.
Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by "λ"e = "h"/"p" where "h" is the Planck constant and "p" is the momentum. For the 51 GeV electron above, the wavelength is about , small enough to explore structures well below the size of an atomic nucleus.
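A back-of-envelope check of these relations (not from the original text) for an electron of roughly 51 GeV total energy, the figure quoted above for the Stanford linear accelerator, using approximate constants:

```python
# Lorentz factor, kinetic energy and de Broglie wavelength of a ~51 GeV electron.
H        = 6.62607015e-34    # Planck constant, J s
C        = 2.99792458e8      # speed of light, m/s
M_E      = 9.1093837015e-31  # electron mass, kg
E_CHARGE = 1.602176634e-19   # elementary charge, C (also joules per eV)

E_total = 51e9 * E_CHARGE                      # ~51 GeV in joules
rest_energy = M_E * C**2                       # ~0.511 MeV
gamma = E_total / rest_energy                  # ~1e5
K = (gamma - 1) * rest_energy                  # kinetic energy (essentially E_total)
p = (E_total**2 - rest_energy**2)**0.5 / C     # relativistic momentum
wavelength = H / p                             # de Broglie wavelength, ~2.4e-17 m

print(f"gamma ~ {gamma:.3e}, K ~ {K/E_CHARGE/1e9:.1f} GeV, lambda ~ {wavelength:.2e} m")
```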
Formation.
The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe. For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron–electron pairs annihilated each other and emitted energetic photons:
An equilibrium between electrons, positrons and photons was maintained during this phase of the evolution of the Universe. After 15 seconds had passed, however, the temperature of the universe dropped below the threshold where electron-positron formation could occur. Most of the surviving electrons and positrons annihilated each other, releasing gamma radiation that briefly reheated the universe.
For reasons that remain uncertain, during the annihilation process there was an excess in the number of particles over antiparticles. Hence, about one electron for every billion electron–positron pairs survived. This excess matched the excess of protons over antiprotons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe. The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes. Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process,
For about the next –, the excess electrons remained too energetic to bind with atomic nuclei. What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation.
Roughly one million years after the Big Bang, the first generation of stars began to form. Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can result in the synthesis of radioactive isotopes. Selected isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus. An example is the cobalt-60 (60Co) isotope, which decays to form nickel-60 (60Ni).
At the end of its lifetime, a star with more than about 20 solar masses can undergo gravitational collapse to form a black hole. According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, even electromagnetic radiation, from escaping past the Schwarzschild radius. However, quantum mechanical effects are believed to potentially allow the emission of Hawking radiation at this distance. Electrons (and positrons) are thought to be created at the event horizon of these stellar remnants.
When a pair of virtual particles (such as an electron and positron) is created in the vicinity of the event horizon, random spatial positioning might result in one of them to appear on the exterior; this process is called quantum tunnelling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space. In exchange, the other member of the pair is given negative energy, which results in a net loss of mass–energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes.
Cosmic rays are particles traveling through space with high energies. Energy events as high as have been recorded. When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions. More than half of the cosmic radiation observed from the Earth's surface consists of muons. The particle called a muon is a lepton produced in the upper atmosphere by the decay of a pion.
A muon, in turn, can decay to form an electron or positron.
Observation.
Remote observation of electrons requires detection of their radiated energy. For example, in high-energy environments such as the corona of a star, free electrons form a plasma that radiates energy due to Bremsstrahlung radiation. Electron gas can undergo plasma oscillation, which is waves caused by synchronized variations in electron density, and these produce energy emissions that can be detected by using radio telescopes.
The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it absorbs or emits photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct dark lines appear in the spectrum of transmitted radiation in places where the corresponding frequency is absorbed by the atom's electrons. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series. When detected, spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined.
In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge. The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties. For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months. The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant.
The first video images of an electron's energy distribution were captured by a team at Lund University in Sweden, February 2008. The scientists used extremely short flashes of light, called attosecond pulses, which allowed an electron's motion to be observed for the first time.
The distribution of the electrons in solid materials can be visualized by angle-resolved photoemission spectroscopy (ARPES). This technique employs the photoelectric effect to measure the reciprocal space—a mathematical representation of periodic structures that is used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material.
Plasma applications.
Particle beams.
Electron beams are used in welding. They allow energy densities up to across a narrow focus diameter of 0.1–1.3 mm and usually require no filler material. This welding technique must be performed in a vacuum to prevent the electrons from interacting with the gas before reaching their target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding.
Electron-beam lithography (EBL) is a method of etching semiconductors at resolutions smaller than a micrometer. This technique is limited by high costs, slow performance, the need to operate the beam in the vacuum and the tendency of the electrons to scatter in solids. The last problem limits the resolution to about 10 nm. For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits.
Electron beam processing is used to irradiate materials in order to change their physical properties or to sterilize medical and food products. Under intensive irradiation, electron beams fluidise or quasi-melt glasses without a significant increase in temperature: for example, intensive electron radiation causes a decrease in viscosity of many orders of magnitude and a stepwise decrease in its activation energy.
Linear particle accelerators generate electron beams for treatment of superficial tumors in radiation therapy. Electron therapy can treat such skin lesions as basal-cell carcinomas because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5–20 MeV. An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays.
Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. These particles emit synchrotron radiation as they pass through magnetic fields. The dependency of the intensity of this radiation upon spin polarizes the electron beam—a process known as the Sokolov–Ternov effect. Polarized electron beams can be useful for various experiments. Synchrotron radiation can also cool the electron beams to reduce the momentum spread of the particles. Once the particles have been accelerated to the required energies, electron and positron beams are collided; particle detectors observe the resulting energy emissions, which are studied in particle physics.
Imaging.
Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons and then observing the resulting diffraction patterns to determine the structure of the material. The required energy of the electrons is typically in the range 20–200 eV. The reflection high-energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8–20 keV and the angle of incidence is 1–4°.
The electron microscope directs a focused beam of electrons at a specimen. Some electrons change their properties, such as movement direction, angle, and relative phase and energy as the beam interacts with the material. Microscopists can record these changes in the electron beam to produce atomically resolved images of the material. In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm. By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential. The Transmission Electron Aberration-Corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms. This capability makes the electron microscope a useful laboratory instrument for high resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain.
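The quoted wavelength can be checked with the relativistically corrected de Broglie relation for electrons accelerated through a potential "V". A minimal Python sketch (illustrative, not from the original text, using approximate constants):

```python
from math import sqrt

# De Broglie wavelength of electrons accelerated through a potential V, with
# the relativistic correction; reproduces the ~0.0037 nm figure for 100,000 V.
H        = 6.62607015e-34    # Planck constant, J s
M_E      = 9.1093837015e-31  # electron mass, kg
C        = 2.99792458e8      # speed of light, m/s
E_CHARGE = 1.602176634e-19   # elementary charge, C

def electron_wavelength(V):
    eV = E_CHARGE * V                                      # kinetic energy, J
    p = sqrt(2 * M_E * eV * (1 + eV / (2 * M_E * C**2)))   # relativistic momentum
    return H / p

print(f"{electron_wavelength(100e3)*1e9:.4f} nm")          # ~0.0037 nm
```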
Two main types of electron microscopes exist: transmission and scanning. Transmission electron microscopes function like overhead projectors, with a beam of electrons passing through a slice of material then being projected by lenses onto a photographic slide or a charge-coupled device. Scanning electron microscopes raster a finely focused electron beam, as in a TV set, across the studied sample to produce the image. Magnifications range from 100× to 1,000,000× or higher for both microscope types. The scanning tunneling microscope uses quantum tunneling of electrons from a sharp metal tip into the studied material and can produce atomically resolved images of its surface.
Other applications.
In the free-electron laser (FEL), a relativistic electron beam passes through a pair of undulators that contain arrays of dipole magnets whose fields point in alternating directions. The electrons emit synchrotron radiation that coherently interacts with the same electrons to strongly amplify the radiation field at the resonance frequency. FEL can emit coherent, high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices are used in manufacturing, communication, and medical applications, such as soft tissue surgery.
Electrons are important in cathode-ray tubes, which have been extensively used as display devices in laboratory instruments, computer monitors and television sets. In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse. Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology. However, they have been largely supplanted by solid-state devices such as the transistor.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\scriptstyle\\gamma=1/ \\sqrt{ 1-{v^2}/{c^2} }"
},
{
"math_id": 1,
"text": "\\displaystyle K_{\\mathrm{e}} = (\\gamma - 1)m_{\\mathrm{e}} c^2,"
}
] | https://en.wikipedia.org/wiki?curid=9476 |
9477975 | Completion of a ring | In abstract algebra, a completion is any of several related functors on rings and modules that result in complete topological rings and modules. Completion is similar to localization, and together they are among the most basic tools in analysing commutative rings. Complete commutative rings have a simpler structure than general ones, and Hensel's lemma applies to them. In algebraic geometry, a completion of a ring of functions "R" on a space "X" concentrates on a formal neighborhood of a point of "X": heuristically, this is a neighborhood so small that "all" Taylor series centered at the point are convergent. An algebraic completion is constructed in a manner analogous to completion of a metric space with Cauchy sequences, and agrees with it in the case when "R" has a metric given by a non-Archimedean absolute value.
General construction.
Suppose that "E" is an abelian group with a descending filtration
formula_0
of subgroups. One then defines the completion (with respect to the filtration) as the inverse limit:
formula_1
This is again an abelian group. Usually "E" is an "additive" abelian group. If "E" has additional algebraic structure compatible with the filtration, for instance "E" is a filtered ring, a filtered module, or a filtered vector space, then its completion is again an object with the same structure that is complete in the topology determined by the filtration. This construction may be applied both to commutative and noncommutative rings. As may be expected, when the intersection of the formula_2 equals zero, this produces a complete topological ring.
Krull topology.
In commutative algebra, the filtration on a commutative ring "R" by the powers of a proper ideal "I" determines the Krull (after Wolfgang Krull) or "I"-adic topology on "R". The case of a "maximal" ideal formula_3 is especially important, for example the distinguished maximal ideal of a valuation ring. The basis of open neighbourhoods of 0 in "R" is given by the powers "I""n", which are "nested" and form a descending filtration on "R":
formula_4
(Open neighborhoods of any "r" ∈ "R" are given by cosets "r" + "I""n".) The ("I"-adic) completion is the inverse limit of the factor rings,
formula_5
pronounced "R I hat". The kernel of the canonical map π from the ring to its completion is the intersection of the powers of "I". Thus π is injective if and only if this intersection reduces to the zero element of the ring; by the Krull intersection theorem, this is the case for any commutative Noetherian ring which is an integral domain or a local ring.
There is a related topology on "R"-modules, also called Krull or "I"-adic topology. A basis of open neighborhoods of a module "M" is given by the sets of the form
formula_6
The "I"-adic completion of an "R"-module "M" is the inverse limit of the quotients
formula_7
This procedure converts any module over "R" into a topological module over formula_8; the result is complete in the "I"-adic topology provided the ideal "I" is finitely generated (which is automatic when "R" is Noetherian).
If "R" is Noetherian and the ideal is finitely generated, say formula_14 then the "I"-adic completion can be described as a quotient of a formal power series ring via the surjective ring homomorphism
formula_16
The kernel is the ideal formula_17
Examples.
Completions can also be used to analyze the local structure of singularities of a scheme. For example, the affine schemes associated to formula_18 and the nodal cubic plane curve formula_19 have similar-looking singularities at the origin when viewing their graphs (both look like a plus sign). Notice that in the second case, any Zariski neighborhood of the origin is still an irreducible curve. If we use completions, then we are looking at a "small enough" neighborhood where the node has two components. Taking the localizations of these rings along the ideal formula_20 and completing gives formula_21 and formula_22 respectively, where formula_23 is the formal square root of formula_24 in formula_25 More explicitly, this square root is given by the power series:
formula_26
Since both rings are given by the intersection of two ideals generated by a homogeneous degree 1 polynomial, we can see algebraically that the singularities "look" the same. This is because such a scheme is the union of two non-equal linear subspaces of the affine plane.
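The factorization above can be checked symbolically to the order of truncation. The following sketch assumes the SymPy library is available (an assumption of this illustration): it expands the formal square root as a truncated power series and verifies that the product (y − u)(y + u) recovers the defining equation of the nodal cubic up to higher-order terms.

```python
from sympy import symbols, sqrt, expand

x, y = symbols('x y')

# Truncated expansion of the formal square root u = x*sqrt(1+x) in C[[x]]
u = (x * sqrt(1 + x)).series(x, 0, 6).removeO()
print(u)  # x + x**2/2 - x**3/8 + x**4/16 - ...

# (y - u)(y + u) should agree with y**2 - x**2*(1 + x) up to the truncation order
diff = expand((y - u) * (y + u) - (y**2 - x**2 * (1 + x)))
print(diff)  # only terms of order x**7 and higher remain
```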
Moreover, if "M" and "N" are two modules over the same topological ring "R" and "f": "M" → "N" is a continuous module map then "f" uniquely extends to the map of the completions:
formula_28
where formula_29 are modules over formula_30
If "R" is Noetherian and "M" is a finitely generated "R"-module, then the completion of "M" can be identified with the base change:
formula_31
Together with the previous property, this implies that the functor of completion on finitely generated "R"-modules is exact: it preserves short exact sequences. In particular, taking quotients of rings commutes with completion, meaning that for any quotient "R"-algebra formula_32, there is an isomorphism
formula_33
Cohen structure theorem (equicharacteristic case): if "R" is a complete Noetherian local ring with maximal ideal formula_34 and residue field "K", and "R" contains a field, then
formula_35
for some "n" and some ideal "I" (Eisenbud, Theorem 7.7).
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "E = F^0 E \\supset F^1 E \\supset F^2 E \\supset \\cdots \\,"
},
{
"math_id": 1,
"text": "\\widehat{E} = \\varprojlim (E/F^n E)=\\left\\{\\left.(\\overline{a_n})_{n\\geq0} \\in \\prod_{n\\geq0}(E/F^nE) \\;\\right|\\; a_i \\equiv a_j\\pmod{F^iE} \\text{ for all } i \\leq j\\right\\}. \\,"
},
{
"math_id": 2,
"text": "F^i E"
},
{
"math_id": 3,
"text": "I=\\mathfrak{m}"
},
{
"math_id": 4,
"text": " F^0 R = R\\supset I\\supset I^2\\supset\\cdots, \\quad F^n R = I^n."
},
{
"math_id": 5,
"text": " \\widehat{R}_I=\\varprojlim (R/I^n) "
},
{
"math_id": 6,
"text": "x + I^n M \\quad\\text{for }x \\in M."
},
{
"math_id": 7,
"text": "\\widehat{M}_I=\\varprojlim (M/I^n M)."
},
{
"math_id": 8,
"text": "\\widehat{R}_I"
},
{
"math_id": 9,
"text": "\\Z_p"
},
{
"math_id": 10,
"text": "\\Z"
},
{
"math_id": 11,
"text": "\\mathfrak{m}=(x_1,\\ldots,x_n)"
},
{
"math_id": 12,
"text": "\\widehat{R}_{\\mathfrak{m}}"
},
{
"math_id": 13,
"text": "R"
},
{
"math_id": 14,
"text": "I = (f_1,\\ldots, f_n),"
},
{
"math_id": 15,
"text": "I"
},
{
"math_id": 16,
"text": "\\begin{cases} R[[x_1, \\ldots, x_n]] \\to \\widehat{R}_I \\\\ x_i \\mapsto f_i \\end{cases}"
},
{
"math_id": 17,
"text": "(x_1 - f_1, \\ldots, x_n - f_n)."
},
{
"math_id": 18,
"text": "\\Complex[x,y]/(xy)"
},
{
"math_id": 19,
"text": "\\Complex[x,y]/(y^2 - x^2(1+x))"
},
{
"math_id": 20,
"text": "(x,y)"
},
{
"math_id": 21,
"text": "\\Complex[[x,y]]/(xy)"
},
{
"math_id": 22,
"text": "\\Complex[[x,y]]/((y+u)(y-u))"
},
{
"math_id": 23,
"text": "u"
},
{
"math_id": 24,
"text": "x^2(1+x)"
},
{
"math_id": 25,
"text": "\\Complex[[x,y]]."
},
{
"math_id": 26,
"text": "u = x\\sqrt{1+x} = \\sum_{n=0}^\\infty \\frac{(-1)^n(2n)!}{(1-2n)(n!)^2(4^n)}x^{n+1}."
},
{
"math_id": 27,
"text": "\\widehat{f}: \\widehat{R}\\to\\widehat{S}."
},
{
"math_id": 28,
"text": "\\widehat{f}: \\widehat{M}\\to\\widehat{N},"
},
{
"math_id": 29,
"text": "\\widehat{M},\\widehat{N}"
},
{
"math_id": 30,
"text": "\\widehat{R}."
},
{
"math_id": 31,
"text": " \\widehat{M}=M\\otimes_R \\widehat{R}."
},
{
"math_id": 32,
"text": "R / I"
},
{
"math_id": 33,
"text": "\\widehat{R / I} \\cong \\widehat R / \\widehat I."
},
{
"math_id": 34,
"text": "\\mathfrak{m}"
},
{
"math_id": 35,
"text": "R\\simeq K[[x_1,\\ldots,x_n]]/I"
}
] | https://en.wikipedia.org/wiki?curid=9477975 |
9478630 | Integral element | In commutative algebra, an element "b" of a commutative ring "B" is said to be integral over a subring "A" of "B" if "b" is a root of some monic polynomial over "A".
If "A", "B" are fields, then the notions of "integral over" and of an "integral extension" are precisely "algebraic over" and "algebraic extensions" in field theory (since the root of any polynomial is the root of a monic polynomial).
The case of greatest interest in number theory is that of complex numbers integral over Z (e.g., formula_0 or formula_1); in this context, the integral elements are usually called algebraic integers. The algebraic integers in a finite extension field "k" of the rationals Q form a subring of "k", called the ring of integers of "k", a central object of study in algebraic number theory.
In this article, the term "ring" will be understood to mean "commutative ring" with a multiplicative identity.
Definition.
Let formula_2 be a ring and let formula_3 be a subring of formula_4
An element formula_5 of formula_2 is said to be integral over formula_6 if for some formula_7 there exists formula_8 in formula_6 such that
formula_9
The set of elements of formula_2 that are integral over formula_6 is called the integral closure of formula_6 in formula_4 The integral closure of any subring formula_6 in formula_2 is, itself, a subring of formula_2 and contains formula_10 If every element of formula_2 is integral over formula_11 then we say that formula_2 is integral over formula_6, or equivalently formula_2 is an integral extension of formula_10
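As an illustration of the definition with "A" = Z and "B" the complex numbers, the following sketch (assuming the SymPy library; the test elements are chosen here for illustration) computes minimal polynomials and reads off integrality from whether they are monic with integer coefficients.

```python
from sympy import minimal_polynomial, sqrt, I, Rational, symbols

x = symbols('x')

# An element of C is integral over Z exactly when its minimal polynomial over Q,
# written with coprime integer coefficients, is monic.
for b in (sqrt(2), 1 + I, Rational(1, 2)):
    print(b, '->', minimal_polynomial(b, x))

# sqrt(2) -> x**2 - 2        (monic: integral over Z)
# 1 + I   -> x**2 - 2*x + 2  (monic: integral over Z)
# 1/2     -> 2*x - 1         (not monic: not integral over Z)
```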
Examples.
Integral closure in algebraic number theory.
There are many examples of integral closure which can be found in algebraic number theory since it is fundamental for defining the ring of integers for an algebraic field extension formula_12 (or formula_13).
Integral closure of integers in rationals.
Integers are the only elements of Q that are integral over Z. In other words, Z is the integral closure of Z in Q.
Quadratic extensions.
The Gaussian integers are the complex numbers of the form formula_14, and are integral over Z. formula_15 is then the integral closure of Z in formula_16. Typically this ring is denoted formula_17.
The integral closure of Z in formula_18 is the ring
formula_19
This example and the previous one are examples of quadratic integers. The integral closure of a quadratic extension formula_20 can be found by constructing the minimal polynomial of an arbitrary element formula_21 and finding a number-theoretic criterion for the polynomial to have integral coefficients. This analysis can be found in the quadratic extensions article.
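The criterion can be seen in a small computation, sketched below under the assumption that SymPy is available: for "d" = 5 (where "d" ≡ 1 mod 4) the element (1 + √"d")/2 has a monic integer minimal polynomial and is therefore integral over Z, while for "d" = 3 it does not.

```python
from sympy import minimal_polynomial, sqrt, symbols

x = symbols('x')

# Check whether (1 + sqrt(d))/2 is integral over Z by inspecting its minimal polynomial
for d in (5, 3):
    print(d, '->', minimal_polynomial((1 + sqrt(d)) / 2, x))

# 5 -> x**2 - x - 1       (monic: (1 + sqrt(5))/2 is an algebraic integer)
# 3 -> 2*x**2 - 2*x - 1   (not monic: (1 + sqrt(3))/2 is not)
```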
Roots of unity.
Let ζ be a root of unity. Then the integral closure of Z in the cyclotomic field Q(ζ) is Z[ζ]. This can be found by using the minimal polynomial and using Eisenstein's criterion.
Ring of algebraic integers.
The integral closure of Z in the field of complex numbers C, or the algebraic closure formula_22 is called the "ring of algebraic integers".
Other.
The roots of unity, nilpotent elements and idempotent elements in any ring are integral over Z.
Integral closure in algebraic geometry.
In geometry, integral closure is closely related with normalization and normal schemes. It is the first step in resolution of singularities since it gives a process for resolving singularities of codimension 1.
For example, if "X" is a normal projective variety, the integral closure of its homogeneous coordinate ring (for a given projective embedding) is the ring of sections
formula_29
Equivalent definitions.
Let "B" be a ring, and let "A" be a subring of "B". Given an element "b" in "B", the following conditions are equivalent:
(i) "b" is integral over "A";
(ii) the subring "A"["b"] of "B" generated by "A" and "b" is a finitely generated "A"-module;
(iii) there exists a subring "C" of "B" containing "A"["b"] and which is a finitely generated "A"-module;
(iv) there exists a faithful "A"["b"]-module "M" such that "M" is finitely generated as an "A"-module.
The usual proof of this uses the following variant of the Cayley–Hamilton theorem on determinants:
Theorem Let "u" be an endomorphism of an "A"-module "M" generated by "n" elements and "I" an ideal of "A" such that formula_34. Then there is a relation:
formula_35
This theorem (with "I" = "A" and "u" multiplication by "b") gives (iv) ⇒ (i) and the rest is easy. Coincidentally, Nakayama's lemma is also an immediate consequence of this theorem.
Elementary properties.
Integral closure forms a ring.
It follows from the above four equivalent statements that the set of elements of formula_2 that are integral over formula_6 forms a subring of "formula_2" containing formula_6. (Proof: If "x", "y" are elements of "formula_2" that are integral over formula_6, then formula_36 are integral over formula_6 since they stabilize formula_37, which is a finitely generated module over formula_6 and is annihilated only by zero.) This ring is called the integral closure of formula_6 in formula_2.
Transitivity of integrality.
Another consequence of the above equivalence is that "integrality" is transitive, in the following sense. Let formula_38 be a ring containing formula_2 and formula_39. If formula_40 is integral over "formula_2" and "formula_2" integral over formula_6, then formula_40 is integral over formula_6. In particular, if formula_38 is itself integral over "formula_2" and "formula_2" is integral over formula_6, then formula_38 is also integral over formula_6.
Integral closed in fraction field.
If formula_6 happens to be the integral closure of formula_6 in "formula_2", then "A" is said to be integrally closed in "formula_2". If formula_2 is the total ring of fractions of formula_6, (e.g., the field of fractions when formula_6 is an integral domain), then one sometimes drops the qualification "in formula_2" and simply says "integral closure of formula_6" and "formula_6 is integrally closed." For example, the ring of integers formula_41 is integrally closed in the field formula_42.
Transitivity of integral closure with integrally closed domains.
Let "A" be an integral domain with the field of fractions "K" and "A' " the integral closure of "A" in an algebraic field extension "L" of "K". Then the field of fractions of "A' " is "L". In particular, "A' " is an integrally closed domain.
Transitivity in algebraic number theory.
This situation is applicable in algebraic number theory when relating the ring of integers and a field extension. In particular, given a field extension formula_43 the integral closure of formula_41 in formula_44 is the ring of integers formula_45.
Remarks.
Note that transitivity of integrality above implies that if formula_2 is integral over formula_6, then formula_2 is a union (equivalently an inductive limit) of subrings that are finitely generated formula_6-modules.
If formula_6 is noetherian, transitivity of integrality can be weakened to the statement:
There exists a finitely generated formula_6-submodule of formula_2 that contains formula_46.
Relation with finiteness conditions.
Finally, the assumption that formula_6 be a subring of formula_2 can be modified a bit. If formula_47 is a ring homomorphism, then one says formula_48 is integral if formula_2 is integral over formula_49. In the same way one says formula_48 is finite (formula_2 finitely generated formula_6-module) or of finite type (formula_2 finitely generated formula_6-algebra). In this viewpoint, one has that
formula_48 is finite if and only if formula_48 is integral and of finite type.
Or more explicitly,
formula_2 is a finitely generated formula_6-module if and only if formula_2 is generated as an formula_6-algebra by a finite number of elements integral over formula_6.
Integral extensions.
Cohen–Seidenberg theorems.
An integral extension "A" ⊆ "B" has the going-up property, the lying over property, and the incomparability property (Cohen–Seidenberg theorems). Explicitly, given a chain of prime ideals formula_50 in "A" there exists a formula_51 in "B" with formula_52 (going-up and lying over) and two distinct prime ideals with inclusion relation cannot contract to the same prime ideal (incomparability). In particular, the Krull dimensions of "A" and "B" are the same. Furthermore, if "A" is an integrally closed domain, then the going-down holds (see below).
In general, the going-up implies the lying-over. Thus, below, "going-up" is used to mean both "going-up" and "lying-over".
When "A", "B" are domains such that "B" is integral over "A", "A" is a field if and only if "B" is a field. As a corollary, one has: given a prime ideal formula_53 of "B", formula_53 is a maximal ideal of "B" if and only if formula_54 is a maximal ideal of "A". Another corollary: if "L"/"K" is an algebraic extension, then any subring of "L" containing "K" is a field.
Applications.
Let "B" be a ring that is integral over a subring "A" and "k" an algebraically closed field. If formula_55 is a homomorphism, then "f" extends to a homomorphism "B" → "k". This follows from the going-up.
Geometric interpretation of going-up.
Let formula_56 be an integral extension of rings. Then the induced map
formula_57
is a closed map; in fact, formula_58 for any ideal "I" and formula_59 is surjective if "f" is injective. This is a geometric interpretation of the going-up.
Geometric interpretation of integral extensions.
Let "B" be a ring and "A" a subring that is a noetherian integrally closed domain (i.e., formula_60 is a normal scheme). If "B" is integral over "A", then formula_61 is submersive; i.e., the topology of formula_60 is the quotient topology. The proof uses the notion of constructible sets. (See also: Torsor (algebraic geometry).)
Integrality, base-change, universally-closed, and geometry.
If formula_2 is integral over formula_6, then formula_62 is integral over "R" for any "A"-algebra "R". In particular, formula_63 is closed; i.e., the integral extension induces a "universally closed" map. This leads to a geometric characterization of integral extension. Namely, let "B" be a ring with only finitely many minimal prime ideals (e.g., integral domain or noetherian ring). Then "B" is integral over a (subring) "A" if and only if formula_63 is closed for any "A"-algebra "R". In particular, every proper map is universally closed.
Proposition. Let "A" be an integrally closed domain with the field of fractions "K", "L" a finite normal extension of "K", "B" the integral closure of "A" in "L". Then the group formula_64 acts transitively on each fiber of formula_61.
Galois actions on integral extensions of integrally closed domains.
Proof. Suppose formula_65 for every formula_66 in "G". Then, by prime avoidance, there is an element "x" in formula_67 such that formula_68 for every formula_66. "G" fixes the element formula_69 and thus "y" is purely inseparable over "K". Then some power formula_70 belongs to "K"; since "A" is integrally closed we have: formula_71 Thus, we found formula_70 is in formula_72 but not in formula_73; i.e., formula_74.
Application to algebraic number theory.
The Galois group formula_75 then acts on all of the prime ideals formula_76 lying over a fixed prime ideal formula_77. That is, if
formula_78
then there is a Galois action on the set formula_79. This is called the Splitting of prime ideals in Galois extensions.
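A concrete instance, sketched below under the assumption that SymPy is available, is the extension Q("i")/Q with rings of integers Z["i"] over Z: by the classical Dedekind–Kummer correspondence, the splitting of a rational prime "p" in Z["i"] mirrors the factorization of the minimal polynomial of "i" modulo "p", and complex conjugation (the nontrivial Galois element) permutes the primes above "p".

```python
from sympy import factor, symbols

x = symbols('x')

# The splitting of a rational prime p in Z[i] mirrors how x**2 + 1 factors mod p.
for p in (5, 13, 7, 11):
    print(p, '->', factor(x**2 + 1, modulus=p))

# 5  -> (x - 2)*(x + 2)   two factors: 5 = (2 + i)(2 - i) splits; conjugation swaps the two primes
# 13 -> (x - 5)*(x + 5)   13 = (3 + 2i)(3 - 2i) splits
# 7  -> x**2 + 1          irreducible: 7 remains prime (inert) in Z[i]
# 11 -> x**2 + 1          inert as well
```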
Remarks.
The same idea in the proof shows that if formula_43 is a purely inseparable extension (need not be normal), then formula_61 is bijective.
Let "A", "K", etc. as before but assume "L" is only a finite field extension of "K". Then
(i) formula_61 has finite fibers.
(ii) the going-down holds between "A" and "B": given formula_80, there exists formula_51 that contracts to it.
Indeed, in both statements, by enlarging "L", we can assume "L" is a normal extension. Then (i) is immediate. As for (ii), by the going-up, we can find a chain formula_81 that contracts to formula_82. By transitivity, there is formula_83 such that formula_84 and then formula_85 are the desired chain.
Integral closure.
Let "A" ⊂ "B" be rings and "A' " the integral closure of "A" in "B". (See above for the definition.)
Integral closures behave nicely under various constructions. Specifically, for a multiplicatively closed subset "S" of "A", the localization "S"−1"A' " is the integral closure of "S"−1"A" in "S"−1"B", and formula_86 is the integral closure of formula_87 in formula_88. If formula_89 are subrings of rings formula_90, then the integral closure of formula_91 in formula_92 is formula_93 where formula_94 are the integral closures of formula_89 in formula_95.
The integral closure of a local ring "A" in, say, "B", need not be local. (If this is the case, the ring is called unibranch.) This is the case for example when "A" is Henselian and "B" is a field extension of the field of fractions of "A".
If "A" is a subring of a field "K", then the integral closure of "A" in "K" is the intersection of all valuation rings of "K" containing "A".
Let "A" be an formula_96-graded subring of an formula_96-graded ring "B". Then the integral closure of "A" in "B" is an formula_96-graded subring of "B".
There is also a concept of the integral closure of an ideal. The integral closure of an ideal formula_98, usually denoted by formula_99, is the set of all elements formula_100 such that there exists a monic polynomial
formula_101
with formula_102, having formula_103 as a root. The radical of an ideal is integrally closed.
For noetherian rings, there are alternate definitions as well. For example, formula_104 if and only if there exists formula_105, not contained in any minimal prime ideal, such that formula_106 for all formula_107.
The notion of integral closure of an ideal is used in some proofs of the going-down theorem.
Conductor.
Let "B" be a ring and "A" a subring of "B" such that "B" is integral over "A". Then the annihilator of the "A"-module "B"/"A" is called the "conductor" of "A" in "B". Because the notion has origin in algebraic number theory, the conductor is denoted by formula_109. Explicitly, formula_110 consists of elements "a" in "A" such that formula_111. (cf. idealizer in abstract algebra.) It is the largest ideal of "A" that is also an ideal of "B". If "S" is a multiplicatively closed subset of "A", then
formula_112.
If "B" is a subring of the total ring of fractions of "A", then we may identify
formula_113.
Example: Let "k" be a field and let formula_114 (i.e., "A" is the coordinate ring of the affine curve formula_115). "B" is the integral closure of "A" in formula_116. The conductor of "A" in "B" is the ideal formula_117. More generally, the conductor of formula_118, "a", "b" relatively prime, is formula_119 with formula_120.
Suppose "B" is the integral closure of an integral domain "A" in the field of fractions of "A" such that the "A"-module formula_121 is finitely generated. Then the conductor formula_110 of "A" is an ideal defining the support of formula_121; thus, "A" coincides with "B" in the complement of formula_122 in formula_123. In particular, the set formula_124, the complement of formula_122, is an open set.
Finiteness of integral closure.
An important but difficult question is on the finiteness of the integral closure of a finitely generated algebra. There are several known results.
The integral closure of a Dedekind domain in a finite extension of the field of fractions is a Dedekind domain; in particular, a noetherian ring. This is a consequence of the Krull–Akizuki theorem. In general, the integral closure of a noetherian domain of dimension at most 2 is noetherian; Nagata gave an example of a dimension-3 noetherian domain whose integral closure is not noetherian. A nicer statement is this: the integral closure of a noetherian domain is a Krull domain (Mori–Nagata theorem). Nagata also gave an example of a dimension-1 noetherian local domain whose integral closure is not finite over that domain.
Let "A" be a noetherian integrally closed domain with field of fractions "K". If "L"/"K" is a finite separable extension, then the integral closure formula_97 of "A" in "L" is a finitely generated "A"-module. This is easy and standard (uses the fact that the trace defines a non-degenerate bilinear form).
Let "A" be a finitely generated algebra over a field "k" that is an integral domain with field of fractions "K". If "L" is a finite extension of "K", then the integral closure formula_97 of "A" in "L" is a finitely generated "A"-module and is also a finitely generated "k"-algebra. The result is due to Noether and can be shown using the Noether normalization lemma as follows. It is clear that it is enough to show the assertion when "L"/"K" is either separable or purely inseparable. The separable case is noted above, so assume "L"/"K" is purely inseparable. By the normalization lemma, "A" is integral over the polynomial ring formula_125. Since "L"/"K" is a finite purely inseparable extension, there is a power "q" of a prime number such that every element of "L" is a "q"-th root of an element in "K". Let formula_126 be a finite extension of "k" containing all "q"-th roots of coefficients of finitely many rational functions that generate "L". Then we have: formula_127 The ring on the right is the field of fractions of formula_128, which is the integral closure of "S"; thus, contains formula_97. Hence, formula_97 is finite over "S"; a fortiori, over "A". The result remains true if we replace "k" by Z.
The integral closure of a complete local noetherian domain "A" in a finite extension of the field of fractions of "A" is finite over "A". More precisely, for a local noetherian ring "A", we have the following chains of implications:
(i) "A" complete formula_129 "A" is a Nagata ring
(ii) "A" is a Nagata domain formula_129 "A" analytically unramified formula_129 the integral closure of the completion formula_130 is finite over formula_130 formula_129 the integral closure of "A" is finite over A.
Noether's normalization lemma.
Noether's normalisation lemma is a theorem in commutative algebra. Given a field "K" and a finitely generated "K"-algebra "A", the theorem says it is possible to find elements "y"1, "y"2, ..., "y""m" in "A" that are algebraically independent over "K" such that "A" is finite (and hence integral) over "B" = "K"["y"1..., "y""m"]. Thus the extension "K" ⊂ "A" can be written as a composite "K" ⊂ "B" ⊂ "A" where "K" ⊂ "B" is a purely transcendental extension and "B" ⊂ "A" is finite.
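A standard illustration, sketched below assuming SymPy, is "A" = "k"["x", "y"]/("xy" − 1): "A" is not finite over "k"["x"] (since "y" = 1/"x" is not integral over it), but after the change of variable "y"1 = "x" + "y" both "x" and "y" satisfy the same monic quadratic over "k"["y"1], so "A" is finite over this polynomial subring.

```python
from sympy import symbols, expand

x, T = symbols('x T')
y = 1 / x        # on the hyperbola xy = 1 we may write y = 1/x
y1 = x + y       # Noether-normalization coordinate: y1 = x + y

# Both x and y are roots of the same monic quadratic over k[y1],
# so A = k[x, y]/(xy - 1) is integral (indeed finite) over k[y1].
p = T**2 - y1 * T + 1
print(expand(p.subs(T, x)))  # 0
print(expand(p.subs(T, y)))  # 0
```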
Integral morphisms.
In algebraic geometry, a morphism formula_131 of schemes is "integral" if it is affine and if for some (equivalently, every) affine open cover formula_132 of "Y", every map formula_133 is of the form formula_134 where "A" is an integral "B"-algebra. The class of integral morphisms is more general than the class of finite morphisms because there are integral extensions that are not finite, such as, in many cases, the algebraic closure of a field over the field.
Absolute integral closure.
Let "A" be an integral domain and "L" (some) algebraic closure of the field of fractions of "A". Then the integral closure formula_135 of "A" in "L" is called the absolute integral closure of "A". It is unique up to a non-canonical isomorphism. The ring of all algebraic integers is an example (and thus formula_135 is typically not noetherian).
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt{2}"
},
{
"math_id": 1,
"text": "1+i"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "A \\subset B"
},
{
"math_id": 4,
"text": "B."
},
{
"math_id": 5,
"text": "b"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "n \\geq 1,"
},
{
"math_id": 8,
"text": "a_0,\\ a_1, \\ \\dots,\\ a_{n-1}"
},
{
"math_id": 9,
"text": "b^n + a_{n-1} b^{n-1} + \\cdots + a_1 b + a_0 = 0."
},
{
"math_id": 10,
"text": "A."
},
{
"math_id": 11,
"text": "A,"
},
{
"math_id": 12,
"text": "K/\\mathbb{Q}"
},
{
"math_id": 13,
"text": "L/\\mathbb{Q}_p"
},
{
"math_id": 14,
"text": "a + b \\sqrt{-1},\\, a, b \\in \\mathbf{Z}"
},
{
"math_id": 15,
"text": "\\mathbf{Z}[\\sqrt{-1}]"
},
{
"math_id": 16,
"text": "\\mathbf{Q}(\\sqrt{-1})"
},
{
"math_id": 17,
"text": "\\mathcal{O}_{\\mathbb{Q}[i]}"
},
{
"math_id": 18,
"text": "\\mathbf{Q}(\\sqrt{5})"
},
{
"math_id": 19,
"text": "\\mathcal{O}_{\\mathbb{Q}[\\sqrt{5}]} = \\mathbb{Z}\\!\\left[ \\frac{1 + \\sqrt{5}}{2} \\right]"
},
{
"math_id": 20,
"text": "\\mathbb{Q}(\\sqrt{d})"
},
{
"math_id": 21,
"text": "a + b \\sqrt{d}"
},
{
"math_id": 22,
"text": "\\overline{\\mathbb{Q}}"
},
{
"math_id": 23,
"text": "\\mathbb{C}[x,y,z]/(xy)"
},
{
"math_id": 24,
"text": "\\mathbb{C}[x,z] \\times \\mathbb{C}[y,z]"
},
{
"math_id": 25,
"text": "xz"
},
{
"math_id": 26,
"text": "yz"
},
{
"math_id": 27,
"text": "z"
},
{
"math_id": 28,
"text": "R[u] \\cap R[u^{-1}]"
},
{
"math_id": 29,
"text": "\\bigoplus_{n \\ge 0} \\operatorname{H}^0(X, \\mathcal{O}_X(n))."
},
{
"math_id": 30,
"text": "\\overline{k}"
},
{
"math_id": 31,
"text": "\\overline{k}[x_1, \\dots, x_n]"
},
{
"math_id": 32,
"text": "k[x_1, \\dots, x_n]."
},
{
"math_id": 33,
"text": "\\mathbf{C}[[x^{1/n}]]"
},
{
"math_id": 34,
"text": "u(M) \\subset IM"
},
{
"math_id": 35,
"text": "u^n + a_1 u^{n-1} + \\cdots + a_{n-1} u + a_n = 0, \\, a_i \\in I^i."
},
{
"math_id": 36,
"text": "x + y, xy, -x"
},
{
"math_id": 37,
"text": "A[x]A[y]"
},
{
"math_id": 38,
"text": "C"
},
{
"math_id": 39,
"text": "c \\in C"
},
{
"math_id": 40,
"text": "c"
},
{
"math_id": 41,
"text": "\\mathcal{O}_K"
},
{
"math_id": 42,
"text": "K"
},
{
"math_id": 43,
"text": "L/K"
},
{
"math_id": 44,
"text": "L"
},
{
"math_id": 45,
"text": "\\mathcal{O}_L"
},
{
"math_id": 46,
"text": "A[b]"
},
{
"math_id": 47,
"text": "f:A \\to B"
},
{
"math_id": 48,
"text": "f"
},
{
"math_id": 49,
"text": "f(A)"
},
{
"math_id": 50,
"text": "\\mathfrak{p}_1 \\subset \\cdots \\subset \\mathfrak{p}_n"
},
{
"math_id": 51,
"text": "\\mathfrak{p}'_1 \\subset \\cdots \\subset \\mathfrak{p}'_n"
},
{
"math_id": 52,
"text": "\\mathfrak{p}_i = \\mathfrak{p}'_i \\cap A"
},
{
"math_id": 53,
"text": "\\mathfrak{q}"
},
{
"math_id": 54,
"text": "\\mathfrak{q} \\cap A"
},
{
"math_id": 55,
"text": "f: A \\to k"
},
{
"math_id": 56,
"text": "f: A \\to B"
},
{
"math_id": 57,
"text": "\\begin{cases} f^\\#: \\operatorname{Spec} B \\to \\operatorname{Spec} A \\\\ p \\mapsto f^{-1}(p)\\end{cases}"
},
{
"math_id": 58,
"text": "f^\\#(V(I)) = V(f^{-1}(I))"
},
{
"math_id": 59,
"text": "f^\\#"
},
{
"math_id": 60,
"text": "\\operatorname{Spec} A"
},
{
"math_id": 61,
"text": "\\operatorname{Spec} B \\to \\operatorname{Spec} A"
},
{
"math_id": 62,
"text": "B \\otimes_A R"
},
{
"math_id": 63,
"text": "\\operatorname{Spec} (B \\otimes_A R) \\to \\operatorname{Spec} R"
},
{
"math_id": 64,
"text": "G = \\operatorname{Gal}(L/K)"
},
{
"math_id": 65,
"text": "\\mathfrak{p}_2 \\ne \\sigma(\\mathfrak{p}_1)"
},
{
"math_id": 66,
"text": "\\sigma"
},
{
"math_id": 67,
"text": "\\mathfrak{p}_2"
},
{
"math_id": 68,
"text": "\\sigma(x) \\not\\in \\mathfrak{p}_1"
},
{
"math_id": 69,
"text": "y = \\prod\\nolimits_{\\sigma} \\sigma(x)"
},
{
"math_id": 70,
"text": "y^e"
},
{
"math_id": 71,
"text": "y^e \\in A."
},
{
"math_id": 72,
"text": "\\mathfrak{p}_2 \\cap A"
},
{
"math_id": 73,
"text": "\\mathfrak{p}_1 \\cap A"
},
{
"math_id": 74,
"text": "\\mathfrak{p}_1 \\cap A \\ne \\mathfrak{p}_2 \\cap A"
},
{
"math_id": 75,
"text": "\\operatorname{Gal}(L/K)"
},
{
"math_id": 76,
"text": "\\mathfrak{q}_1,\\ldots, \\mathfrak{q}_k \\in \\text{Spec}(\\mathcal{O}_L)"
},
{
"math_id": 77,
"text": "\\mathfrak{p} \\in \\text{Spec}(\\mathcal{O}_K)"
},
{
"math_id": 78,
"text": "\\mathfrak{p} = \\mathfrak{q}_1^{e_1}\\cdots\\mathfrak{q}_k^{e_k} \\subset \\mathcal{O}_L"
},
{
"math_id": 79,
"text": "S_\\mathfrak{p} = \\{\\mathfrak{q}_1,\\ldots,\\mathfrak{q}_k \\}"
},
{
"math_id": 80,
"text": "\\mathfrak{p}_1 \\subset \\cdots \\subset \\mathfrak{p}_n = \\mathfrak{p}'_n \\cap A"
},
{
"math_id": 81,
"text": "\\mathfrak{p}''_i"
},
{
"math_id": 82,
"text": "\\mathfrak{p}'_i"
},
{
"math_id": 83,
"text": "\\sigma \\in G"
},
{
"math_id": 84,
"text": "\\sigma(\\mathfrak{p}''_n) = \\mathfrak{p}'_n"
},
{
"math_id": 85,
"text": "\\mathfrak{p}'_i = \\sigma(\\mathfrak{p}''_i)"
},
{
"math_id": 86,
"text": "A'[t]"
},
{
"math_id": 87,
"text": "A[t]"
},
{
"math_id": 88,
"text": "B[t]"
},
{
"math_id": 89,
"text": "A_i"
},
{
"math_id": 90,
"text": "B_i, 1 \\le i \\le n"
},
{
"math_id": 91,
"text": "\\prod A_i"
},
{
"math_id": 92,
"text": "\\prod B_i"
},
{
"math_id": 93,
"text": "\\prod {A_i}'"
},
{
"math_id": 94,
"text": "{A_i}'"
},
{
"math_id": 95,
"text": "B_i"
},
{
"math_id": 96,
"text": "\\mathbb{N}"
},
{
"math_id": 97,
"text": "A'"
},
{
"math_id": 98,
"text": "I \\subset R"
},
{
"math_id": 99,
"text": "\\overline I"
},
{
"math_id": 100,
"text": "r \\in R"
},
{
"math_id": 101,
"text": "x^n + a_{1} x^{n-1} + \\cdots + a_{n-1} x^1 + a_n"
},
{
"math_id": 102,
"text": "a_i \\in I^i"
},
{
"math_id": 103,
"text": "r"
},
{
"math_id": 104,
"text": "r \\in \\overline I"
},
{
"math_id": 105,
"text": "c \\in R"
},
{
"math_id": 106,
"text": "c r^n \\in I^n"
},
{
"math_id": 107,
"text": "n \\ge 1"
},
{
"math_id": 108,
"text": " r \\in \\overline I"
},
{
"math_id": 109,
"text": "\\mathfrak{f} = \\mathfrak{f}(B/A)"
},
{
"math_id": 110,
"text": "\\mathfrak{f}"
},
{
"math_id": 111,
"text": "aB \\subset A"
},
{
"math_id": 112,
"text": "S^{-1}\\mathfrak{f}(B/A) = \\mathfrak{f}(S^{-1}B/S^{-1}A)"
},
{
"math_id": 113,
"text": "\\mathfrak{f}(B/A)=\\operatorname{Hom}_A(B, A)"
},
{
"math_id": 114,
"text": "A = k[t^2, t^3] \\subset B = k[t]"
},
{
"math_id": 115,
"text": "x^2 = y^3"
},
{
"math_id": 116,
"text": "k(t)"
},
{
"math_id": 117,
"text": "(t^2, t^3) A"
},
{
"math_id": 118,
"text": "A = k[[t^a, t^b]]"
},
{
"math_id": 119,
"text": "(t^c, t^{c+1}, \\dots) A"
},
{
"math_id": 120,
"text": "c = (a-1)(b-1)"
},
{
"math_id": 121,
"text": "B/A"
},
{
"math_id": 122,
"text": "V(\\mathfrak{f})"
},
{
"math_id": 123,
"text": "\\operatorname{Spec}A"
},
{
"math_id": 124,
"text": "\\{ \\mathfrak{p} \\in \\operatorname{Spec}A \\mid A_\\mathfrak{p} \\text{ is integrally closed} \\}"
},
{
"math_id": 125,
"text": "S = k[x_1, ..., x_d]"
},
{
"math_id": 126,
"text": "k'"
},
{
"math_id": 127,
"text": "L \\subset k'(x_1^{1/q}, ..., x_d^{1/q})."
},
{
"math_id": 128,
"text": "k'[x_1^{1/q}, ..., x_d^{1/q}]"
},
{
"math_id": 129,
"text": "\\Rightarrow"
},
{
"math_id": 130,
"text": "\\widehat{A}"
},
{
"math_id": 131,
"text": "f:X \\to Y"
},
{
"math_id": 132,
"text": "U_i"
},
{
"math_id": 133,
"text": "f^{-1}(U_i)\\to U_i"
},
{
"math_id": 134,
"text": "\\operatorname{Spec}(A)\\to\\operatorname{Spec}(B)"
},
{
"math_id": 135,
"text": "A^+"
},
{
"math_id": 136,
"text": "k[x_1,\\ldots,x_n"
},
{
"math_id": 137,
"text": "k[f_1,\\ldots,f_n]"
},
{
"math_id": 138,
"text": "(f_1,\\ldots,f_n)"
}
] | https://en.wikipedia.org/wiki?curid=9478630 |
9479 | Einsteinium | Chemical element with atomic number 99 (Es)
Einsteinium is a synthetic chemical element; it has symbol Es and atomic number 99. It is named after Albert Einstein and is a member of the actinide series; it is the seventh transuranium element.
Einsteinium was discovered as a component of the debris of the first hydrogen bomb explosion in 1952. Its most common isotope, einsteinium-253 (half-life 20.47 days), is produced artificially from decay of californium-253 in a few dedicated high-power nuclear reactors with a total yield on the order of one milligram per year. The reactor synthesis is followed by a complex process of separating einsteinium-253 from other actinides and products of their decay. Other isotopes are synthesized in various laboratories, but in much smaller amounts, by bombarding heavy actinide elements with light ions. Due to the small amounts of produced einsteinium and the short half-life of its most common isotope, there are no practical applications for it except basic scientific research. In particular, einsteinium was used to synthesize, for the first time, 17 atoms of the new element mendelevium in 1955.
Einsteinium is a soft, silvery, paramagnetic metal. Its chemistry is typical of the late actinides, with a preponderance of the +3 oxidation state; the +2 oxidation state is also accessible, especially in solids. The high radioactivity of einsteinium-253 produces a visible glow and rapidly damages its crystalline metal lattice, with released heat of about 1000 watts per gram. Difficulty in studying its properties is due to einsteinium-253's decay to berkelium-249 and then californium-249 at a rate of about 3% per day. The isotope of einsteinium with the longest half-life, einsteinium-252 (half-life 471.7 days) would be more suitable for investigation of physical properties, but it has proven far more difficult to produce and is available only in minute quantities, not in bulk. Einsteinium is the element with the highest atomic number which has been observed in macroscopic quantities in its pure form as einsteinium-253.
Like all synthetic transuranium elements, isotopes of einsteinium are very radioactive and are considered highly dangerous to health on ingestion.
History.
Einsteinium was first identified in December 1952 by Albert Ghiorso and co-workers at the University of California, Berkeley in collaboration with the Argonne and Los Alamos National Laboratories, in the fallout from the "Ivy Mike" nuclear test. The test was carried out on November 1, 1952, at Enewetak Atoll in the Pacific Ocean and was the first successful test of a thermonuclear weapon. Initial examination of the debris from the explosion had shown the production of a new isotope of plutonium, 244Pu, which could only have formed by the absorption of six neutrons by a uranium-238 nucleus followed by two beta decays.
<chem>^{238}_{92}U ->[\ce{+ 6(n,\gamma)}][-2\ \beta^-]{} ^{244}_{94}Pu</chem>
At the time, the multiple neutron absorption was thought to be an extremely rare process, but the identification of 244Pu indicated that still more neutrons could have been captured by the uranium nuclei, thereby producing new elements heavier than californium.
Ghiorso and co-workers analyzed filter papers which had been flown through the explosion cloud on airplanes (the same sampling technique that had been used to discover Pu). Larger amounts of radioactive material were later isolated from coral debris of the atoll, which were delivered to the U.S. The separation of suspected new elements was carried out in the presence of a citric acid/ammonium buffer solution in a weakly acidic medium (pH ≈ 3.5), using ion exchange at elevated temperatures; fewer than 200 atoms of einsteinium were recovered in the end. Nevertheless, element 99 (einsteinium), namely its 253Es isotope, could be detected via its characteristic high-energy alpha decay at 6.6 MeV. It was produced by the capture of 15 neutrons by uranium-238 nuclei followed by seven beta-decays, and had a half-life of 20.5 days. Such multiple neutron absorption was made possible by the high neutron flux density during the detonation, so that newly generated heavy isotopes had plenty of available neutrons to absorb before they could disintegrate into lighter elements. Neutron capture initially raised the mass number without changing the atomic number of the nuclide, and the concomitant beta-decays resulted in a gradual increase in the atomic number:
<chem>
^{238}_{92}U ->[\ce{+15n}][6 \beta^-] ^{253}_{98}Cf ->[\beta^-] ^{253}_{99}Es
</chem>
Some 238U atoms, however, could absorb two additional neutrons (for a total of 17), resulting in 255Es, as well as in the 255Fm isotope of another new element, fermium. The discovery of the new elements and the associated new data on multiple neutron capture were initially kept secret on the orders of the U.S. military until 1955 due to Cold War tensions and competition with the Soviet Union in nuclear technologies. However, the rapid capture of so many neutrons would provide much-needed direct experimental confirmation of the so-called r-process, the multiple neutron absorption needed to explain the cosmic nucleosynthesis (production) of certain heavy chemical elements (heavier than nickel) in supernova explosions, before beta decay. Such a process is needed to explain the existence of many stable elements in the universe.
Meanwhile, isotopes of element 99 (as well as of new element 100, fermium) were produced in the Berkeley and Argonne laboratories, in a nuclear reaction between nitrogen-14 and uranium-238, and later by intense neutron irradiation of plutonium or californium:
<chem>^{252}_{98}Cf ->[\ce{(n,\gamma)}] ^{253}_{98}Cf ->[\beta^-][17.81 \ce{d}] ^{253}_{99}Es ->[\ce{(n,\gamma)}] ^{254}_{99}Es ->[\beta^-] ^{254}_{100}Fm</chem>
These results were published in several articles in 1954 with the disclaimer that these were not the first studies that had been carried out on the elements. The Berkeley team also reported some results on the chemical properties of einsteinium and fermium. The "Ivy Mike" results were declassified and published in 1955.
In their discovery of the elements 99 and 100, the American teams had competed with a group at the Nobel Institute for Physics, Stockholm, Sweden. In late 1953 – early 1954, the Swedish group succeeded in the synthesis of light isotopes of element 100, in particular 250Fm, by bombarding uranium with oxygen nuclei. These results were also published in 1954. Nevertheless, the priority of the Berkeley team was generally recognized, as its publications preceded the Swedish article, and they were based on the previously undisclosed results of the 1952 thermonuclear explosion; thus the Berkeley team was given the privilege to name the new elements. As the effort which had led to the design of "Ivy Mike" was codenamed Project PANDA, element 99 had been jokingly nicknamed "Pandemonium" but the official names suggested by the Berkeley group derived from two prominent scientists, Albert Einstein and Enrico Fermi: "We suggest for the name for the element with the atomic number 99, einsteinium (symbol E) after Albert Einstein and for the name for the element with atomic number 100, fermium (symbol Fm), after Enrico Fermi." Both Einstein and Fermi died between the time the names were originally proposed and when they were announced. The discovery of these new elements was announced by Albert Ghiorso at the first Geneva Atomic Conference held on 8–20 August 1955. The symbol for einsteinium was first given as "E" and later changed to "Es" by IUPAC.
Characteristics.
Physical.
Einsteinium is a synthetic, silver, radioactive metal. In the periodic table, it is located to the right of the actinide californium, to the left of the actinide fermium and below the lanthanide holmium with which it shares many similarities in physical and chemical properties. Its density of 8.84 g/cm3 is lower than that of californium (15.1 g/cm3) and is nearly the same as that of holmium (8.79 g/cm3), despite atomic einsteinium being much heavier than holmium. The melting point of einsteinium (860 °C) is also relatively low – below californium (900 °C), fermium (1527 °C) and holmium (1461 °C). Einsteinium is a soft metal, with a bulk modulus of only 15 GPa, one of the lowest values among non-alkali metals.
Contrary to the lighter actinides californium, berkelium, curium and americium, which crystallize in a double hexagonal structure at ambient conditions, einsteinium is believed to have a face-centered cubic ("fcc") symmetry with the space group "Fm"3"m" and the lattice constant "a" = 575 pm. However, there is a report of room-temperature hexagonal einsteinium metal with "a" = 398 pm and "c" = 650 pm, which converted to the "fcc" phase upon heating to 300 °C.
The self-damage induced by the radioactivity of einsteinium is so strong that it rapidly destroys the crystal lattice, and the energy release during this process, 1000 watts per gram of 253Es, induces a visible glow. These processes may contribute to the relatively low density and melting point of einsteinium. Further, owing to the small size of the available samples, the melting point of einsteinium was often deduced by observing the sample being heated inside an electron microscope. Thus, the surface effects in small samples could reduce the melting point value.
The metal is trivalent and has noticeably high volatility. In order to reduce the self-radiation damage, most measurements of solid einsteinium and its compounds are performed right after thermal annealing. Also, some compounds are studied under an atmosphere of a reductant gas, for example H2O+HCl for EsOCl, so that the sample is partly regrown during its decomposition.
Apart from the self-destruction of solid einsteinium and its compounds, other intrinsic difficulties in studying this element include scarcity – the most common 253Es isotope is available only once or twice a year in sub-milligram amounts – and self-contamination due to rapid conversion of einsteinium to berkelium and then to californium at a rate of about 3.3% per day:
<chem>
^{253}_{99}Es ->[\alpha][20 \ce{d}] ^{249}_{97}Bk ->[\beta^-][314 \ce{d}] ^{249}_{98}Cf
</chem>
Thus, most einsteinium samples are contaminated, and their intrinsic properties are often deduced by extrapolating back experimental data accumulated over time. Other experimental techniques to circumvent the contamination problem include selective optical excitation of einsteinium ions by a tunable laser, such as in studying its luminescence properties.
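The conversion rate quoted above follows directly from the half-lives involved. The sketch below (assuming a Python environment; the simplified two-member decay chain and the sampling times are illustrative choices, and minor decay branches are ignored) converts the 20.47-day half-life into a daily loss rate and estimates the in-growth of berkelium and californium with the standard Bateman equations.

```python
import math

# Decay constants (per day) from the half-lives quoted in the text
lam_es = math.log(2) / 20.47   # 253Es, alpha decay
lam_bk = math.log(2) / 314     # 249Bk, beta decay to 249Cf

print(f"253Es loss rate: {100 * (1 - math.exp(-lam_es)):.2f}% per day")  # ~3.33%

def composition(t_days, n0=1.0):
    """Bateman equations for the chain 253Es -> 249Bk -> 249Cf, starting from pure 253Es."""
    n_es = n0 * math.exp(-lam_es * t_days)
    n_bk = n0 * lam_es / (lam_bk - lam_es) * (math.exp(-lam_es * t_days)
                                              - math.exp(-lam_bk * t_days))
    n_cf = n0 - n_es - n_bk
    return n_es, n_bk, n_cf

for t in (10, 30, 90):
    es, bk, cf = composition(t)
    print(f"day {t:3d}: Es {es:.2f}  Bk {bk:.2f}  Cf {cf:.2f}")
```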
Magnetic properties have been studied for einsteinium metal, its oxide and fluoride. All three materials showed Curie–Weiss paramagnetic behavior from liquid-helium temperatures to room temperature. The effective magnetic moments deduced for Es2O3 and EsF3 are the highest values among actinides, and the corresponding Curie temperatures are 53 and 37 K.
Chemical.
Like all actinides, einsteinium is rather reactive. Its trivalent oxidation state is most stable in solids and aqueous solution, where it induces a pale pink color. The existence of divalent einsteinium is firmly established, especially in the solid phase; such a +2 state is not observed in many other actinides, including protactinium, uranium, neptunium, plutonium, curium and berkelium. Einsteinium(II) compounds can be obtained, for example, by reducing einsteinium(III) with samarium(II) chloride.
Isotopes.
Eighteen isotopes and four nuclear isomers are known for einsteinium, with mass numbers ranging from 240 to 257. All are radioactive and the most stable nuclide, 252Es, has a half-life of 471.7 days. The next most stable isotopes are 254Es (half-life 275.7 days), 255Es (39.8 days), and 253Es (20.47 days). All of the remaining isotopes have half-lives shorter than 40 hours, most shorter than 30 minutes. Of these nuclear isomers, the most stable is 254mEs with a half-life of 39.3 hours.
Nuclear fission.
Einsteinium has a high rate of nuclear fission that results in a low critical mass for a sustained nuclear chain reaction. This mass is 9.89 kilograms for a bare sphere of 254Es isotope, and can be lowered to 2.9 kilograms by adding a 30-centimeter-thick steel neutron reflector, or even to 2.26 kilograms with a 20-cm-thick reflector made of water. However, even this small critical mass greatly exceeds the total amount of einsteinium isolated thus far, especially of the rare 254Es isotope.
Natural occurrence.
Because of the short half-life of all isotopes of einsteinium, any primordial einsteinium—that is, einsteinium that could have been present on the Earth at its formation—has long since decayed. Synthesis of einsteinium from naturally-occurring actinides uranium and thorium in the Earth's crust requires multiple neutron capture, which is an extremely unlikely event. Therefore, all terrestrial einsteinium is produced in scientific laboratories, high-power nuclear reactors, or in nuclear weapons tests, and exists only within a few years from the time of the synthesis.
The transuranic elements from americium to fermium, including einsteinium, were once created in the natural nuclear fission reactor at Oklo, but any quantities produced then would have long since decayed away.
Einsteinium was theoretically observed in the spectrum of Przybylski's Star. However, the lead author of the studies finding einsteinium and other short-lived actinides in Przybylski's Star, Vera F. Gopka, admitted that "the position of lines of the radioactive elements under search were simply visualized in synthetic spectrum as vertical markers because there are not any atomic data for these lines except for their wavelengths (Sansonetti et al. 2004), enabling one to calculate their profiles with more or less real intensities." The signature spectra of einsteinium's isotopes have since been comprehensively analyzed experimentally (in 2021), though there is no published research confirming whether the theorized einsteinium signatures proposed to be found in the star's spectrum match the lab-determined results.
Synthesis and extraction.
Einsteinium is produced in minute quantities by bombarding lighter actinides with neutrons in dedicated high-flux nuclear reactors. The world's major irradiation sources are the 85-megawatt High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory in Tennessee, U.S., and the SM-2 loop reactor at the Research Institute of Atomic Reactors (NIIAR) in Dimitrovgrad, Russia, which are both dedicated to the production of transcurium ("Z" > 96) elements. These facilities have similar power and flux levels, and are expected to have comparable production capacities for transcurium elements, although the quantities produced at NIIAR are not widely reported. In a "typical processing campaign" at Oak Ridge, tens of grams of curium are irradiated to produce decigram quantities of californium, milligram quantities of berkelium (249Bk) and einsteinium and picogram quantities of fermium.
The first microscopic sample of 253Es, weighing about 10 nanograms, was prepared in 1961 at HFIR. A special magnetic balance was designed to estimate its weight. Larger batches were produced later starting from several kilograms of plutonium with the einsteinium yields (mostly 253Es) of 0.48 milligrams in 1967–1970, 3.2 milligrams in 1971–1973, followed by steady production of about 3 milligrams per year between 1974 and 1978. These quantities, however, refer to the integral amount in the target right after irradiation. Subsequent separation procedures reduced the amount of isotopically pure einsteinium roughly tenfold.
Laboratory synthesis.
Heavy neutron irradiation of plutonium results in four major isotopes of einsteinium: 253Es (α-emitter with half-life of 20.47 days and with a spontaneous fission half-life of 7×10⁵ years); 254mEs (β-emitter with half-life of 39.3 hours), 254Es (α-emitter with half-life of about 276 days) and 255Es (β-emitter with half-life of 39.8 days). An alternative route involves bombardment of uranium-238 with high-intensity nitrogen or oxygen ion beams.
Einsteinium-247 (half-life 4.55 minutes) was produced by irradiating americium-241 with carbon or uranium-238 with nitrogen ions. The latter reaction was first realized in 1967 in Dubna, Russia, and the involved scientists were awarded the Lenin Komsomol Prize.
The isotope 248Es was produced by irradiating 249Cf with deuterium ions. It mainly decays by emission of electrons to 248Cf with a half-life of minutes, but also releases α-particles of 6.87 MeV energy, with the ratio of electrons to α-particles of about 400.
formula_0
The heavier isotopes 249Es, 250Es, 251Es and 252Es were obtained by bombarding 249Bk with α-particles. One to four neutrons are liberated in this process making possible the formation of four different isotopes in one reaction.
<chem>^{249}_{97}Bk ->[+\alpha] ^{249,250,251,252}_{99}Es</chem>
Einsteinium-253 was produced by irradiating a 0.1–0.2 milligram 252Cf target with a thermal neutron flux of (2–5)×10¹⁴ neutrons·cm⁻²·s⁻¹ for 500–900 hours:
<chem>^{252}_{98}Cf ->[\ce{(n,\gamma)}] ^{253}_{98}Cf ->[\beta^-][17.81 \ce{d}] ^{253}_{99}Es</chem>
In 2020, scientists at the Oak Ridge National Laboratory were able to create about 200 nanograms of 254Es. This allowed some chemical properties of the element to be studied for the first time.
Synthesis in nuclear explosions.
The analysis of the debris at the 10-megaton "Ivy Mike" nuclear test was part of a long-term project, one of the goals of which was studying the efficiency of production of transuranium elements in high-power nuclear explosions. The motivation for these experiments was that synthesis of such elements from uranium requires multiple neutron capture. The probability of such events increases with the neutron flux, and nuclear explosions are the most powerful man-made neutron sources, providing densities of the order 10²³ neutrons/cm² within a microsecond, or about 10²⁹ neutrons/(cm²·s). In comparison, the flux of the HFIR reactor is 5×10¹⁵ neutrons/(cm²·s). A dedicated laboratory was set up right at Enewetak Atoll for preliminary analysis of debris, as some isotopes could have decayed by the time the debris samples reached the mainland U.S. The laboratory was receiving samples for analysis as soon as possible, from airplanes equipped with paper filters which flew over the atoll after the tests. Whereas it was hoped to discover new chemical elements heavier than fermium, none of these were found even after a series of megaton explosions conducted between 1954 and 1956 at the atoll.
The atmospheric results were supplemented by the underground test data accumulated in the 1960s at the Nevada Test Site, as it was hoped that powerful explosions conducted in confined space might result in improved yields and heavier isotopes. Apart from traditional uranium charges, combinations of uranium with americium and thorium were tried, as well as a mixed plutonium-neptunium charge, but they were less successful in terms of yield, which was attributed to stronger losses of heavy isotopes due to enhanced fission rates in heavy-element charges. Product isolation was problematic as the explosions were spreading debris through melting and vaporizing the surrounding rocks at depths of 300–600 meters. Drilling to such depths to extract the products was both slow and inefficient in terms of collected volumes.
Among the nine underground tests that were carried out between 1962 and 1969, the last one was the most powerful and had the highest yield of transuranium elements. Milligrams of einsteinium, which would normally take a year of irradiation in a high-power reactor, were produced within a microsecond. However, the major practical problem of the entire proposal was collecting the radioactive debris dispersed by the powerful blast. Aircraft filters adsorbed only about 4×10⁻¹⁴ of the total amount, and collection of tons of corals at Enewetak Atoll increased this fraction by only two orders of magnitude. Extraction of about 500 kilograms of underground rocks 60 days after the Hutch explosion recovered only about 1×10⁻⁷ of the total charge. The amount of transuranium elements in this 500-kg batch was only 30 times higher than in a 0.4 kg rock picked up 7 days after the test, which demonstrated the highly non-linear dependence of the transuranium elements yield on the amount of retrieved radioactive rock. Shafts were drilled at the site before the test in order to accelerate sample collection after the explosion, so that the explosion would expel radioactive material from the epicenter through the shafts and to collecting volumes near the surface. This method was tried in two tests and instantly provided hundreds of kilograms of material, but with an actinide concentration 3 times lower than in samples obtained after drilling. Whereas such a method could have been efficient in scientific studies of short-lived isotopes, it could not improve the overall collection efficiency of the produced actinides.
Although no new elements (apart from einsteinium and fermium) could be detected in the nuclear test debris, and the total yields of transuranium elements were disappointingly low, these tests did provide significantly higher amounts of rare heavy isotopes than previously available in laboratories.
Separation.
The separation procedure for einsteinium depends on the synthesis method. In the case of light-ion bombardment inside a cyclotron, the heavy ion target is attached to a thin foil, and the generated einsteinium is simply washed off the foil after the irradiation. However, the amounts produced in such experiments are relatively low. The yields are much higher for reactor irradiation, but there the product is a mixture of various actinide isotopes, as well as lanthanides produced in nuclear fission decays. In this case, isolation of einsteinium is a tedious procedure involving repeated steps of cation exchange, at elevated temperature and pressure, and chromatography. Separation from berkelium is important, because the most common einsteinium isotope produced in nuclear reactors, 253Es, decays with a half-life of only 20 days to 249Bk, which is fast on the timescale of most experiments. Such separation relies on the fact that berkelium easily oxidizes to the solid +4 state and precipitates, whereas other actinides, including einsteinium, remain in their +3 state in solutions.
Separation of trivalent actinides from lanthanide fission products can be done by a cation-exchange resin column using a 90% water/10% ethanol solution saturated with hydrochloric acid (HCl) as eluant. It is usually followed by anion-exchange chromatography using 6 molar HCl as eluant. A cation-exchange resin column (Dowex-50 exchange column) treated with ammonium salts is then used to separate fractions containing elements 99, 100 and 101. These elements can be then identified simply based on their elution position/time, using α-hydroxyisobutyrate solution (α-HIB), for example, as eluant.
Separation of the 3+ actinides can also be achieved by solvent extraction chromatography, using bis-(2-ethylhexyl) phosphoric acid (abbreviated as HDEHP) as the stationary organic phase, and nitric acid as the mobile aqueous phase. The actinide elution sequence is reversed from that of the cation-exchange resin column. The einsteinium separated by this method has the advantage of being free of organic complexing agents, compared to the separation using a resin column.
Preparation of the metal.
Einsteinium is highly reactive and therefore strong reducing agents are required to obtain the pure metal from its compounds. This can be achieved by reduction of einsteinium(III) fluoride with metallic lithium:
EsF3 + 3 Li → Es + 3 LiF
However, owing to its low melting point and high rate of self-radiation damage, einsteinium has a higher vapor pressure than lithium fluoride. This makes this reduction reaction rather inefficient. It was tried in the early preparation attempts and quickly abandoned in favor of reduction of einsteinium(III) oxide with lanthanum metal:
Es2O3 + 2 La → 2 Es + La2O3
Chemical compounds.
Oxides.
Einsteinium(III) oxide (Es2O3) was obtained by burning einsteinium(III) nitrate. It forms colorless cubic crystals, which were first characterized from microgram samples sized about 30 nanometers. Two other phases, monoclinic and hexagonal, are known for this oxide. The formation of a certain Es2O3 phase depends on the preparation technique and sample history, and there is no clear phase diagram. Interconversions between the three phases can occur spontaneously, as a result of self-irradiation or self-heating. The hexagonal phase is isotypic with lanthanum oxide where the Es3+ ion is surrounded by a 6-coordinated group of O2− ions.
Halides.
Einsteinium halides are known for the oxidation states +2 and +3. The most stable state is +3 for all halides from fluoride to iodide.
Einsteinium(III) fluoride (EsF3) can be precipitated from einsteinium(III) chloride solutions upon reaction with fluoride ions. An alternative preparation procedure is to expose einsteinium(III) oxide to chlorine trifluoride (ClF3) or F2 gas at a pressure of 1–2 atmospheres and a temperature between 300 and 400 °C. The EsF3 crystal structure is hexagonal, as in californium(III) fluoride (CfF3) where the Es3+ ions are 8-fold coordinated by fluorine ions in a bicapped trigonal prism arrangement.
Einsteinium(III) chloride (EsCl3) can be prepared by annealing einsteinium(III) oxide in the atmosphere of dry hydrogen chloride vapors at about 500 °C for some 20 minutes. It crystallizes upon cooling at about 425 °C into an orange solid with a hexagonal structure of UCl3 type, where einsteinium atoms are 9-fold coordinated by chlorine atoms in a tricapped trigonal prism geometry. Einsteinium(III) bromide (EsBr3) is a pale-yellow solid with a monoclinic structure of AlCl3 type, where the einsteinium atoms are octahedrally coordinated by bromine (coordination number 6).
The divalent compounds of einsteinium are obtained by reducing the trivalent halides with hydrogen:
2 EsX3 + H2 → 2 EsX2 + 2 HX, X = F, Cl, Br, I
Einsteinium(II) chloride (EsCl2), einsteinium(II) bromide (EsBr2), and einsteinium(II) iodide (EsI2) have been produced and characterized by optical absorption, with no structural information available yet.
Known oxyhalides of einsteinium include EsOCl, EsOBr and EsOI. These salts are synthesized by treating a trihalide with a vapor mixture of water and the corresponding hydrogen halide: for example, EsCl3 + H2O/HCl to obtain EsOCl.
Organoeinsteinium compounds.
The high radioactivity of einsteinium has a potential use in radiation therapy, and organometallic complexes have been synthesized in order to deliver einsteinium atoms to an appropriate organ in the body. Experiments have been performed on injecting einsteinium citrate (as well as fermium compounds) into dogs. Einsteinium(III) was also incorporated into beta-diketone chelate complexes, since analogous complexes with lanthanides had previously shown the strongest UV-excited luminescence among metallorganic compounds. When preparing einsteinium complexes, the Es3+ ions were diluted 1000-fold with Gd3+ ions. This reduced the radiation damage so that the compounds did not disintegrate during the 20 minutes required for the measurements. The resulting luminescence from Es3+ was, however, much too weak to be detected. This was explained by the unfavorable relative energies of the individual constituents of the compound, which hindered efficient energy transfer from the chelate matrix to Es3+ ions. A similar conclusion was drawn for other actinides: americium, berkelium and fermium.
Luminescence of Es3+ ions was, however, observed in inorganic hydrochloric acid solutions as well as in organic solution with di(2-ethylhexyl)orthophosphoric acid. It shows a broad peak at about 1064 nanometers (half-width about 100 nm) which can be resonantly excited by green light (ca. 495 nm wavelength). The luminescence has a lifetime of several microseconds and a quantum yield below 0.1%. The non-radiative decay rates in Es3+, which are relatively high compared to those in lanthanides, were associated with the stronger interaction of f-electrons with the inner Es3+ electrons.
Applications.
There is almost no use for any isotope of einsteinium outside basic scientific research aiming at production of higher transuranium elements and superheavy elements.
In 1955, mendelevium was synthesized by irradiating a target consisting of about 109 atoms of 253Es in the 60-inch cyclotron at Berkeley Laboratory. The resulting 253Es(α,n)256Md reaction yielded 17 atoms of the new element with the atomic number of 101.
The rare isotope 254Es is favored for production of superheavy elements because of its large mass, relatively long half-life of 270 days, and availability in significant amounts of several micrograms. Hence 254Es was used as a target in the attempted synthesis of ununennium (element 119) in 1985 by bombarding it with calcium-48 ions at the SuperHILAC linear particle accelerator at Berkeley, California. No atoms were identified, setting an upper limit for the cross section of this reaction at 300 nanobarns.
<chem>{^{254}_{99}Es} + {^{48}_{20}Ca} -> {^{302}_{119}Uue^\ast} -> no\ atoms</chem>
254Es was used as the calibration marker in the chemical analysis spectrometer ("alpha-scattering surface analyzer") of the Surveyor 5 lunar probe. The large mass of this isotope reduced the spectral overlap between signals from the marker and the studied lighter elements of the lunar surface.
Safety.
Most of the available einsteinium toxicity data comes from research on animals. Upon ingestion by rats, only about 0.01% of it ends up in the bloodstream. From there, about 65% goes to the bones, where it would remain for roughly 50 years were it not for radioactive decay (and the roughly 3-year maximum lifespan of rats); 25% goes to the lungs (biological half-life about 20 years, again rendered irrelevant by the short half-lives of einsteinium isotopes); 0.035% goes to the testicles and 0.01% to the ovaries, where einsteinium stays indefinitely. About 10% of the ingested amount is excreted. The distribution of einsteinium over bone surfaces is uniform and similar to that of plutonium.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ce{^{249}_{98}Cf + ^{2}_{1}D -> ^{248}_{99}Es + 3^{1}_{0}n} \\quad \\left( \\ce{^{248}_{99}Es ->[\\epsilon][27 \\ce{min}] ^{248}_{98}Cf} \\right)"
}
] | https://en.wikipedia.org/wiki?curid=9479 |
948014 | Order and disorder | Presence/absence of symmetry or correlation in a many-particle system
In physics, the terms order and disorder designate the presence or absence of some symmetry or correlation in a many-particle system.
In condensed matter physics, systems typically are ordered at low temperatures; upon heating, they undergo one or several phase transitions into less ordered states.
Examples of such order-disorder transitions include:
The degree of freedom that is ordered or disordered can be translational (crystalline ordering), rotational (ferroelectric ordering), or a spin state (magnetic ordering).
The order can consist either in a full crystalline space group symmetry, or in a correlation. Depending on how the correlations decay with distance, one speaks of long range order or short range order.
If a disordered state is not in thermodynamic equilibrium, one speaks of quenched disorder. For instance, a glass is obtained by quenching (supercooling) a liquid. By extension, other quenched states are called spin glass, orientational glass. In some contexts, the opposite of quenched disorder is annealed disorder.
Characterizing order.
Lattice periodicity and X-ray crystallinity.
The strictest form of order in a solid is lattice periodicity: a certain pattern (the arrangement of atoms in a unit cell) is repeated again and again to form a translationally invariant tiling of space. This is the defining property of a crystal. Possible symmetries have been classified in 14 Bravais lattices and 230 space groups.
Lattice periodicity implies long-range order: if only one unit cell is known, then by virtue of the translational symmetry it is possible to accurately predict all atomic positions at arbitrary distances. During much of the 20th century, the converse was also taken for granted – until the discovery of quasicrystals in 1982 showed that there are perfectly deterministic tilings that do not possess lattice periodicity.
Besides structural order, one may consider charge ordering, spin ordering, magnetic ordering, and compositional ordering. Magnetic ordering is observable in neutron diffraction.
It is a thermodynamic entropy concept often displayed by a second-order phase transition. Generally speaking, high thermal energy is associated with disorder and low thermal energy with ordering, although there have been violations of this. Ordering peaks become apparent in diffraction experiments at low energy.
Long-range order.
Long-range order characterizes physical systems in which remote portions of the same sample exhibit correlated behavior.
This can be expressed as a correlation function, namely the spin-spin correlation function:
formula_0
where "s" is the spin quantum number and "x" is the distance function within the particular system.
This function is equal to unity when formula_1 and decreases as the distance formula_2 increases. Typically, it decays exponentially to zero at large distances, and the system is considered to be disordered. But if the correlation function decays to a constant value at large formula_2 then the system is said to possess long-range order. If it decays to zero as a power of the distance then it is called quasi-long-range order (for details see Chapter 11 in the textbook cited below. See also Berezinskii–Kosterlitz–Thouless transition). Note that what constitutes a large value of formula_2 is understood in the sense of asymptotics.
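The distinction can be illustrated numerically. The following Python sketch (a toy example not drawn from the article; the chain length and random seed are arbitrary) estimates the spin-spin correlation function for a fully ordered and a completely random one-dimensional chain of ±1 spins:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000          # number of sites in a 1D spin chain

ordered    = np.ones(N)                       # all spins up: long-range order
disordered = rng.choice([-1.0, 1.0], size=N)  # independent random spins: no order

def correlation(s, r):
    """Estimate G(r) = <s(x) s(x+r)> by averaging over all sites x."""
    return np.mean(s[:-r] * s[r:]) if r > 0 else np.mean(s * s)

for r in (1, 10, 100, 1000):
    print(r, correlation(ordered, r), correlation(disordered, r))
# The ordered chain gives G(r) = 1 at every separation r (long-range order),
# while the random chain gives G(r) ≈ 0 for r > 0 (the correlations decay at once).
```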
Quenched disorder.
In statistical physics, a system is said to present quenched disorder when some parameters defining its behavior are random variables which do not evolve with time. These parameters are said to be quenched or frozen. Spin glasses are a typical example. Quenched disorder is contrasted with annealed disorder in which the parameters are allowed to evolve themselves.
Mathematically, quenched disorder is more difficult to analyze than its annealed counterpart as averages over thermal noise and quenched disorder play distinct roles. Few techniques to approach each are known, most of which rely on approximations. Common techniques used to analyze systems with quenched disorder include the replica trick, based on analytic continuation, and the cavity method, where a system's response to the perturbation due to an added constituent is analyzed. While these methods yield results agreeing with experiments in many systems, the procedures have not been formally mathematically justified. Recently, rigorous methods have shown that in the Sherrington-Kirkpatrick model, an archetypal spin glass model, the replica-based solution is exact. The generating functional formalism, which relies on the computation of path integrals, is a fully exact method but is more difficult to apply than the replica or cavity procedures in practice.
Annealed disorder.
A system is said to present annealed disorder when some parameters entering its definition are random variables, but whose evolution is related to that of the degrees of freedom defining the system. It is defined in opposition to quenched disorder, where the random variables may not change their values.
Systems with annealed disorder are usually considered to be easier to deal with mathematically, since the average on the disorder and the thermal average may be treated on the same footing.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G(x,x') = \\langle s(x),s(x') \\rangle. \\, "
},
{
"math_id": 1,
"text": "x=x'"
},
{
"math_id": 2,
"text": "|x-x'|"
}
] | https://en.wikipedia.org/wiki?curid=948014 |
9480299 | Ethernet frame | Unit of data on an Ethernet network
In computer networking, an Ethernet frame is a data link layer protocol data unit and uses the underlying Ethernet physical layer transport mechanisms. In other words, a data unit on an Ethernet link transports an Ethernet frame as its payload.
An Ethernet frame is preceded by a preamble and start frame delimiter (SFD), which are both part of the Ethernet packet at the physical layer. Each Ethernet frame starts with an Ethernet header, which contains destination and source MAC addresses as its first two fields. The middle section of the frame is payload data including any headers for other protocols (for example, Internet Protocol) carried in the frame. The frame ends with a frame check sequence (FCS), which is a 32-bit cyclic redundancy check used to detect any in-transit corruption of data.
Structure.
A data packet on the wire and the frame as its payload consist of binary data. Ethernet transmits data with the most-significant octet (byte) first; within each octet, however, the least-significant bit is transmitted first.
The internal structure of an Ethernet frame is specified in IEEE 802.3. The table below shows the complete Ethernet packet and the frame inside, as transmitted, for the payload size up to the MTU of 1500 octets. Some implementations of Gigabit Ethernet and other higher-speed variants of Ethernet support larger frames, known as jumbo frames.
The optional 802.1Q tag consumes additional space in the frame. Field sizes for this option are shown in brackets in the table above. IEEE 802.1ad (Q-in-Q) allows for multiple tags in each frame. This option is not illustrated here.
Ethernet packet – physical layer.
Preamble and start frame delimiter.
An Ethernet packet starts with a seven-octet (56-bit) preamble and one-octet (8-bit) "start frame delimiter" (SFD). The preamble bit values alternate 1 and 0, allowing receivers to synchronize their clock at the bit-level with the transmitter. The preamble is followed by the SFD which ends with a 1 instead of 0, to break the bit pattern of the preamble and signal the start of the actual frame.
Physical layer transceiver circuitry (PHY for short) is required to connect the Ethernet MAC to the physical medium. The connection between a PHY and MAC is independent of the physical medium and uses a bus from the media independent interface family (MII, GMII, RGMII, SGMII, XGMII). The preamble and SFD representation depends on the width of the bus:
The SFD is immediately followed by the destination MAC address, which is the first field in an Ethernet frame.
Frame – data link layer.
Header.
The header features destination and source MAC addresses (each six octets in length), the EtherType field and, optionally, an IEEE 802.1Q tag or IEEE 802.1ad tag.
The EtherType field is two octets long and it can be used for two different purposes. Values of 1500 and below mean that it is used to indicate the size of the payload in octets, while values of 1536 and above indicate that it is used as an EtherType, to indicate which protocol is encapsulated in the payload of the frame. When used as EtherType, the length of the frame is determined by the location of the interpacket gap and valid frame check sequence (FCS).
The IEEE 802.1Q tag or IEEE 802.1ad tag, if present, is a four-octet field that indicates virtual LAN (VLAN) membership and IEEE 802.1p priority. The first two octets of the tag are called the Tag Protocol IDentifier (TPID) and double as the EtherType field indicating that the frame is either 802.1Q or 802.1ad tagged. 802.1Q uses a TPID of 0x8100. 802.1ad uses a TPID of 0x88a8.
Payload.
Payload is a variable-length field. Its minimum size is governed by a requirement for a minimum frame transmission of 64 octets (bytes). With header and FCS taken into account, the minimum payload is 42 octets when an 802.1Q tag is present and 46 octets when absent. When the actual payload is less than the minimum, padding octets are added accordingly. IEEE standards specify a maximum payload of 1500 octets. Non-standard jumbo frames allow for larger payloads on networks built to support them.
Frame check sequence.
The frame check sequence (FCS) is a four-octet cyclic redundancy check (CRC) that allows detection of corrupted data within the entire frame as received on the receiver side. According to the standard, the FCS value is computed as a function of the protected MAC frame fields: source and destination address, length/type field, MAC client data and padding (that is, all fields except the FCS).
Per the standard, this computation is done using the left shifting CRC-32 (polynomial = 0x4C11DB7, initial CRC = 0xFFFFFFFF, CRC is post complemented, verify value = 0x38FB2284) algorithm. The standard states that data is transmitted least significant bit (bit 0) first, while the FCS is transmitted most significant bit (bit 31) first. An alternative is to calculate a CRC using the right shifting CRC-32 (polynomial = 0xEDB88320, initial CRC = 0xFFFFFFFF, CRC is post complemented, verify value = 0x2144DF1C), which will result in a CRC that is a bit reversal of the FCS, and transmit both data and the CRC least significant bit first, resulting in identical transmissions.
The standard states that the receiver should calculate a new FCS as data is received and then compare the received FCS with the FCS the receiver has calculated. An alternative is to calculate a CRC on both the received data and the FCS, which will result in a fixed non-zero "verify" value. (The result is non-zero because the CRC is post complemented during CRC generation). Since the data is received least significant bit first, and to avoid having to buffer octets of data, the receiver typically uses the right shifting CRC-32. This makes the "verify" value (sometimes called "magic check") 0x2144DF1C.
However, hardware implementation of a logically right shifting CRC may use a left shifting Linear Feedback Shift Register as the basis for calculating the CRC, reversing the bits and resulting in a verify value of 0x38FB2284. Since the complementing of the CRC may be performed post calculation and during transmission, what remains in the hardware register is a non-complemented result, so the residue for a right shifting implementation would be the complement of 0x2144DF1C = 0xDEBB20E3, and for a left shifting implementation, the complement of 0x38FB2284 = 0xC704DD7B.
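As an illustration, the behaviour described above can be reproduced in Python with the standard-library zlib module, whose crc32 function implements the right-shifting (reflected) CRC-32 with polynomial 0xEDB88320, initial value 0xFFFFFFFF and post-complement; the frame contents below are arbitrary placeholder values used only to exercise the check:

```python
import zlib

# Hypothetical frame contents (destination MAC, source MAC, EtherType, payload);
# the values are arbitrary and together form a minimum-size untagged frame.
frame_without_fcs = (
    bytes.fromhex("ffffffffffff")    # destination MAC (broadcast)
    + bytes.fromhex("0002b3000102")  # source MAC
    + bytes.fromhex("0800")          # EtherType: IPv4
    + bytes(46)                      # minimum untagged payload, here all zeros
)

# zlib.crc32 implements the right-shifting CRC-32 described above
# (polynomial 0xEDB88320, initial value 0xFFFFFFFF, post-complemented).
fcs = zlib.crc32(frame_without_fcs)

# The FCS is appended so that its least significant byte is transmitted first.
frame = frame_without_fcs + fcs.to_bytes(4, "little")

# Running the same CRC over the complete frame (data + FCS) yields the fixed
# "magic check" value 0x2144DF1C mentioned above.
assert zlib.crc32(frame) == 0x2144DF1C
print(hex(fcs), hex(zlib.crc32(frame)))
```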
End of frame – physical layer.
The "end of a frame" is usually indicated by the end-of-data-stream symbol at the physical layer or by loss of the carrier signal; an example is 10BASE-T, where the receiving station detects the end of a transmitted frame by loss of the carrier. Later physical layers use an explicit "end of data" or "end of stream" symbol or sequence to avoid ambiguity, especially where the carrier is continually sent between frames; an example is Gigabit Ethernet with its 8b/10b encoding scheme that uses special symbols which are transmitted before and after a frame is transmitted.
Interpacket gap – physical layer.
Interpacket gap (IPG) is idle time between packets. After a packet has been sent, transmitters are required to transmit a minimum of 96 bits (12 octets) of idle line state before transmitting the next packet.
Types.
There are several types of Ethernet frames:
The different frame types have different formats and MTU values, but can coexist on the same physical medium. Differentiation between frame types is possible based on the table on the right.
In addition, all four Ethernet frame types may optionally contain an IEEE 802.1Q tag to identify what VLAN it belongs to and its priority (quality of service). This encapsulation is defined in the IEEE 802.3ac specification and increases the maximum frame by 4 octets.
The IEEE 802.1Q tag, if present, is placed between the Source Address and the EtherType or Length fields. The first two octets of the tag are the Tag Protocol Identifier (TPID) value of 0x8100. This is located in the same place as the EtherType/Length field in untagged frames, so an EtherType value of 0x8100 means the frame is tagged, and the true EtherType/Length is located after the Q-tag. The TPID is followed by two octets containing the Tag Control Information (TCI) (the IEEE 802.1p priority (quality of service) and VLAN id). The Q-tag is followed by the rest of the frame, using one of the types described above.
Ethernet II.
Ethernet II framing (also known as DIX Ethernet, named after DEC, Intel and Xerox, the major participants in its design), defines the two-octet EtherType field in an Ethernet frame, preceded by destination and source MAC addresses, that identifies an upper layer protocol encapsulated by the frame data. Most notably, an EtherType value of 0x0800 indicates that the frame contains an IPv4 datagram, 0x0806 indicates an ARP datagram, and 0x86DD indicates an IPv6 datagram. See for more.
As this industry-developed standard went through a formal IEEE standardization process, the EtherType field was changed to a (data) length field in the new 802.3 standard. Since the recipient still needs to know how to interpret the frame, the standard required an IEEE 802.2 header to follow the length and specify the type. Many years later, the 802.3x-1997 standard, and later versions of the 802.3 standard, formally approved of both types of framing. Ethernet II framing is the most common in Ethernet local area networks, due to its simplicity and lower overhead.
In order to allow some frames using Ethernet II framing and some using the original version of 802.3 framing to be used on the same Ethernet segment, EtherType values must be greater than or equal to 1536 (0x0600). That value was chosen because the maximum length of the payload field of an Ethernet 802.3 frame is 1500 octets (0x05DC). Thus if the field's value is greater than or equal to 1536, the frame must be an Ethernet II frame, with that field being a type field. If it's less than or equal to 1500, it must be an IEEE 802.3 frame, with that field being a length field. Values between 1500 and 1536, exclusive, are undefined. This convention allows software to determine whether a frame is an Ethernet II frame or an IEEE 802.3 frame, allowing the coexistence of both standards on the same physical medium.
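A minimal sketch of this decision rule in Python (function and variable names are illustrative, and the snippet is not a complete frame parser) might look as follows:

```python
def classify_type_or_length(frame: bytes):
    """Interpret the 16-bit field after the MAC addresses, per the rules above.

    Returns (vlan_id_or_None, kind, value) where kind is 'ethertype',
    'length' or 'undefined'.  Illustrative sketch only.
    """
    offset = 12                                   # skip destination + source MAC
    field = int.from_bytes(frame[offset:offset + 2], "big")

    vlan_id = None
    if field == 0x8100:                           # 802.1Q TPID: frame is tagged
        tci = int.from_bytes(frame[offset + 2:offset + 4], "big")
        vlan_id = tci & 0x0FFF                    # low 12 bits: VLAN identifier
        offset += 4                               # real type/length follows the tag
        field = int.from_bytes(frame[offset:offset + 2], "big")

    if field >= 1536:                             # 0x0600 and above: EtherType
        return vlan_id, "ethertype", field
    if field <= 1500:                             # 0x05DC and below: payload length
        return vlan_id, "length", field
    return vlan_id, "undefined", field            # 1501-1535: undefined
```

Applied to an untagged frame whose type field is 0x0800, this would return (None, "ethertype", 0x0800); for an 802.1Q-tagged frame it first extracts the 12-bit VLAN identifier from the TCI and then interprets the field that follows the tag.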
Novell raw IEEE 802.3.
Novell's "raw" 802.3 frame format was based on early IEEE 802.3 work. Novell used this as a starting point to create the first implementation of its own IPX Network Protocol over Ethernet. They did not use any LLC header but started the IPX packet directly after the length field. This does not conform to the IEEE 802.3 standard, but since IPX always has FF as the first two octets (while in IEEE 802.2 LLC that pattern is theoretically possible but extremely unlikely), in practice this usually coexists on the wire with other Ethernet implementations, with the notable exception of some early forms of DECnet which got confused by this.
Novell NetWare used this frame type by default until the mid-nineties, and since NetWare was then very widespread, while IP was not, at some point in time most of the world's Ethernet traffic ran over "raw" 802.3 carrying IPX. Since NetWare 4.10, NetWare defaults to IEEE 802.2 with LLC (NetWare Frame Type Ethernet_802.2) when using IPX.
IEEE 802.2 LLC.
Some protocols, such as those designed for the OSI stack, operate directly on top of IEEE 802.2 LLC encapsulation, which provides both connection-oriented and connectionless network services.
IEEE 802.2 LLC encapsulation is not in widespread use on common networks currently, with the exception of large corporate NetWare installations that have not yet migrated to NetWare over IP. In the past, many corporate networks used IEEE 802.2 to support transparent translating bridges between Ethernet and Token Ring or FDDI networks.
There exists an Internet standard for encapsulating IPv4 traffic in IEEE 802.2 LLC SAP/SNAP frames. It is almost never implemented on Ethernet, although it is used on FDDI, Token Ring, IEEE 802.11 (with the exception of the 5.9 GHz band, where it uses EtherType) and other IEEE 802 LANs. IPv6 can also be transmitted over Ethernet using IEEE 802.2 LLC SAP/SNAP, but, again, that's almost never used.
IEEE 802.2 SNAP.
By examining the 802.2 LLC header, it is possible to determine whether it is followed by a SNAP header. The LLC header includes two eight-bit address fields, called "service access points" (SAPs) in OSI terminology; when both source and destination SAP are set to the value 0xAA, the LLC header is followed by a SNAP header. The SNAP header allows EtherType values to be used with all IEEE 802 protocols, as well as supporting private protocol ID spaces.
In IEEE 802.3x-1997, the IEEE Ethernet standard was changed to explicitly allow the use of the 16-bit field after the MAC addresses to be used as a length field or a type field.
The AppleTalk v2 protocol suite on Ethernet ("EtherTalk") uses IEEE 802.2 LLC + SNAP encapsulation.
Maximum throughput.
We may calculate the protocol overhead for Ethernet as a percentage (packet size including IPG)
formula_0
We may calculate the "protocol efficiency" for Ethernet
formula_1
Maximum efficiency is achieved with largest allowed payload size and is:
formula_2
for untagged frames, since the packet size is maximum 1500 octet payload + 8 octet preamble + 14 octet header + 4 octet trailer + minimum interpacket gap corresponding to 12 octets = 1538 octets. The maximum efficiency is:
formula_3
when 802.1Q VLAN tagging is used.
The throughput may be calculated from the efficiency
formula_4,
where the physical layer net bit rate (the wire bit rate) depends on the Ethernet physical layer standard, and may be 10 Mbit/s, 100 Mbit/s, 1 Gbit/s or 10 Gbit/s. Maximum throughput for 100BASE-TX Ethernet is consequently 97.53 Mbit/s without 802.1Q, and 97.28 Mbit/s with 802.1Q.
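These figures can be reproduced with a short Python sketch; the constants simply restate the per-packet overheads given above:

```python
# Per-packet overhead on the wire, in octets:
PREAMBLE_SFD = 8      # preamble + start frame delimiter
HEADER       = 14     # destination MAC + source MAC + EtherType
FCS          = 4      # frame check sequence
IPG          = 12     # minimum interpacket gap
VLAN_TAG     = 4      # optional 802.1Q tag

def efficiency(payload, tagged=False):
    packet = PREAMBLE_SFD + HEADER + (VLAN_TAG if tagged else 0) + payload + FCS + IPG
    return payload / packet

for tagged in (False, True):
    eff = efficiency(1500, tagged)
    print(f"tagged={tagged}: efficiency={eff:.2%}, "
          f"100BASE-TX throughput={eff * 100:.2f} Mbit/s")
# Reproduces 1500/1538 ≈ 97.53% (97.53 Mbit/s) for untagged frames and
# 1500/1542 ≈ 97.28% (97.28 Mbit/s) with an 802.1Q tag.
```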
Channel utilization is a concept often confused with protocol efficiency. It considers only the use of the channel disregarding the nature of the data transmitted – either payload or overhead. At the physical layer, the link channel and equipment do not know the difference between data and control frames. We may calculate the channel utilization:
formula_5
The total time considers the round trip time along the channel, the processing time in the hosts and the time transmitting data and acknowledgements. The time spent transmitting data includes data and acknowledgements.
Runt frames.
A runt frame is an Ethernet frame that is less than the IEEE 802.3's minimum length of 64 octets. Runt frames are most commonly caused by collisions; other possible causes are a malfunctioning network card, buffer underrun, duplex mismatch or software issues.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Protocol overhead} = \\frac{\\text{Packet size} - \\text{Payload size}}{\\text{Packet size}}"
},
{
"math_id": 1,
"text": "\\text{Protocol efficiency} = \\frac{\\text{Payload size}}{\\text{Packet size}}"
},
{
"math_id": 2,
"text": "\\frac{1500}{1538} = 97.53\\%"
},
{
"math_id": 3,
"text": "\\frac{1500}{1542} = 97.28\\%"
},
{
"math_id": 4,
"text": "\\text{Throughput} = \\text{Efficiency} \\times \\text{Net bit rate}\\,\\!"
},
{
"math_id": 5,
"text": "\\text{Channel utilization} = \\frac{\\text{Time spent transmitting data}}{\\text{Total time}}"
}
] | https://en.wikipedia.org/wiki?curid=9480299 |
9480994 | Oxygen balance | Degree to which an explosive can be oxidized
Oxygen balance (OB, OB%, or Ω) is an expression that is used to indicate the degree to which an explosive can be oxidized, to determine if an explosive molecule contains enough oxygen to fully oxidize the other atoms in the explosive. For example, fully oxidized carbon forms carbon dioxide, hydrogen forms water, sulfur forms sulfur dioxide, and metals form metal oxides. A molecule is said to have a positive oxygen balance if it contains more oxygen than is needed and a negative oxygen balance if it contains less oxygen than is needed.
An explosive with a negative oxygen balance will lead to incomplete combustion, which commonly produces carbon monoxide, which is a toxic gas. Explosives with negative or positive oxygen balance are commonly mixed with other energetic materials that are either oxygen positive or negative, respectively, to increase the explosive's power. For example, TNT is an oxygen negative explosive and is commonly mixed with oxygen positive energetic materials or fuels to increase its power.
Calculating oxygen balance.
The procedure for calculating oxygen balance in terms of 100 grams of the explosive material is to determine the number of moles of oxygen in excess or deficit for 100 grams of the compound.
formula_0
"X" = number of atoms of carbon, "Y" = number of atoms of hydrogen, "Z" = number of atoms of oxygen, and "M" = number of atoms of metal (metallic oxide produced).
In the case of TNT (C6H2(NO2)3CH3),
Molecular weight = 227.1
"X" = 7 (number of carbon atoms)
"Y" = 5 (number of hydrogen atoms)
"Z" = 6 (number of oxygen atoms)
Therefore,
formula_1
OB% = −73.97% for TNT
Examples of materials with negative oxygen balance are nitromethane (−39%), trinitrotoluene (−74%), aluminium powder (−89%), sulfur (−100%), or carbon (−266.7%). Examples of materials with positive oxygen balance are ammonium nitrate (+20%), ammonium perchlorate (+34%), potassium chlorate (+39.2%), sodium chlorate (+45%), potassium nitrate (+47.5%), tetranitromethane (+49%), lithium perchlorate (+60%), or nitroglycerine (+3.5%). Ethylene glycol dinitrate has an oxygen balance of zero, as does the theoretical compound trinitrotriazine.
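As an illustrative check (a sketch, not part of the original derivation; the function name is arbitrary and the atomic masses are approximate), the formula can be evaluated in a few lines of Python:

```python
# Approximate atomic masses in g/mol.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def oxygen_balance(x, y, z, m=0, mol_wt=None, n=0):
    """OB% = -1600 / mol_wt * (2X + Y/2 + M - Z), as defined above.

    x, y, z, m: numbers of carbon, hydrogen, oxygen and metal atoms;
    n: number of nitrogen atoms (used only to compute mol_wt when not given).
    """
    if mol_wt is None:
        mol_wt = (x * ATOMIC_MASS["C"] + y * ATOMIC_MASS["H"]
                  + z * ATOMIC_MASS["O"] + n * ATOMIC_MASS["N"])
    return -1600.0 / mol_wt * (2 * x + y / 2 + m - z)

# TNT, C7H5N3O6: about -74%, consistent with the -73.97% worked out above.
print(oxygen_balance(7, 5, 6, mol_wt=227.1))

# Nitroglycerine, C3H5N3O9: slightly oxygen-positive, about +3.5%.
print(oxygen_balance(3, 5, 9, n=3))
```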
Oxygen balance and power.
Because sensitivity, brisance, and strength are properties resulting from a complex explosive chemical reaction, a simple relationship such as oxygen balance cannot be depended upon to yield universally consistent results. When using oxygen balance to predict properties of one explosive relative to another, it is to be expected that one with an oxygen balance closer to zero will be the more brisant, powerful, and sensitive; however, many exceptions to this rule do exist.
One area in which oxygen balance can be applied is in the processing of mixtures of explosives. The family of explosives called amatols are mixtures of ammonium nitrate and TNT. Ammonium nitrate has an oxygen balance of +20% and TNT has an oxygen balance of −74%, so it would appear that the mixture yielding an oxygen balance of zero would also result in the best explosive properties. In actual practice a mixture of 80% ammonium nitrate and 20% TNT by weight yields an oxygen balance of +1%, the best properties of all mixtures, and an increase in strength of 30% over TNT.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{OB}\\% = \\frac{-1600}{\\text{Mol. wt. of compound}} \\times (2X + Y/2 + M - Z)"
},
{
"math_id": 1,
"text": "\\text{OB}\\% = \\frac{-1600}{227.1} \\times (14 + 2.5 - 6)"
}
] | https://en.wikipedia.org/wiki?curid=9480994 |
9481141 | Polyvinyl nitrate | Polyvinyl nitrate (abbreviated: PVN) is a high-energy polymer with the idealized formula of [CH2CH(ONO2)]. Polyvinyl nitrate is a long carbon chain (polymer) with nitrate groups <chem>(-O-NO2)</chem> bonded randomly along the chain. PVN is a white, fibrous solid, and is soluble in polar organic solvents such as acetone. PVN can be prepared by nitrating polyvinyl alcohol with an excess of nitric acid. Because PVN is also a nitrate ester such as nitroglycerin (a common explosive), it exhibits energetic properties and is commonly used in explosives and propellants.
Preparation.
Polyvinyl nitrate was first synthesized by submerging polyvinyl alcohol (PVA) in a solution of concentrated sulfuric and nitric acids. The PVA loses a hydrogen atom from its hydroxy group (deprotonation), while the nitric acid (HNO3) forms a nitronium ion (NO2+) in the sulfuric acid. The NO2+ attaches to the oxygen in the PVA and creates a nitrate group, producing polyvinyl nitrate. This method results in a low nitrogen content of 10% and an overall yield of 80%. It is inferior because PVA has a low solubility in sulfuric acid and is nitrated slowly, which meant that a large amount of sulfuric acid was needed relative to PVA and that the product was not the high-nitrogen PVN desirable for its energetic properties.
An improved method is where PVA is nitrated without sulfuric acid; however, when this solution is exposed to air, the PVA combusts. In this new method, either the PVA nitration is done in an inert gas (carbon dioxide or nitrogen) or the PVA powder is clumped into larger particles and submerged underneath the nitric acid to limit the amount of air exposure.
Currently, the most common method is when PVA powder is dissolved in acetic anhydride at -10°C. Then cooled nitric acid is slowly added. This produces a high nitrogen content PVN within about 5-7 hours. Because acetic anhydride was used as the solvent instead of sulfuric acid, the PVA will not combust when exposed to air.
Physical properties.
PVN is a white thermoplastic with a softening point of 40-50°C. The theoretical maximum nitrogen content of PVN is 15.73%. PVN is a polymer that has an atactic configuration, meaning the nitrate groups are randomly distributed along the main chain. Fibrous PVN increases in crystallinity as the nitrogen content increases, showing that the PVN molecules organize themselves more orderly as nitrogen percent increases. Intramolecularly, the geometry of the polymer is planar zigzag. The porous PVN can be gelatinized when added to acetone at room temperature. This creates a viscous slurry and loses its fibrous and porous nature; however, it retains most of its energetic properties.
Chemical properties.
Combustion.
Polyvinyl nitrate is a high-energy polymer due to the significant presence of <chem>O - NO2</chem> groups, similar to nitrocellulose and nitroglycerin. These nitrate groups have an activation energy of 53 kcal/mol and are the primary cause of PVN's high chemical potential energy. The complete combustion reaction of PVN assuming full nitration is:
<chem>2CH2CH(ONO2) + 5/2O2 -> 4CO2 + N2 + 3H2O</chem>
When burned, PVN samples with lower nitrogen content had a significantly higher heat of combustion, because they contained more hydrogen and generated more heat when oxygen was present. The heat of combustion was about 3,000 cal/g for 15.71% N and 3,700 cal/g for 11.76% N. Conversely, PVN samples with a higher nitrogen content had a significantly higher heat of explosion: the larger number of <chem>O - NO2</chem> groups supplies more oxygen, leading to more complete combustion and more heat generated when burned in inert or low-oxygen environments.
Stability.
Nitrate esters, in general, are unstable because of the weak <chem>N - O</chem> bond and tend to decompose at higher temperatures. Fibrous PVN is relatively stable at 80°C and is less stable as the nitrogen content increases. Gelatinized PVN is less stable than fibrous PVN.
Activation energy.
Ignition temperature is the temperature at which a substance combusts spontaneously, requiring no additional energy input beyond the temperature itself. This temperature can be used to determine the activation energy. For samples of varying nitrogen content, the ignition temperature decreases as the nitrogen percentage increases, showing that PVN becomes more ignitable as nitrogen content increases. Using the Semenov equation:
formula_0
where D is the ignition delay (the time it takes for a substance to ignite), E is the activation energy, R is the universal gas constant, T is absolute temperature, and C is a constant, dependent on the material.
The activation energy is greater than 13 kcal/mol, reaching 16 kcal/mol at 15.71% nitrogen (near the theoretical maximum); it varies considerably between different nitrogen concentrations, with no linear relationship between activation energy and the degree of nitration.
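For illustration, the linearization implied by the Semenov equation can be sketched in a few lines of Python. The temperature-delay values below are hypothetical and are generated directly from the equation in the form given above, solely to show that a straight-line fit of ln D against 1/T recovers the activation energy from the magnitude of the slope:

```python
import numpy as np

R = 8.314e-3  # universal gas constant, kJ/(mol*K)

# Hypothetical ignition-delay data generated from D = C * exp(-E / (R * T)),
# the Semenov form given above, with E = 60 kJ/mol (about 14 kcal/mol), C = 1e-3 s.
E_true, C = 60.0, 1e-3
T = np.array([450.0, 475.0, 500.0, 525.0, 550.0])   # temperatures, K
D = C * np.exp(-E_true / (R * T))                    # ignition delays, s

# Linearize: ln D = ln C - (E/R) * (1/T), so the slope of ln D versus 1/T is -E/R.
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
E_fit = -slope * R
print(f"recovered activation energy: {E_fit:.1f} kJ/mol")  # ≈ 60.0
```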
Impact sensitivity.
The height from which a mass must be dropped onto PVN to cause an explosion indicates the sensitivity of PVN to impact. As nitrogen content increases, fibrous PVN becomes more sensitive to impact. Gelatinized PVN is similar to fibrous PVN in impact sensitivity.
Applications.
Because of the nitrate groups of PVN, polyvinyl nitrate is mainly used for its explosive and energetic capabilities. Structurally, PVN is similar to nitrocellulose in that it is a polymer with several nitrate groups off the main chain, differing only in the backbone (carbon and cellulose, respectively). Because of this similarity, PVN is typically used in explosives and propellants as a binder. In explosives, a binder is used to form an explosive where the explosive materials are difficult to mold "(see Polymer-bonded explosive (PBX))". A common binder polymer is hydroxyl-terminated polybutadiene (HTPB) or glycidyl azide polymer (GAP). Moreover, the binder needs a plasticizer such as dioctyl adipate (DOP) or 2-nitrodiphenylamine (2-NDPA) to make the explosive more flexible. Polyvinyl nitrate combines the traits of both a binder and a plasticizer, as this polymer binds the explosive ingredients together and is flexible at its softening point (40-50°C). Moreover, PVN adds to the explosive's overall energetic potential due to its nitrate groups.
An example composition including polyvinyl nitrate is PVN, nitrocellulose and/or polyvinyl acetate, and 2-nitrodiphenylamine. This creates a moldable thermoplastic that can be combined with a powder containing nitrocellulose to create a cartridge case where the PVN composition acts as a propellant and assists as an explosive material.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D=Ce^{-E/RT}"
}
] | https://en.wikipedia.org/wiki?curid=9481141 |
9481422 | 24-cell honeycomb | In four-dimensional Euclidean geometry, the 24-cell honeycomb, or icositetrachoric honeycomb is a regular space-filling tessellation (or honeycomb) of 4-dimensional Euclidean space by regular 24-cells. It can be represented by Schläfli symbol {3,4,3,3}.
The dual tessellation by regular 16-cell honeycomb has Schläfli symbol {3,3,4,3}. Together with the tesseractic honeycomb (or 4-cubic honeycomb) these are the only regular tessellations of Euclidean 4-space.
Coordinates.
The 24-cell honeycomb can be constructed as the Voronoi tessellation of the D4 or F4 root lattice. Each 24-cell is then centered at a D4 lattice point, i.e. one of
formula_0
These points can also be described as Hurwitz quaternions with even square norm.
The vertices of the honeycomb lie at the deep holes of the D4 lattice. These are the Hurwitz quaternions with odd square norm.
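This description can be checked with a short computation. The following Python sketch (illustrative only) enumerates the nonzero D4 lattice vectors of smallest squared norms within a small box: the 24 vectors of squared norm 2 correspond to the 24 neighboring 24-cells that share an octahedral cell (and to the kissing number 24 discussed below), while the 24 vectors of squared norm 4 correspond to the neighbors sharing only a vertex.

```python
from itertools import product

# D4 lattice points are the integer vectors with even coordinate sum.
# Group the nonzero points of a small box by squared norm.
shells = {}
for v in product(range(-2, 3), repeat=4):
    if sum(v) % 2 == 0 and any(v):
        shells.setdefault(sum(c * c for c in v), []).append(v)

print(len(shells[2]))   # 24: vectors such as (1, 1, 0, 0) and its sign/coordinate
                        # permutations -- one per neighboring 24-cell sharing an
                        # octahedral cell
print(len(shells[4]))   # 24: (±2, 0, 0, 0) and (±1, ±1, ±1, ±1) -- one per
                        # neighboring 24-cell sharing only a vertex
```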
It can be constructed as a birectified tesseractic honeycomb, by taking a tesseractic honeycomb and placing vertices at the centers of all the square faces. The 24-cell facets exist between these vertices as "rectified 16-cells". If the coordinates of the tesseractic honeycomb are integers (i,j,k,l), the "birectified tesseractic honeycomb" vertices can be placed at all permutations of half-unit shifts in two of the four dimensions, thus: (i+1/2,j+1/2,k,l), (i+1/2,j,k+1/2,l), (i+1/2,j,k,l+1/2), (i,j+1/2,k+1/2,l), (i,j+1/2,k,l+1/2), (i,j,k+1/2,l+1/2).
Configuration.
Each 24-cell in the 24-cell honeycomb has 24 neighboring 24-cells. With each neighbor it shares exactly one octahedral cell.
It has 24 more neighbors such that with each of these it shares a single vertex.
It has no neighbors with which it shares only an edge or only a face.
The vertex figure of the 24-cell honeycomb is a tesseract (4-dimensional cube). So there are 16 edges, 32 triangles, 24 octahedra, and 8 24-cells meeting at every vertex. The edge figure is a tetrahedron, so there are 4 triangles, 6 octahedra, and 4 24-cells surrounding every edge. Finally, the face figure is a triangle, so there are 3 octahedra and 3 24-cells meeting at every face.
Cross-sections.
One way to visualize a 4-dimensional figure is to consider various 3-dimensional cross-sections. That is, the intersection of various hyperplanes with the figure in question. Applying this technique to the 24-cell honeycomb gives rise to various 3-dimensional honeycombs with varying degrees of regularity.
A "vertex-first" cross-section uses some hyperplane orthogonal to a line joining opposite vertices of one of the 24-cells. For instance, one could take any of the coordinate hyperplanes in the coordinate system given above (i.e. the planes determined by "x""i" = 0). The cross-section of {3,4,3,3} by one of these hyperplanes gives a rhombic dodecahedral honeycomb. Each of the rhombic dodecahedra corresponds to a maximal cross-section of one of the 24-cells intersecting the hyperplane (the center of each such (4-dimensional) 24-cell lies in the hyperplane). Accordingly, the rhombic dodecahedral honeycomb is the Voronoi tessellation of the D3 root lattice (a face-centered cubic lattice). Shifting this hyperplane halfway to one of the vertices (e.g. "x""i" = ) gives rise to a regular cubic honeycomb. In this case the center of each 24-cell lies off the hyperplane. Shifting again, so the hyperplane intersects the vertex, gives another rhombic dodecahedral honeycomb but with new 24-cells (the former ones having shrunk to points). In general, for any integer "n", the cross-section through "x""i" = "n" is a rhombic dodecahedral honeycomb, and the cross-section through "x""i" = "n" + is a cubic honeycomb. As the hyperplane moves through 4-space, the cross-section morphs between the two periodically.
A "cell-first" cross-section uses some hyperplane parallel to one of the octahedral cells of a 24-cell. Consider, for instance, some hyperplane orthogonal to the vector (1,1,0,0). The cross-section of {3,4,3,3} by this hyperplane is a rectified cubic honeycomb. Each cuboctahedron in this honeycomb is a maximal cross-section of a 24-cell whose center lies in the plane. Meanwhile, each octahedron is a boundary cell of a (4-dimensional) 24-cell whose center lies off the plane. Shifting this hyperplane till it lies halfway between the center of a 24-cell and the boundary, one obtains a bitruncated cubic honeycomb. The cuboctahedra have shrunk, and the octahedra have grown until they are both truncated octahedra. Shifting again, so the hyperplane intersects the boundary of the central 24-cell gives a rectified cubic honeycomb again, the cuboctahedra and octahedra having swapped positions. As the hyperplane sweeps through 4-space, the cross-section morphs between these two honeycombs periodically.
Kissing number.
If a 3-sphere is inscribed in each hypercell of this tessellation, the resulting arrangement is the densest known regular sphere packing in four dimensions, with the kissing number 24. The packing density of this arrangement is
formula_1
Each inscribed 3-sphere kisses 24 others at the centers of the octahedral facets of its 24-cell, since each such octahedral cell is shared with an adjacent 24-cell. In a unit-edge-length tessellation, the diameter of the spheres (the distance between the centers of kissing spheres) is √2.
Just outside this surrounding shell of 24 kissing 3-spheres is another less dense shell of 24 3-spheres which do not kiss each other or the central 3-sphere; they are inscribed in 24-cells with which the central 24-cell shares only a single vertex (rather than an octahedral cell). The center-to-center distance between one of these spheres and any of its shell neighbors or the central sphere is 2.
Alternatively, the same sphere packing arrangement with kissing number 24 can be carried out with smaller 3-spheres of edge-length-diameter, by locating them at the centers and the vertices of the 24-cells. (This is equivalent to locating them at the vertices of a 16-cell honeycomb of unit-edge-length.) In this case the central 3-sphere kisses 24 others at the centers of the cubical facets of the three tesseracts inscribed in the 24-cell. (This is the unique body-centered cubic packing of edge-length spheres of the tesseractic honeycomb.)
Just outside this shell of kissing 3-spheres of diameter 1 is another less dense shell of 24 non-kissing 3-spheres of diameter 1; they are centered in the adjacent 24-cells with which the central 24-cell shares an octahedral facet. The center-to-center distance between one of these spheres and any of its shell neighbors or the central sphere is √2.
Symmetry constructions.
There are five different Wythoff constructions of this tessellation as a uniform polytope. They are geometrically identical to the regular form, but the symmetry differences can be represented by colored 24-cell facets. In all cases, eight 24-cells meet at each vertex, but the vertex figures have different symmetry generators.
See also.
Other uniform honeycombs in 4-space:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\left\\{(x_i)\\in\\mathbb Z^4 : {\\textstyle\\sum_i} x_i \\equiv 0\\;(\\mbox{mod }2)\\right\\}."
},
{
"math_id": 1,
"text": "\\frac{\\pi^2}{16}\\cong0.61685."
}
] | https://en.wikipedia.org/wiki?curid=9481422 |
9482601 | Liouville's theorem (differential algebra) | Says when antiderivatives of elementary functions can be expressed as elementary functions
In mathematics, Liouville's theorem, originally formulated by French mathematician Joseph Liouville in 1833 to 1841, places an important restriction on antiderivatives that can be expressed as elementary functions.
The antiderivatives of certain elementary functions cannot themselves be expressed as elementary functions. These are called nonelementary antiderivatives. A standard example of such a function is formula_0 whose antiderivative is (with a multiplier of a constant) the error function, familiar from statistics. Other examples include the functions formula_1 and formula_2
Liouville's theorem states that elementary antiderivatives, if they exist, are in the same differential field as the function, plus possibly a finite number of applications of the logarithm function.
Definitions.
For any differential field formula_3 the <templatestyles src="Template:Visible anchor/styles.css" />constants of formula_4 is the subfield
formula_5
Given two differential fields formula_4 and formula_6 formula_7 is called a <templatestyles src="Template:Visible anchor/styles.css" />logarithmic extension of formula_4 if formula_7 is a simple transcendental extension of formula_4 (that is, formula_8 for some transcendental formula_9) such that
formula_10
This has the form of a logarithmic derivative. Intuitively, one may think of formula_9 as the logarithm of some element formula_11 of formula_3 in which case, this condition is analogous to the ordinary chain rule. However, formula_4 is not necessarily equipped with a unique logarithm; one might adjoin many "logarithm-like" extensions to formula_12 Similarly, an <templatestyles src="Template:Visible anchor/styles.css" />exponential extension is a simple transcendental extension that satisfies
formula_13
With the above caveat in mind, this element may be thought of as an exponential of an element formula_11 of formula_12 Finally, formula_7 is called an <templatestyles src="Template:Visible anchor/styles.css" />elementary differential extension of formula_4 if there is a finite chain of subfields from formula_4 to formula_7 where each extension in the chain is either algebraic, logarithmic, or exponential.
Basic theorem.
Suppose formula_4 and formula_7 are differential fields with formula_14 and that formula_7 is an elementary differential extension of formula_12 Suppose formula_15 and formula_16 satisfy formula_17 (in words, suppose that formula_7 contains an antiderivative of formula_18).
Then there exist formula_19 and formula_20 such that
formula_21
In other words, the only functions that have "elementary antiderivatives" (that is, antiderivatives living in, at worst, an elementary differential extension of formula_4) are those with this form. Thus, on an intuitive level, the theorem states that the only elementary antiderivatives are the "simple" functions plus a finite number of logarithms of "simple" functions.
A proof of Liouville's theorem can be found in section 12.4 of Geddes, et al. See Lützen's scientific bibliography for a sketch of Liouville's original proof (Chapter IX. Integration in Finite Terms), its modern exposition and algebraic treatment (ibid. §61).
Examples.
As an example, the field formula_22 of rational functions in a single variable has a derivation given by the standard derivative with respect to that variable. The constants of this field are just the complex numbers formula_23 that is, formula_24
The function formula_25 which exists in formula_26 does not have an antiderivative in formula_27 Its antiderivatives formula_28 do, however, exist in the logarithmic extension formula_29
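Written out in the notation of the basic theorem above (an illustrative restatement), this example uses a single logarithmic term:

```latex
% The antiderivative of 1/x in the form required by Liouville's theorem:
% n = 1, c_1 = 1, f_1 = x, s = 0.
\frac{1}{x} \;=\; c_1\,\frac{D f_1}{f_1} + D s
           \;=\; 1 \cdot \frac{D x}{x} + D(0).
```

Thus the antiderivative of 1/x requires exactly one logarithm of an element of the field of rational functions, and nothing more.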
Likewise, the function formula_30 does not have an antiderivative in formula_27 Its antiderivatives formula_31 do not seem to satisfy the requirements of the theorem, since they are not (apparently) sums of rational functions and logarithms of rational functions. However, a calculation with Euler's formula formula_32 shows that in fact the antiderivatives can be written in the required manner (as logarithms of rational functions).
formula_33
Relationship with differential Galois theory.
Liouville's theorem is sometimes presented as a theorem in differential Galois theory, but this is not strictly true. The theorem can be proved without any use of Galois theory. Furthermore, the Galois group of a simple antiderivative is either trivial (if no field extension is required to express it), or is simply the additive group of the constants (corresponding to the constant of integration). Thus, an antiderivative's differential Galois group does not encode enough information to determine if it can be expressed using elementary functions, the major condition of Liouville's theorem.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "e^{-x^2},"
},
{
"math_id": 1,
"text": "\\frac{\\sin (x)}{x}"
},
{
"math_id": 2,
"text": "x^x."
},
{
"math_id": 3,
"text": "F,"
},
{
"math_id": 4,
"text": "F"
},
{
"math_id": 5,
"text": "\\operatorname{Con}(F) = \\{ f \\in F : D f = 0\\}."
},
{
"math_id": 6,
"text": "G,"
},
{
"math_id": 7,
"text": "G"
},
{
"math_id": 8,
"text": "G = F(t)"
},
{
"math_id": 9,
"text": "t"
},
{
"math_id": 10,
"text": "D t = \\frac{D s}{s} \\quad \\text{ for some } s \\in F."
},
{
"math_id": 11,
"text": "s"
},
{
"math_id": 12,
"text": "F."
},
{
"math_id": 13,
"text": " \\frac{D t}{t} = D s \\quad \\text{ for some } s \\in F."
},
{
"math_id": 14,
"text": "\\operatorname{Con}(F) = \\operatorname{Con}(G),"
},
{
"math_id": 15,
"text": "f \\in F"
},
{
"math_id": 16,
"text": "g \\in G"
},
{
"math_id": 17,
"text": "D g = f"
},
{
"math_id": 18,
"text": "f"
},
{
"math_id": 19,
"text": "c_1, \\ldots, c_n \\in \\operatorname{Con}(F)"
},
{
"math_id": 20,
"text": "f_1, \\ldots, f_n, s \\in F"
},
{
"math_id": 21,
"text": "f = c_1 \\frac{D f_1}{f_1} + \\dotsb + c_n \\frac{D f_n}{f_n} + D s."
},
{
"math_id": 22,
"text": "F := \\Complex(x)"
},
{
"math_id": 23,
"text": "\\Complex;"
},
{
"math_id": 24,
"text": "\\operatorname{Con}(\\Complex(x)) = \\Complex,"
},
{
"math_id": 25,
"text": "f := \\tfrac{1}{x},"
},
{
"math_id": 26,
"text": "\\Complex(x),"
},
{
"math_id": 27,
"text": "\\Complex(x)."
},
{
"math_id": 28,
"text": "\\ln x + C"
},
{
"math_id": 29,
"text": "\\Complex(x, \\ln x)."
},
{
"math_id": 30,
"text": "\\tfrac{1}{x^2+1}"
},
{
"math_id": 31,
"text": "\\tan^{-1}(x) + C"
},
{
"math_id": 32,
"text": "e^{i \\theta} = \\cos \\theta + i \\sin \\theta"
},
{
"math_id": 33,
"text": "\\begin{align}\ne^{2i \\theta} & \n= \\frac{e^{i \\theta}}{e^{-i \\theta}} \n= \\frac{\\cos \\theta + i \\sin \\theta}{\\cos \\theta - i \\sin \\theta} \n= \\frac{1 + i \\tan \\theta}{1 - i \\tan \\theta} \\\\[8pt]\n\\theta & \n= \\frac{1}{2i} \\ln \\left(\\frac{1 + i \\tan \\theta}{1 - i \\tan \\theta}\\right) \\\\[8pt]\n\\tan^{-1} x & \n= \\frac{1}{2i} \\ln \\left(\\frac{1+ix}{1-ix}\\right)\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=9482601 |
948340 | 185 (number) | Natural number
185 (one hundred [and] eighty-five) is the natural number following 184 and preceding 186.
In mathematics.
There are 185 different directed graphs on four unlabeled vertices that have at least one sink vertex (a vertex with no outgoing edges), 185 ways of permuting the squares of a formula_0 grid of squares in such a way that each square is one unit away from its original position horizontally, vertically, or diagonally, and 185 matroids on five labeled elements in which each element participates in at least one basis.
The Spiral of Theodorus is formed by unit-length line segments that, together with the center point of the spiral, form right triangles. 185 of these right triangles fit within the first four turns of this spiral.
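The count can be checked numerically. In the spiral, the k-th right triangle has legs of length 1 and √k, so it subtends an angle of arctan(1/√k) at the center. The following Python sketch (an illustrative check, taking "fit within the first four turns" to mean that the accumulated winding angle stays within 8π radians) counts how many such triangles accumulate before four full turns are completed:

```python
import math

total_angle = 0.0   # winding angle accumulated by the spiral, in radians
count = 0
# The k-th triangle contributes an angle of arctan(1/sqrt(k)) at the centre.
while total_angle + math.atan(1 / math.sqrt(count + 1)) <= 4 * 2 * math.pi:
    count += 1
    total_angle += math.atan(1 / math.sqrt(count))
print(count)  # 185
```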
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2\\times 4"
}
] | https://en.wikipedia.org/wiki?curid=948340 |
9483430 | Optical correlator | An optical correlator is an optical computer for comparing two signals by utilising the Fourier transforming properties of a lens. It is commonly used in optics for target tracking and identification.
Introduction.
The correlator has an input signal which is multiplied by some filter in the Fourier domain. An example filter is the matched filter which uses the cross correlation of the two signals.
The cross correlation or correlation plane, formula_0 of a 2D signal formula_1 with formula_2 is
formula_3
This can be re-expressed in Fourier space as
formula_4
where the capital letters denote the Fourier transform of what the lower case letter denotes. So the correlation can then be calculated by inverse Fourier transforming the result.
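As an illustration of this Fourier-domain computation, the short Python sketch below cross-correlates a scene with a reference pattern using FFTs and a matched filter; the array sizes and the embedded test patch are arbitrary choices made only for the demonstration.

```python
import numpy as np

def correlate(scene, ref):
    """Correlation plane via the Fourier domain: multiply the scene spectrum
    by the conjugated reference spectrum (a matched filter), then invert."""
    C = np.fft.fft2(scene) * np.conj(np.fft.fft2(ref))
    return np.abs(np.fft.ifft2(C))

rng = np.random.default_rng(0)
scene = rng.normal(size=(64, 64))
ref = np.zeros_like(scene)
ref[:8, :8] = scene[20:28, 40:48]          # embed a copy of part of the scene

plane = correlate(scene, ref)
# The brightest point of the correlation plane gives the (row, column) shift
# of the reference pattern inside the scene -- here it should be (20, 40).
print(np.unravel_index(np.argmax(plane), plane.shape))
```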
Implementation.
According to Fresnel diffraction theory, a convex lens of focal length formula_5 will produce, at a distance formula_5 behind the lens, the exact Fourier transform of an object placed a distance formula_5 in front of the lens. So that complex amplitudes are multiplied, the light source must be coherent; it typically comes from a laser. The input signal and filter are commonly written onto a spatial light modulator (SLM).
A typical arrangement is the 4f correlator. The input signal is written to an SLM which is illuminated with a laser. This is Fourier transformed with a lens and this is then modulated with a second SLM containing the filter. The resultant is again Fourier transformed with a second lens and the correlation result is captured on a camera.
Filter design.
Many filters have been designed to be used with an optical correlator. Some have been proposed to address hardware limitations, others were developed to optimize a merit function or to be invariant under a certain transformation.
Matched filter.
The matched filter maximizes the signal-to-noise ratio and is simply obtained by using as a filter the Fourier transform of the reference signal formula_6.
formula_7
Phase-only filter.
The phase-only filter is easier to implement due to the limitations of many SLMs, and it has been shown to be more discriminating than the matched filter.
formula_8
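Continuing the sketch above, the two filters can be built from the Fourier transform of the reference signal as follows; the small constant added to the denominator is only a guard against division by zero and is not part of the definition.

```python
import numpy as np

def matched_filter(ref):
    """Matched filter: the Fourier transform of the reference, H = R."""
    return np.fft.fft2(ref)

def phase_only_filter(ref, eps=1e-12):
    """Phase-only filter: H = R / |R|, keeping only the phase of R."""
    R = np.fft.fft2(ref)
    return R / (np.abs(R) + eps)
```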
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "c(x,y)"
},
{
"math_id": 1,
"text": "i(x,y)"
},
{
"math_id": 2,
"text": "h(x,y)"
},
{
"math_id": 3,
"text": "c(x,y)=i(x,y) \\otimes h^{*}(-x,-y)"
},
{
"math_id": 4,
"text": " C(\\xi,\\eta)=I(\\xi,\\eta) H^{*}(-\\xi,-\\eta) "
},
{
"math_id": 5,
"text": "f"
},
{
"math_id": 6,
"text": "r(x,y)"
},
{
"math_id": 7,
"text": " H(\\xi,\\eta) = R(\\xi,\\eta)"
},
{
"math_id": 8,
"text": " H(\\xi,\\eta) = \\frac{R(\\xi,\\eta)}{ \\left\\vert R(\\xi,\\eta) \\right\\vert}"
}
] | https://en.wikipedia.org/wiki?curid=9483430 |
9485301 | Soliton (optics) | Term in optics
In optics, the term soliton is used to refer to any optical field that does not change during propagation because of a delicate balance between nonlinear and dispersive effects in the medium. There are two main kinds of solitons: spatial solitons and temporal solitons.
Spatial solitons.
In order to understand how a spatial soliton can exist, we first have to consider a simple convex lens. As shown in the picture on the right, an optical field approaches the lens and is then focused. The effect of the lens is to introduce a non-uniform phase change that causes focusing. This phase change is a function of space and can be represented with formula_0, whose shape is approximately represented in the picture.
The phase change can be expressed as the product of the phase constant and the width of the path the field has covered. We can write it as:
formula_1
where formula_2 is the width of the lens, varying from point to point with the same shape as formula_0 because formula_3 and "n" are constants. In other words, in order to get a focusing effect we just have to introduce a phase change of such a shape, but we are not obliged to change the width. If we leave the width "L" fixed at each point, but change the value of the refractive index formula_4, we will get exactly the same effect, but with a completely different approach.
This has application in graded-index fibers: the change in the refractive index introduces a focusing effect that can balance the natural diffraction of the field. If the two effects balance each other perfectly, then we have a confined field propagating within the fiber.
Spatial solitons are based on the same principle: the Kerr effect introduces a self-phase modulation that changes the refractive index according to the intensity:
formula_5
if formula_6 has a shape similar to the one shown in the figure, then we have created the phase behavior we wanted and the field will show a self-focusing effect. In other words, the field creates a fiber-like guiding structure while propagating. If the field creates a fiber and it is the mode of such a fiber at the same time, it means that the focusing nonlinear and diffractive linear effects are perfectly balanced and the field will propagate forever without changing its shape (as long as the medium does not change and if we can neglect losses, obviously). In order to have a self-focusing effect, we must have a positive formula_7, otherwise we will get the opposite effect and we will not notice any nonlinear behavior.
The optical waveguide the soliton creates while propagating is not only a mathematical model, but it actually exists and can be used to guide other waves at different frequencies. This way it is possible to let light interact with light at different frequencies (this is impossible in linear media).
Proof.
An electric field is propagating in a medium showing optical Kerr effect, so the refractive index is given by:
formula_8
We recall that the relationship between irradiance and electric field is (in the complex representation)
formula_9
where formula_10 and formula_11 is the impedance of free space, given by
formula_12
The field is propagating in the formula_13 direction with a phase constant formula_14. From now on, we will ignore any dependence on the "y" axis, assuming that the field is infinite in that direction. Then the field can be expressed as:
formula_15
where formula_16 is the maximum amplitude of the field and formula_17 is a dimensionless normalized function (so that its maximum value is 1) that represents the shape of the electric field among the "x" axis. In general it depends on "z" because fields change their shape while propagating.
Now we have to solve the Helmholtz equation:
formula_18
where it was pointed out clearly that the refractive index (thus the phase constant) depends on intensity. If we replace the expression of the electric field in the equation, assuming that the envelope formula_17 changes slowly while propagating, i.e.
formula_19
the equation becomes:
formula_20
Let us introduce an approximation that is valid because the nonlinear effects are always much smaller than the linear ones:
formula_21
now we express the intensity in terms of the electric field:
formula_22
the equation becomes:
formula_23
We will now assume formula_24 so that the nonlinear effect will cause self focusing. In order to make this evident, we will write in the equation formula_25
Let us now define some parameters and replace them in the equation:
The equation becomes:
formula_32
this is a common equation known as the nonlinear Schrödinger equation. From this form, we can understand the physical meaning of the parameter "N":
For formula_37 the solution of the equation is simple and it is the fundamental soliton:
formula_38
where sech is the hyperbolic secant. It still depends on "z", but only in phase, so the shape of the field will not change during propagation.
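A quick numerical check of this solution: inserting formula_38 into the equation, the "z"-dependence cancels and what remains is the stationary equation (1/2)a'' - (1/2)a + a^3 = 0, which the hyperbolic secant satisfies. The short sketch below verifies this with finite differences (the grid and spacing are arbitrary choices).

```python
import numpy as np

# Verify that u(xi) = sech(xi) solves (1/2) u'' - (1/2) u + u^3 = 0,
# which is what the N = 1 nonlinear Schroedinger equation reduces to
# for a(xi, zeta) = u(xi) * exp(i zeta / 2).
xi = np.linspace(-10.0, 10.0, 4001)
h = xi[1] - xi[0]
u = 1.0 / np.cosh(xi)
u_xx = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2      # central second difference
residual = 0.5 * u_xx - 0.5 * u[1:-1] + u[1:-1] ** 3
print(np.max(np.abs(residual)))   # tiny value: zero up to finite-difference error
```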
For formula_39 it is still possible to express the solution in a closed form, but it has a more complicated form:
formula_40
It does change its shape during propagation, but it is a periodic function of "z" with period formula_41.
For soliton solutions, "N" must be an integer and it is said to be the "order" of the soliton. For formula_42 an exact closed form solution also exists; it has an even more complicated form, but the same periodicity occurs. In fact, all solitons with formula_43 have the period formula_41. Their shape can easily be expressed only immediately after generation:
formula_44
on the right there is the plot of the second order soliton: at the beginning it has a shape of a sech, then the maximum amplitude increases and then comes back to the sech shape. Since high intensity is necessary to generate solitons, if the field increases its intensity even further the medium could be damaged.
The condition to be satisfied if we want to generate a fundamental soliton is obtained by expressing "N" in terms of all the known parameters and then setting formula_45:
formula_46
that, in terms of maximum irradiance value becomes:
formula_47
In most of the cases, the two variables that can be changed are the maximum intensity formula_48 and the pulse width formula_27.
Curiously, higher-order solitons can attain complicated shapes before returning exactly to their initial shape at the end of the soliton period. In the picture of various solitons, the spectrum (left) and time domain (right) are shown at varying distances of propagation (vertical axis) in an idealized nonlinear medium. This shows how a laser pulse might behave as it travels in a medium with the properties necessary to support fundamental solitons. In practice, in order to reach the very high peak intensity needed to achieve nonlinear effects, laser pulses may be coupled into optical fibers such as photonic-crystal fiber with highly confined propagating modes. Those fibers have more complicated dispersion and other characteristics which depart from the analytical soliton parameters.
Generation of spatial solitons.
The first experiment on spatial optical solitons was reported in 1974 by Ashkin and Bjorkholm in a cell filled with sodium vapor. The field was then revisited in experiments at Limoges University in liquid carbon disulphide and expanded in the early '90s with the first observation of solitons in photorefractive crystals, glass, semiconductors and polymers. During the last decades numerous findings have been reported in various materials, for solitons of different dimensionality, shape, spiralling, colliding, fusing, splitting, in homogeneous media, periodic systems, and waveguides. Spatial solitons are also referred to as self-trapped optical beams and their formation is normally also accompanied by a self-written waveguide. In nematic liquid crystals, spatial solitons are also referred to as nematicons.
Transverse-mode-locking solitons.
Localized excitations in lasers may appear due to synchronization of transverse modes.
In a confocal formula_49 laser cavity, the degenerate transverse modes with a single longitudinal mode at wavelength formula_50, mixed in a nonlinear gain disc formula_51 (located at formula_52) and a saturable absorber disc formula_53 (located at formula_54) of diameter formula_55, are capable of producing spatial solitons of hyperbolic formula_56 form:
formula_57
in Fourier-conjugated planes formula_52 and formula_54.
Temporal solitons.
The main problem that limits transmission bit rate in optical fibres is group velocity dispersion. This is because generated impulses have a non-zero bandwidth and the medium they are propagating through has a refractive index that depends on frequency (or wavelength). This effect is represented by the "group delay dispersion parameter" "D"; using it, it is possible to calculate exactly how much the pulse will widen:
formula_58
where "L" is the length of the fibre and formula_59 is the bandwidth in terms of wavelength. The approach in modern communication systems is to balance such a dispersion with other fibers having "D" with different signs in different parts of the fibre: this way the pulses keep on broadening and shrinking while propagating. With temporal solitons it is possible to remove such a problem completely.
Consider the picture on the right. On the left there is a standard Gaussian pulse, which is the envelope of the field oscillating at a defined frequency. We assume that the frequency remains perfectly constant during the pulse.
Now we let this pulse propagate through a fibre with formula_60, so it will be affected by group velocity dispersion. For this sign of "D", the dispersion is anomalous, so the higher frequency components will propagate a little bit faster than the lower frequencies, thus arriving earlier at the end of the fiber. The overall signal we get is a wider chirped pulse, shown in the upper right of the picture.
Now let us assume we have a medium that shows only nonlinear Kerr effect but its refractive index does not depend on frequency: such a medium does not exist, but it's worth considering it to understand the different effects.
The phase of the field is given by:
formula_61
the frequency (according to its definition) is given by:
formula_62
this situation is represented in the picture on the left. At the beginning of the pulse the frequency is lower, at the end it's higher. After the propagation through our ideal medium, we will get a chirped pulse with no broadening because we have neglected dispersion.
Coming back to the first picture, we see that the two effects introduce a change in frequency in two different opposite directions. It is possible to make a pulse so that the two effects will balance each other. Considering higher frequencies, linear dispersion will tend to let them propagate faster, while nonlinear Kerr effect will slow them down. The overall effect will be that the pulse does not change while propagating: such pulses are called temporal solitons.
History of temporal solitons.
In 1973, Akira Hasegawa and Fred Tappert of AT&T Bell Labs were the first to suggest that solitons could exist in optical fibres, due to a balance between self-phase modulation and anomalous dispersion.
Also in 1973 Robin Bullough made the first mathematical report of the existence of optical solitons. He also proposed the idea of a soliton-based transmission system to increase performance of optical telecommunications.
Solitons in a fibre optic system are described by the Manakov equations.
In 1987, P. Emplit, J.P. Hamaide, F. Reynaud, C. Froehly and A. Barthelemy, from the Universities of Brussels and Limoges, made the first experimental observation of the propagation of a dark soliton, in an optical fiber.
In 1988, Linn Mollenauer and his team transmitted soliton pulses over 4,000 kilometres using a phenomenon called the Raman effect, named for the Indian scientist Sir C. V. Raman who first described it in the 1920s, to provide optical gain in the fibre.
In 1991, a Bell Labs research team transmitted solitons error-free at 2.5 gigabits over more than 14,000 kilometres, using erbium optical fibre amplifiers (spliced-in segments of optical fibre containing the rare earth element erbium). Pump lasers, coupled to the optical amplifiers, activate the erbium, which energizes the light pulses.
In 1998, Thierry Georges and his team at France Télécom R&D Centre, combining optical solitons of different wavelengths (wavelength division multiplexing), demonstrated a data transmission of 1 terabit per second (1,000,000,000,000 units of information per second).
In 2020, "Optics Communications" reported that a Japanese team from MEXT had achieved optical circuit switching with a bandwidth of up to 90 Tbps (terabits per second) (Optics Communications, Volume 466, 1 July 2020, 125677).
Proof for temporal solitons.
An electric field is propagating in a medium showing optical Kerr effect through a guiding structure (such as an optical fibre) that limits the power on the "xy" plane. If the field is propagating towards "z" with a phase constant formula_63, then it can be expressed in the following form:
formula_64
where formula_16 is the maximum amplitude of the field, formula_65 is the envelope that shapes the impulse in the time domain; in general it depends on "z" because the impulse can change its shape while propagating; formula_66 represents the shape of the field on the "xy" plane, and it does not change during propagation because we have assumed the field is guided. Both "a" and "f" are normalized dimensionless functions whose maximum value is 1, so that formula_16 really represents the field amplitude.
Since in the medium there is a dispersion we can not neglect, the relationship between the electric field and its polarization is given by a convolution integral. Anyway, using a representation in the Fourier domain, we can replace the convolution with a simple product, thus using standard relationships that are valid in simpler media. We Fourier-transform the electric field using the following definition:
formula_67
Using this definition, a derivative in the time domain corresponds to a product in the Fourier domain:
formula_68
the complete expression of the field in the frequency domain is:
formula_69
Now we can solve Helmholtz equation in the frequency domain:
formula_70
we decide to express the phase constant with the following notation:
formula_71
where we assume that formula_72 (the sum of the linear dispersive component and the non-linear part) is a small perturbation, i.e. formula_73. The phase constant can have any complicated behaviour, but we can represent it with a Taylor series centred on formula_74:
formula_75
where, as known:
formula_76
we put the expression of the electric field in the equation and make some calculations. If we assume the slowly varying envelope approximation:
formula_77
we get:
formula_78
we are ignoring the behavior in the "xy" plane, because it is already known and given by formula_66.
We make a small approximation, as we did for the spatial soliton:
formula_79
replacing this in the equation we get simply:
formula_80.
Now we want to come back in the time domain. Expressing the products by derivatives we get the duality:
formula_81
we can write the non-linear component in terms of the irradiance or amplitude of the field:
formula_82
for duality with the spatial soliton, we define:
formula_83
and this symbol has the same meaning of the previous case, even if the context is different. The equation becomes:
formula_84
We know that the impulse is propagating along the "z" axis with a group velocity given by formula_85, so we are not interested in it because we just want to know how the pulse changes its shape while propagating. We decide to study the impulse shape, i.e. the envelope function "a"(·) using a reference that is moving with the field at the same velocity. Thus we make the substitution
formula_86
and the equation becomes:
formula_87
We now further assume that the medium in which the field is propagating shows "anomalous dispersion", i.e. formula_88 or, in terms of the group delay dispersion parameter, formula_89. We make this more evident by writing formula_90 in the equation. Let us now define the following parameters (the duality with the previous case is evident):
formula_91
replacing those in the equation we get:
formula_92
that is "exactly" the same equation we have obtained in the previous case. The first order soliton is given by:
formula_93
the same considerations we have made are valid in this case. The condition "N" = 1 becomes a condition on the amplitude of the electric field:
formula_94
or, in terms of irradiance:
formula_95
or we can express it in terms of power if we introduce an effective area formula_96 defined so that formula_97:
formula_98
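For a feeling of the orders of magnitude involved, the sketch below evaluates this peak power with typical (assumed) parameters of a standard silica fibre near 1550 nm; none of the numerical values are prescribed by the derivation above.

```python
import math

# Order-of-magnitude estimate of the fundamental-soliton peak power
# P = |beta2| * A_eff / (T0^2 * n2 * k0), using assumed typical fibre values.
beta2 = 20e-24 / 1e3        # |beta2| ~ 20 ps^2/km, converted to s^2/m
A_eff = 80e-12              # effective area ~ 80 um^2, in m^2
n2 = 2.6e-20                # Kerr coefficient of silica, m^2/W
k0 = 2 * math.pi / 1.55e-6  # vacuum wavenumber at 1550 nm, 1/m
T0 = 5e-12                  # pulse width parameter, 5 ps

P = beta2 * A_eff / (T0**2 * n2 * k0)
print(round(P, 2), "W")     # about 0.6 W for these numbers
```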
Stability of solitons.
We have described what optical solitons are and, using mathematics, we have seen that, if we want to create them, we have to create a field with a particular shape (just sech for the first order) with a particular power related to the duration of the impulse. But what if we are a bit wrong in creating such impulses? Adding small perturbations to the equations and solving them numerically, it is possible to show that mono-dimensional solitons are stable. They are often referred to as (1 + 1) "D" "solitons", meaning that they are limited in one dimension ("x" or "t", as we have seen) and propagate in another one ("z").
If we create such a soliton using slightly wrong power or shape, then it will adjust itself until it reaches the standard "sech" shape with the right power. Unfortunately this is achieved at the expense of some power loss, that can cause problems because it can generate another non-soliton field propagating together with the field we want. Mono-dimensional solitons are very stable: for example, if formula_99 we will generate a first order soliton anyway; if "N" is greater we'll generate a higher order soliton, but the focusing it does while propagating may cause high power peaks damaging the media.
The only way to create a (1 + 1) "D" spatial soliton is to limit the field on the "y" axis using a dielectric slab, then limiting the field on "x" using the soliton.
On the other hand, (2 + 1) "D" spatial solitons are unstable, so any small perturbation (due to noise, for example) can cause the soliton to diffract as a field in a linear medium or to collapse, thus damaging the material. It is possible to create stable (2 + 1) "D" spatial solitons using saturating nonlinear media, where the Kerr relationship formula_8 is valid until it reaches a maximum value. Working close to this saturation level makes it possible to create a stable soliton in a three-dimensional space.
If we consider the propagation of shorter (temporal) light pulses or over a longer distance, we need to consider higher-order corrections and
therefore the pulse carrier envelope is governed by the "higher-order nonlinear Schrödinger equation" (HONSE) for which there are some specialized (analytical) soliton solutions.
Effect of power losses.
As we have seen, in order to create a soliton it is necessary to have the right power when it is generated. If there are no losses in the medium, then we know that the soliton will keep on propagating forever without changing shape (1st order) or changing its shape periodically (higher orders). Unfortunately any medium introduces losses, so the actual behaviour of power will be in the form:
formula_100
this is a serious problem for temporal solitons propagating in fibers for several kilometers. Consider what happens for the temporal soliton; the generalization to spatial ones is immediate. We have proved that the relationship between power formula_101 and impulse length formula_102 is:
formula_98
if the power changes, the only thing that can change in the second part of the relationship is formula_102. If we add losses to the power and solve the relationship in terms of formula_102 we get:
formula_103
the width of the impulse grows exponentially to balance the losses! This relationship is true as long as the soliton exists, i.e. as long as this perturbation is small, so it must be formula_104; otherwise we cannot use the equations for solitons and we have to study standard linear dispersion. If we want to create a transmission system using optical fibres and solitons, we have to add optical amplifiers in order to limit the loss of power.
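A rough numerical illustration, with an assumed fibre loss of 0.2 dB/km (typical near 1550 nm): the width grows by a few tens of percent over the first tens of kilometres, which is why amplification is needed.

```python
import math

alpha = 0.2 * math.log(10) / 10     # 0.2 dB/km converted to ~0.046 1/km
T0 = 1.0                            # initial width, arbitrary units
for z in (5, 10, 20):               # km (keeping alpha*z small)
    print(z, "km:", round(T0 * math.exp(alpha * z / 2), 2))
# prints roughly 1.12, 1.26 and 1.58 times the initial width
```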
Generation of soliton pulse.
Experiments have been carried out to analyse the nonlinear Kerr effect induced by a high-frequency (20 MHz–1 GHz) external magnetic field in single-mode optical fibre of considerable length (50–100 m), in order to compensate group velocity dispersion (GVD) and allow the subsequent evolution of a soliton pulse (a peak-energy, narrow, secant-hyperbolic pulse). The generation of a soliton pulse in the fibre follows because the self-phase modulation due to the high pulse energy offsets the GVD, while the evolution length is 2000 km (the laser wavelength chosen is greater than 1.3 micrometres). Moreover, the soliton pulse has a duration of 1–3 ps, so that it is safely accommodated within the optical bandwidth. Once the soliton pulse is generated, it undergoes very little dispersion over thousands of kilometres of fibre, limiting the number of repeater stations needed.
Dark solitons.
In the analysis of both types of solitons we have assumed particular conditions about the medium: a self-focusing nonlinearity (formula_24) for the spatial soliton and anomalous dispersion (formula_105, i.e. formula_106) for the temporal one.
Is it possible to obtain solitons if those conditions are not satisfied? If we assume formula_107 or formula_108, we get the following differential equation (it has the same form in both cases; we will use only the notation of the temporal soliton):
formula_109
This equation has soliton-like solutions. For the first order ("N" = 1):
formula_110
The plot of formula_111 is shown in the picture on the right. For higher order solitons (formula_112) we can use the following closed form expression:
formula_113
It is a soliton, in the sense that it propagates without changing its shape, but it is not made of a normal pulse; rather, it is a "lack" of energy in a continuous-time beam. The intensity is constant, except for a short time during which it jumps to zero and back again, thus generating a "dark pulse". Those solitons can actually be generated by introducing short dark pulses in much longer standard pulses. Dark solitons are more difficult to handle than standard solitons, but they have been shown to be more stable and robust to losses.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\varphi (x)"
},
{
"math_id": 1,
"text": "\\varphi (x) = k_0 n L(x)"
},
{
"math_id": 2,
"text": "L(x)"
},
{
"math_id": 3,
"text": "k_0"
},
{
"math_id": 4,
"text": "n(x)"
},
{
"math_id": 5,
"text": "\\varphi (x) = k_0 n (x) L = k_0 L [n + n_2 I(x)]"
},
{
"math_id": 6,
"text": "I(x)"
},
{
"math_id": 7,
"text": "n_2"
},
{
"math_id": 8,
"text": "n(I) = n + n_2 I"
},
{
"math_id": 9,
"text": "I = \\frac{|E|^2}{2 \\eta}"
},
{
"math_id": 10,
"text": "\\eta = \\eta_0 / n"
},
{
"math_id": 11,
"text": "\\eta_0"
},
{
"math_id": 12,
"text": " \\eta_0 = \\sqrt{\\frac{\\mu_0}{\\varepsilon_0}} \\approx 377\\text{ }\\Omega. "
},
{
"math_id": 13,
"text": "z"
},
{
"math_id": 14,
"text": "k_0 n"
},
{
"math_id": 15,
"text": "E(x,z,t) = A_m a(x,z) e^{i(k_0 n z - \\omega t)}"
},
{
"math_id": 16,
"text": "A_m"
},
{
"math_id": 17,
"text": "a(x,z)"
},
{
"math_id": 18,
"text": "\\nabla^2 E + k_0^2 n^2 (I) E = 0"
},
{
"math_id": 19,
"text": "\\left| \\frac{\\partial^2 a(x, z)}{\\partial z^2} \\right| \\ll \\left|k_0 \\frac{\\partial a(x,z)}{\\partial z} \\right|"
},
{
"math_id": 20,
"text": "\\frac{\\partial^2 a}{\\partial x^2} + i 2 k_0 n \\frac{\\partial a}{\\partial z} + k_0^2 \\left[n^2 (I) - n^2\\right] a = 0. "
},
{
"math_id": 21,
"text": "\\left[n^2 (I) - n^2\\right] = [n (I) - n] [n (I) + n] = n_2 I (2 n + n_2 I) \\approx 2 n n_2 I"
},
{
"math_id": 22,
"text": "\\left[n^2 (I) - n^2\\right] \\approx 2 n n_2 \\frac{|A_m|^2 |a(x, z)|^2}{2\\eta_0 / n} = n^2 n_2 \\frac{|A_m|^2 |a(x, z)|^2 }{\\eta_0}"
},
{
"math_id": 23,
"text": "\\frac{1}{2 k_0 n} \\frac{\\partial^2 a}{\\partial x^2} + i \\frac{\\partial a}{\\partial z} + \\frac{k_0 n n_2 |A_m|^2}{2 \\eta_0} |a|^2 a = 0."
},
{
"math_id": 24,
"text": "n_2 > 0"
},
{
"math_id": 25,
"text": "n_2 = |n_2|"
},
{
"math_id": 26,
"text": "\\xi = \\frac{x}{X_0}"
},
{
"math_id": 27,
"text": "X_0"
},
{
"math_id": 28,
"text": "L_d = X_0^2 k_0 n"
},
{
"math_id": 29,
"text": "\\zeta = \\frac{z}{L_d}"
},
{
"math_id": 30,
"text": "L_{n\\ell} = \\frac{2 \\eta_0}{k_0 n |n_2| \\cdot |A_m|^2}"
},
{
"math_id": 31,
"text": "N^2 = \\frac{L_d}{L_{n\\ell}}"
},
{
"math_id": 32,
"text": "\\frac{1}{2} \\frac{\\partial^2 a}{\\partial \\xi^2} + i\\frac{\\partial a}{\\partial \\zeta} + N^2 |a|^2 a = 0 "
},
{
"math_id": 33,
"text": "N \\ll 1"
},
{
"math_id": 34,
"text": "L_d \\ll L_{n\\ell}"
},
{
"math_id": 35,
"text": "N \\gg 1"
},
{
"math_id": 36,
"text": "N \\approx 1"
},
{
"math_id": 37,
"text": "N = 1"
},
{
"math_id": 38,
"text": "a(\\xi, \\zeta) = \\operatorname{sech} (\\xi) e^{i \\zeta /2}"
},
{
"math_id": 39,
"text": "N = 2"
},
{
"math_id": 40,
"text": "a(\\xi,\\zeta) = \\frac{4[\\cosh (3 \\xi) + 3 e^{4 i \\zeta} \\cosh (\\xi)] e^{i \\zeta / 2}}{\\cosh (4 \\xi) + 4 \\cosh (2 \\xi) + 3 \\cos (4 \\zeta)}."
},
{
"math_id": 41,
"text": "\\zeta = \\pi / 2"
},
{
"math_id": 42,
"text": "N=3"
},
{
"math_id": 43,
"text": "N\\geq 2"
},
{
"math_id": 44,
"text": "a(\\xi,\\zeta = 0) = N \\operatorname{sech} (\\xi) "
},
{
"math_id": 45,
"text": "N=1"
},
{
"math_id": 46,
"text": "1 = N = \\frac{L_d}{L_{n\\ell}} = \\frac{X_0^2 k_0^2 n^2 |n_2| |A_m|^2}{2 \\eta_0}"
},
{
"math_id": 47,
"text": "I_{\\max} = \\frac{|A_m|^2}{2 \\eta_0 / n} = \\frac{1}{X_0^2 k_0^2 n |n_2|}.\n"
},
{
"math_id": 48,
"text": "I_\\max"
},
{
"math_id": 49,
"text": "2F"
},
{
"math_id": 50,
"text": "\\lambda"
},
{
"math_id": 51,
"text": "G"
},
{
"math_id": 52,
"text": "z = 0"
},
{
"math_id": 53,
"text": "\\alpha"
},
{
"math_id": 54,
"text": "z = 2F"
},
{
"math_id": 55,
"text": "D"
},
{
"math_id": 56,
"text": "\\operatorname{sech}"
},
{
"math_id": 57,
"text": "\\begin{align}\n E(x, z=0) &\\sim \\operatorname{sech} \\left(\\!{\\frac {\\pi xD}{2 \\lambda F}}\\sqrt{\\frac{1 - \\alpha G}{G}}\\,\\right) \\\\[3pt]\n E(x, z=2F) &\\sim \\operatorname{sech} \\left(\\!{\\frac {2\\pi x }{D}}\\sqrt{\\frac{G}{1 - \\alpha G}}\\,\\right)\n\\end{align}"
},
{
"math_id": 58,
"text": "\\Delta \\tau \\approx D L \\, \\Delta \\lambda"
},
{
"math_id": 59,
"text": "\\Delta \\lambda"
},
{
"math_id": 60,
"text": "D > 0"
},
{
"math_id": 61,
"text": "\\varphi (t) = \\omega_0 t - k z = \\omega_0 t - k_0 z [n + n_2 I(t)]"
},
{
"math_id": 62,
"text": "\\omega (t) = \\frac{\\partial \\varphi (t)}{\\partial t} = \\omega_0 - k_0 z n_2 \\frac{\\partial I(t) }{\\partial t}"
},
{
"math_id": 63,
"text": "\\beta_0"
},
{
"math_id": 64,
"text": "E(\\mathbf{r},t) = A_m a(t,z) f(x,y) e^{i(\\beta_0 z - \\omega_0 t)}"
},
{
"math_id": 65,
"text": "a(t,z)"
},
{
"math_id": 66,
"text": "f(x,y)"
},
{
"math_id": 67,
"text": "\\tilde{E} (\\mathbf{r},\\omega - \\omega_0) = \\int\\limits_{-\\infty}^\\infty E (\\mathbf{r}, t ) e^{-i (\\omega - \\omega_0)t} \\, dt"
},
{
"math_id": 68,
"text": "\\frac{\\partial}{\\partial t} E \\Longleftrightarrow i (\\omega - \\omega_0) \\tilde{E}"
},
{
"math_id": 69,
"text": "\\tilde{E} (\\mathbf{r},\\omega - \\omega_0) = A_m \\tilde{a} (\\omega , z) f(x,y) e^{i \\beta_0 z} "
},
{
"math_id": 70,
"text": "\\nabla^2 \\tilde{E} + n^2 (\\omega) k_0^2 \\tilde{E} = 0"
},
{
"math_id": 71,
"text": "\n\\begin{align}\nn(\\omega) k_0 = \\beta (\\omega) & = \\overbrace{\\beta_0}^{\\text{linear non-dispersive}} + \\overbrace{\\beta_\\ell (\\omega)}^{\\text{linear dispersive}} + \\overbrace{\\beta_{n\\ell}}^{\\text{non-linear}} \\\\[8pt]\n& = \\beta_0 + \\Delta \\beta (\\omega)\n\\end{align}\n"
},
{
"math_id": 72,
"text": "\\Delta \\beta"
},
{
"math_id": 73,
"text": "|\\beta_0| \\gg |\\Delta \\beta (\\omega)|"
},
{
"math_id": 74,
"text": "\\omega_0"
},
{
"math_id": 75,
"text": "\\beta (\\omega) \\approx \\beta_0 + (\\omega - \\omega_0) \\beta_1 + \\frac{(\\omega - \\omega_0)^2}{2} \\beta_2 + \\beta_{n\\ell}"
},
{
"math_id": 76,
"text": "\\beta_u = \\left. \\frac{d^u \\beta (\\omega)}{d \\omega^u} \\right|_{\\omega = \\omega_0}"
},
{
"math_id": 77,
"text": "\\left| \\frac{\\partial^2 \\tilde{a}}{\\partial z^2} \\right| \\ll \\left| \\beta_0 \\frac{\\partial \\tilde{a}}{\\partial z} \\right|"
},
{
"math_id": 78,
"text": "2 i \\beta_0 \\frac{\\partial \\tilde{a}}{\\partial z} + [\\beta^2 (\\omega) - \\beta_0^2] \\tilde{a} = 0"
},
{
"math_id": 79,
"text": "\n\\begin{align}\n\\beta^2 (\\omega) - \\beta_0^2 & = [ \\beta (\\omega) - \\beta_0 ] [ \\beta (\\omega) + \\beta_0 ] \\\\[6pt]\n& = [ \\beta_0 + \\Delta \\beta (\\omega) - \\beta_0 ] [2 \\beta_0 + \\Delta \\beta (\\omega) ] \\approx 2 \\beta_0 \\,\\Delta \\beta (\\omega)\n\\end{align}\n"
},
{
"math_id": 80,
"text": "i \\frac{\\partial \\tilde{a}}{\\partial z} + \\Delta \\beta (\\omega) \\tilde{a} = 0"
},
{
"math_id": 81,
"text": "\\Delta \\beta (\\omega) \\Longleftrightarrow i \\beta_1 \\frac{\\partial}{\\partial t} - \\frac{\\beta_2}{2} \\frac{\\partial^2}{\\partial t^2} + \\beta_{n\\ell}"
},
{
"math_id": 82,
"text": "\\beta_{n\\ell} = k_0 n_2 I = k_0 n_2 \\frac{|E|^2}{2 \\eta_0 / n} = k_0 n_2 n \\frac{|A_m|^2}{2 \\eta_0} |a|^2"
},
{
"math_id": 83,
"text": "L_{n\\ell} = \\frac{2 \\eta_0}{k_0 n n_2 |A_m|^2}"
},
{
"math_id": 84,
"text": "i \\frac{\\partial a}{\\partial z} + i \\beta_1 \\frac{\\partial a}{\\partial t} - \\frac{\\beta_2}{2} \\frac{\\partial^2 a}{\\partial t^2} + \\frac{1}{L_{n\\ell}} |a|^2 a = 0"
},
{
"math_id": 85,
"text": "v_g = 1/\\beta_1"
},
{
"math_id": 86,
"text": "T = t-\\beta_1 z"
},
{
"math_id": 87,
"text": "i \\frac{\\partial a}{\\partial z} - \\frac{\\beta_2}{2} \\frac{\\partial^2 a}{\\partial T^2} + \\frac{1}{L_{n\\ell}} |a|^2 a = 0"
},
{
"math_id": 88,
"text": "\\beta_2 < 0 "
},
{
"math_id": 89,
"text": "D=\\frac{- 2 \\pi c}{\\lambda^2} \\beta_2 > 0 "
},
{
"math_id": 90,
"text": "\\beta_2 = - |\\beta_2|"
},
{
"math_id": 91,
"text": "\nL_d = \\frac{T_0^2}{|\\beta_2|}; \\qquad\n\\tau=\\frac{T}{T_0}; \\qquad\n\\zeta = \\frac{z}{L_d} ; \\qquad\nN^2 = \\frac{L_d}{L_{n\\ell}}"
},
{
"math_id": 92,
"text": "\\frac{1}{2} \\frac{\\partial^2 a}{\\partial \\tau^2} + i\\frac{\\partial a}{\\partial \\zeta} + N^2 |a|^2 a = 0 "
},
{
"math_id": 93,
"text": "a(\\tau,\\zeta) = \\operatorname{sech} (\\tau) e^{i \\zeta /2}"
},
{
"math_id": 94,
"text": "|A_m|^2 = \\frac{2 \\eta_0 |\\beta_2|}{T_0^2 n_2 k_0 n}"
},
{
"math_id": 95,
"text": "I_{\\max} = \\frac{|A_m|^2}{2 \\eta_0 / n} = \\frac{|\\beta_2|}{T_0^2 n_2 k_0}"
},
{
"math_id": 96,
"text": "A_\\text{eff}"
},
{
"math_id": 97,
"text": "P = I A_\\text{eff}"
},
{
"math_id": 98,
"text": "P = \\frac{|\\beta_2| A_\\text{eff}}{T_0^2 n_2 k_0}"
},
{
"math_id": 99,
"text": "0.5 < N < 1.5"
},
{
"math_id": 100,
"text": "P(z) = P_0 e^{- \\alpha z}"
},
{
"math_id": 101,
"text": "P_0"
},
{
"math_id": 102,
"text": "T_0"
},
{
"math_id": 103,
"text": "T(z) = T_0 e^{(\\alpha/2)z}"
},
{
"math_id": 104,
"text": "\\alpha z \\ll 1"
},
{
"math_id": 105,
"text": "\\beta_2 < 0"
},
{
"math_id": 106,
"text": "D > 0 "
},
{
"math_id": 107,
"text": "n_2 < 0"
},
{
"math_id": 108,
"text": "\\beta_2 > 0"
},
{
"math_id": 109,
"text": "\\frac{-1}{2} \\frac{\\partial^2 a}{\\partial \\tau^2} + i\\frac{\\partial a}{\\partial \\zeta} + N^2 |a|^2 a = 0. "
},
{
"math_id": 110,
"text": "a(\\tau,\\zeta) = \\tanh (\\tau) e^{i \\zeta}.\\ "
},
{
"math_id": 111,
"text": "|a(\\tau, \\zeta)|^2"
},
{
"math_id": 112,
"text": " N > 1 "
},
{
"math_id": 113,
"text": "a(\\tau,\\zeta = 0) = N \\tanh (\\tau).\\ "
}
] | https://en.wikipedia.org/wiki?curid=9485301 |
948580 | Magnetic hysteresis | Application of an external magnetic field to a ferromagnet
Magnetic hysteresis occurs when an external magnetic field is applied to a ferromagnet such as iron and the atomic dipoles align themselves with it. Even when the field is removed, part of the alignment will be retained: the material has become "magnetized". Once magnetized, the magnet will stay magnetized indefinitely. To demagnetize it requires heat or a magnetic field in the opposite direction. This is the effect that provides the element of memory in a hard disk drive.
The relationship between field strength H and magnetization M is not linear in such materials. If a magnet is demagnetized ("H" = "M" = 0) and the relationship between H and M is plotted for increasing levels of field strength, M follows the "initial magnetization curve". This curve increases rapidly at first and then approaches an asymptote called magnetic saturation. If the magnetic field is now reduced monotonically, M follows a different curve. At zero field strength, the magnetization is offset from the origin by an amount called the remanence. If the "H"-"M" relationship is plotted for all strengths of applied magnetic field the result is a hysteresis loop called the "main loop". The width of the middle section along the H axis is twice the coercivity of the material.
A closer look at a magnetization curve generally reveals a series of small, random jumps in magnetization called Barkhausen jumps. This effect is due to crystallographic defects such as dislocations.
Magnetic hysteresis loops are not exclusive to materials with ferromagnetic ordering. Other magnetic orderings, such as spin glass ordering, also exhibit this phenomenon.
Physical origin.
The phenomenon of hysteresis in ferromagnetic materials is the result of two effects: rotation of magnetization and changes in size or number of magnetic domains. In general, the magnetization varies (in direction but not magnitude) across a magnet, but in sufficiently small magnets, it doesn't. In these single-domain magnets, the magnetization responds to a magnetic field by rotating. Single-domain magnets are used wherever a strong, stable magnetization is needed (for example, magnetic recording).
Larger magnets are divided into regions called "domains". Within each domain, the magnetization does not vary; but between domains are relatively thin "domain walls" in which the direction of magnetization rotates from the direction of one domain to another. If the magnetic field changes, the walls move, changing the relative sizes of the domains. Because the domains are not magnetized in the same direction, the magnetic moment per unit volume is smaller than it would be in a single-domain magnet; but domain walls involve rotation of only a small part of the magnetization, so it is much easier to change the magnetic moment. The magnetization can also change by addition or subtraction of domains (called "nucleation" and "denucleation").
Measurement.
Magnetic hysteresis can be characterized in various ways. In general, the magnetic material is placed in a varying applied "H" field, as induced by an electromagnet, and the resulting magnetic flux density ("B" field) is measured, generally by the inductive electromotive force introduced on a pickup coil nearby the sample. This produces the characteristic "B"-"H" curve; because the hysteresis indicates a memory effect of the magnetic material, the shape of the "B"-"H" curve depends on the history of changes in "H".
Alternatively, the hysteresis can be plotted as magnetization "M" in place of "B", giving an "M"-"H" curve. These two curves are directly related since formula_0.
The measurement may be "closed-circuit" or "open-circuit", according to how the magnetic material is placed in a magnetic circuit.
With hard magnetic materials (such as sintered neodymium magnets), the detailed microscopic process of magnetization reversal depends on whether the magnet is in an open-circuit or closed-circuit configuration, since the magnetic medium around the magnet influences the interactions between domains in a way that cannot be fully captured by a simple demagnetization factor.
Models.
The best-known empirical models of hysteresis are the Preisach and Jiles-Atherton models. These models allow an accurate modeling of the hysteresis loop and are widely used in industry.
However, these models lose the connection with thermodynamics and energy consistency is not ensured. A more recent model, with a more consistent thermodynamic foundation, is the vectorial incremental nonconservative consistent hysteresis (VINCH) model of Lavet et al. (2011). It is inspired by the kinematic hardening laws and by the thermodynamics of irreversible processes. In particular, in addition to providing an accurate modeling, the stored magnetic energy and the dissipated energy are known at all times. The obtained incremental formulation is variationally consistent, i.e., all internal variables follow from the minimization of a thermodynamic potential. This allows easily obtaining a vectorial model, while the Preisach and Jiles-Atherton models are fundamentally scalar.
The Stoner–Wohlfarth model is a physical model explaining hysteresis in terms of anisotropic response ("easy" / "hard" axes of each crystalline grain).
Micromagnetics simulations attempt to capture and explain in detail the space and time aspects of interacting magnetic domains, often based on the Landau-Lifshitz-Gilbert equation.
Toy models such as the Ising model can help explain qualitative and thermodynamic aspects of hysteresis (such as the Curie point phase transition to paramagnetic behaviour), though they are not used to describe real magnets.
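As a minimal illustration of how such models produce a loop, the sketch below follows the local energy minimum of a single Stoner–Wohlfarth-type particle while the applied field is swept down and back up; the reduced energy expression, the 45° easy-axis angle and the gradient-descent settings are assumptions made only for this toy example.

```python
import numpy as np

def sweep(h_values, psi=np.pi / 4, theta=0.0, steps=2000, lr=1e-3):
    """Reduced energy e(theta) = sin^2(theta - psi) - 2*h*cos(theta):
    follow its local minimum as the reduced field h is swept."""
    m = []
    for h in h_values:
        for _ in range(steps):                   # crude gradient descent
            theta -= lr * (np.sin(2 * (theta - psi)) + 2 * h * np.sin(theta))
        m.append(np.cos(theta))                  # magnetization component along the field
    return np.array(m)

h_down = np.linspace(2, -2, 81)
h_up = np.linspace(-2, 2, 81)
m_down = sweep(h_down, theta=0.0)                # start aligned with the field
m_up = sweep(h_up, theta=np.pi)                  # start anti-aligned
# m_down(h) and m_up(h) differ over an intermediate range of h: a hysteresis loop.
```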
Applications.
There are a great variety in applications of the theory of hysteresis in magnetic materials. Many of these make use of their ability to retain a memory, for example magnetic tape, hard disks, and credit cards. In these applications, "hard" magnets (high coercivity) like iron are desirable so the memory is not easily erased.
"Soft" magnets (low coercivity) are used as cores in transformers and electromagnets. The response of the magnetic moment to a magnetic field boosts the response of the coil wrapped around it. Low coercivity reduces that energy loss associated with hysteresis.
Magnetic hysteresis material (soft nickel-iron rods) has been used in damping the angular motion of satellites in low Earth orbit since the dawn of the space age.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "B = \\mu_0(H + M)"
}
] | https://en.wikipedia.org/wiki?curid=948580 |
9487 | Epsilon | Fifth letter of the Greek alphabet
Epsilon (, ; uppercase ', lowercase ' or ; ) is the fifth letter of the Greek alphabet, corresponding phonetically to a mid front unrounded vowel or . In the system of Greek numerals it also has the value five. It was derived from the Phoenician letter He . Letters that arose from epsilon include the Roman E, Ë and Ɛ, and Cyrillic Е, È, Ё, Є and Э. The name of the letter was originally (), but it was later changed to ( 'simple e') in the Middle Ages to distinguish the letter from the digraph , a former diphthong that had come to be pronounced the same as epsilon.
The uppercase form of epsilon is identical to Latin ⟨E⟩ but has its own code point in Unicode: . The lowercase version has two typographical variants, both inherited from medieval Greek handwriting. One, the most common in modern typography and inherited from medieval minuscule, looks like a reversed number "3" and is encoded . The other, also known as lunate or uncial epsilon and inherited from earlier uncial writing, looks like a semicircle crossed by a horizontal bar: it is encoded . While in normal typography these are just alternative font variants, they may have different meanings as mathematical symbols: computer systems therefore offer distinct encodings for them. In TeX, codice_0 ( formula_0 ) denotes the lunate form, while codice_1 ( formula_1 ) denotes the reversed-3 form. Unicode versions 2.0.0 and onwards use ɛ as the lowercase Greek epsilon letter, but in version 1.0.0, ϵ was used. The lunate or uncial epsilon provided inspiration for the euro sign, €.
There is also a 'Latin epsilon', ⟨ɛ⟩ or "open e", which looks similar to the Greek lowercase epsilon. It is encoded in Unicode as U+025B (lowercase ɛ) and U+0190 (uppercase Ɛ) and is used as an IPA phonetic symbol. The Latin uppercase epsilon, Ɛ, is not to be confused with the Greek uppercase Σ (sigma).
The lunate epsilon, ⟨ϵ⟩, is not to be confused with the set membership symbol ∈. The symbol formula_2, first used in set theory and logic by Giuseppe Peano and now used in mathematics in general for set membership ("belongs to"), evolved from the letter epsilon, since the symbol was originally used as an abbreviation for the Latin word . In addition, mathematicians often read the symbol ∈ as "element of", as in "1 is an element of the natural numbers" for formula_3, for example. As late as 1960, ɛ itself was used for set membership, while its negation "does not belong to" (now ∉) was denoted by ε' (epsilon prime). Only gradually did a fully separate, stylized symbol take the place of epsilon in this role. In a related context, Peano also introduced the use of a backwards epsilon, ϶, for the phrase "such that", although the abbreviation "s.t." is occasionally used in place of ϶ in informal cardinals.
History.
Origin.
The letter ⟨Ε⟩ was adopted from the Phoenician letter He () when Greeks first adopted alphabetic writing. In archaic Greek writing, its shape is often still identical to that of the Phoenician letter. Like other Greek letters, it could face either leftward or rightward (), depending on the current writing direction, but, just as in Phoenician, the horizontal bars always faced in the direction of writing. Archaic writing often preserves the Phoenician form with a vertical stem extending slightly below the lowest horizontal bar. In the classical era, through the influence of more cursive writing styles, the shape was simplified to the current ⟨E⟩ glyph.
Sound value.
While the original pronunciation of the Phoenician letter "He" was [h], the earliest Greek sound value of Ε was determined by the vowel occurring in the Phoenician letter name, which made it a natural choice for being reinterpreted from a consonant symbol to a vowel symbol denoting an [e] sound. Besides its classical Greek sound value, the short [e] phoneme, it could initially also be used for other [e]-like sounds. For instance, in early Attic before c. 500 BC, it was used both for the long open [ɛː] and for the long close [eː]. In the former role, it was later replaced in the classic Greek alphabet by Eta (⟨Η⟩), which was taken over from eastern Ionic alphabets, while in the latter role it was replaced by the digraph spelling ΕΙ.
Epichoric alphabets.
Some dialects used yet other ways of distinguishing between various e-like sounds.
In Corinth, the normal function of ⟨Ε⟩ to denote and was taken by a glyph resembling a pointed B (), while ⟨Ε⟩ was used only for long close . The letter Beta, in turn, took the deviant shape .
In Sicyon, a variant glyph resembling an ⟨X⟩ () was used in the same function as Corinthian .
In Thespiai (Boeotia), a special letter form consisting of a vertical stem with a single rightward-pointing horizontal bar () was used for what was probably a raised variant of in pre-vocalic environments. This tack glyph was used elsewhere also as a form of "Heta", i.e. for the sound .
Glyph variants.
After the establishment of the canonical classical Ionian (Euclidean) Greek alphabet, new glyph variants for Ε were introduced through handwriting. In the uncial script (used for literary papyrus manuscripts in late antiquity and then in early medieval vellum codices), the "lunate" shape () became predominant. In cursive handwriting, a large number of shorthand glyphs came to be used, where the cross-bar and the curved stroke were linked in various ways. Some of them resembled a modern lowercase Latin "e", some a "6" with a connecting stroke to the next letter starting from the middle, and some a combination of two small "c"-like curves. Several of these shapes were later taken over into minuscule book hand. Of the various minuscule letter shapes, the inverted-3 form became the basis for lower-case Epsilon in Greek typography during the modern era.
Uses.
International Phonetic Alphabet.
Despite its pronunciation as a mid vowel in Greek, in the International Phonetic Alphabet the Latin epsilon represents the open-mid front unrounded vowel, as in the English word "pet" [pɛt].
Symbol.
The uppercase Epsilon is not commonly used outside of the Greek language because of its similarity to the Latin letter E. However, it is commonly used in structural mechanics with Young's Modulus equations for calculating tensile, compressive and areal strain.
The Greek lowercase epsilon , the lunate epsilon symbol , and the Latin lowercase epsilon (see above) are used in a variety of places:
Unicode.
These characters are used only as mathematical symbols. Stylized Greek text should be encoded using the normal Greek letters, with markup and formatting to indicate text style.
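The distinct epsilon-like code points mentioned above can be listed directly from the Unicode character database, for example:

```python
import unicodedata

# Epsilon-like characters discussed in this article, by code point.
for ch in "\u0395\u03B5\u03F5\u025B\u0190\u2208":
    print(f"U+{ord(ch):04X}  {ch}  {unicodedata.name(ch)}")
```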
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\epsilon\\!"
},
{
"math_id": 1,
"text": "\\varepsilon\\!"
},
{
"math_id": 2,
"text": "\\in"
},
{
"math_id": 3,
"text": "1\\in\\N"
},
{
"math_id": 4,
"text": "\\epsilon x.\\phi"
},
{
"math_id": 5,
"text": "a+b \\varepsilon"
},
{
"math_id": 6,
"text": "\\varepsilon^{2}=0"
},
{
"math_id": 7,
"text": "\\varepsilon \\neq 0"
}
] | https://en.wikipedia.org/wiki?curid=9487 |
9487872 | Beatty sequence | Integers formed by rounding down the integer multiples of a positive irrational number
In mathematics, a Beatty sequence (or homogeneous Beatty sequence) is the sequence of integers found by taking the floor of the positive multiples of a positive irrational number. Beatty sequences are named after Samuel Beatty, who wrote about them in 1926.
Rayleigh's theorem, named after Lord Rayleigh, states that the complement of a Beatty sequence, consisting of the positive integers that are not in the sequence, is itself a Beatty sequence generated by a different irrational number.
Beatty sequences can also be used to generate Sturmian words.
Definition.
Any irrational number formula_0 that is greater than one generates the Beatty sequence
formula_1
The two irrational numbers formula_0 and formula_2 naturally satisfy the equation formula_3.
The two Beatty sequences formula_4 and formula_5 that they generate form a "pair of complementary Beatty sequences". Here, "complementary" means that every positive integer belongs to exactly one of these two sequences.
Examples.
When formula_0 is the golden ratio formula_6, the complementary Beatty sequence is generated by formula_7. In this case, the sequence formula_8, known as the "lower Wythoff sequence", is
<templatestyles src="Block indent/styles.css"/>
and the complementary sequence formula_9, the "upper Wythoff sequence", is
<templatestyles src="Block indent/styles.css"/>
These sequences define the optimal strategy for Wythoff's game, and are used in the definition of the Wythoff array.
As another example, for the square root of 2, formula_10, formula_11. In this case, the sequences are
<templatestyles src="Block indent/styles.css"/>
<templatestyles src="Block indent/styles.css"/>
For formula_12 and formula_13, the sequences are
<templatestyles src="Block indent/styles.css"/>
<templatestyles src="Block indent/styles.css"/>
Any number in the first sequence is absent in the second, and vice versa.
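This complementarity is easy to check numerically; the following sketch generates the two sequences for the golden ratio and verifies that every positive integer up to the largest safely covered value lies in exactly one of them (the number of terms is an arbitrary choice).

```python
import math

def beatty(r, n_terms):
    """First n_terms terms of the Beatty sequence floor(r), floor(2r), ..."""
    return [math.floor(k * r) for k in range(1, n_terms + 1)]

r = (1 + math.sqrt(5)) / 2            # golden ratio
s = r / (r - 1)                       # complementary generator: 1/r + 1/s = 1
lower, upper = set(beatty(r, 1000)), set(beatty(s, 1000))

# every positive integer up to floor(1000*r) lies in exactly one of the two sets
for m in range(1, math.floor(1000 * r) + 1):
    assert (m in lower) != (m in upper)

print(beatty(r, 10))   # [1, 3, 4, 6, 8, 9, 11, 12, 14, 16]
print(beatty(s, 10))   # [2, 5, 7, 10, 13, 15, 18, 20, 23, 26]
```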
History.
Beatty sequences got their name from the problem posed in "The American Mathematical Monthly" by Samuel Beatty in 1926. It is probably one of the most often cited problems ever posed in the "Monthly". However, even earlier, in 1894 such sequences were briefly mentioned by Lord Rayleigh in the second edition of his book "The Theory of Sound".
Rayleigh theorem.
Rayleigh's theorem (also known as Beatty's theorem) states that given an irrational number formula_14 there exists formula_15 so that the Beatty sequences formula_4 and formula_5 partition the set of positive integers: each positive integer belongs to exactly one of the two sequences.
First proof.
Given formula_14 let formula_2. We must show that every positive integer lies in one and only one of the two sequences formula_4 and formula_5. We shall do so by considering the ordinal positions occupied by all the fractions formula_16 and formula_17 when they are jointly listed in nondecreasing order for positive integers "j" and "k".
To see that no two of the numbers can occupy the same position (as a single number), suppose to the contrary that formula_18 for some "j" and "k". Then formula_19 = formula_20, a rational number, but also, formula_21 not a rational number. Therefore, no two of the numbers occupy the same position.
For any formula_16, there are formula_22 positive integers formula_23 such that formula_24 and formula_25 positive integers formula_26 such that formula_27, so that the position of formula_16 in the list is formula_28. The equation formula_3 implies
formula_29
Likewise, the position of formula_17 in the list is formula_30.
Conclusion: every positive integer (that is, every position in the list) is of the form formula_31 or of the form formula_32, but not both. The converse statement is also true: if "p" and "q" are two real numbers such that every positive integer occurs precisely once in the above list, then "p" and "q" are irrational and the sum of their reciprocals is 1.
Second proof.
Collisions: Suppose that, contrary to the theorem, there are integers "j" > 0 and "k" and "m" such that
formula_33
This is equivalent to the inequalities
formula_34
For non-zero "j", the irrationality of "r" and "s" is incompatible with equality, so
formula_35
which leads to
formula_36
Adding these together and using the hypothesis, we get
formula_37
which is impossible (one cannot have an integer between two adjacent integers). Thus the supposition must be false.
Anti-collisions: Suppose that, contrary to the theorem, there are integers "j" > 0 and "k" and "m" such that
formula_38
Since "j" + 1 is non-zero and "r" and "s" are irrational, we can exclude equality, so
formula_39
Then we get
formula_40
Adding corresponding inequalities, we get
formula_41
formula_42
which is also impossible. Thus the supposition is false.
Properties.
A number formula_43 belongs to the Beatty sequence formula_4 if and only if
formula_44
where formula_45 denotes the fractional part of formula_46 i.e., formula_47.
Proof:
formula_48
formula_49
formula_50
formula_51
formula_52
formula_53
Furthermore, formula_54.
Proof:
formula_55
formula_56
formula_57
formula_58
formula_59
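Both properties are easy to test numerically, for example with formula_0 equal to the square root of 2 (the tested range below is an arbitrary choice):

```python
import math

r = math.sqrt(2)
members = {math.floor(k * r) for k in range(1, 2001)}

for m in range(1, 2000):
    in_seq = m in members
    # membership test: m is in the Beatty sequence iff 1 - 1/r < frac(m/r)
    assert in_seq == (1 - 1 / r < (m / r) % 1.0)
    if in_seq:
        # the identity above recovers m from floor(m/r)
        assert m == math.floor((math.floor(m / r) + 1) * r)
print("both properties hold on the tested range")
```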
Relation with Sturmian sequences.
The first difference
formula_60
of the Beatty sequence associated with the irrational number formula_0 is a characteristic Sturmian word over the alphabet formula_61.
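As a small illustration, for the golden ratio the first differences take only the two values 1 and 2:

```python
import math

r = (1 + math.sqrt(5)) / 2
seq = [math.floor(n * r) for n in range(1, 20)]
diffs = [b - a for a, b in zip(seq, seq[1:])]
print(diffs)   # [2, 1, 2, 2, 1, ...] -- only floor(r) = 1 and floor(r) + 1 = 2 occur
```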
Generalizations.
If slightly modified, the Rayleigh's theorem can be generalized to positive real numbers (not necessarily irrational) and negative integers as well: if positive real numbers formula_0 and formula_62 satisfy formula_3, the sequences formula_63 and formula_64 form a partition of integers. For example, the white and black keys of a piano keyboard are distributed as such sequences for formula_65 and formula_66.
The Lambek–Moser theorem generalizes the Rayleigh theorem and shows that more general pairs of sequences defined from an integer function and its inverse have the same property of partitioning the integers.
Uspensky's theorem states that, if formula_67 are positive real numbers such that formula_68 contains all positive integers exactly once, then formula_69 That is, there is no equivalent of Rayleigh's theorem for three or more Beatty sequences.
References.
<templatestyles src="Reflist/styles.css" />
External links.
| [
{
"math_id": 0,
"text": "r"
},
{
"math_id": 1,
"text": "\\mathcal{B}_r = \\bigl\\{ \\lfloor r \\rfloor, \\lfloor 2r \\rfloor, \\lfloor 3r \\rfloor,\\ldots \\bigr\\}"
},
{
"math_id": 2,
"text": "s = r/(r-1)"
},
{
"math_id": 3,
"text": "1/r + 1/s = 1"
},
{
"math_id": 4,
"text": "\\mathcal{B}_r"
},
{
"math_id": 5,
"text": "\\mathcal{B}_s"
},
{
"math_id": 6,
"text": "r=(1+\\sqrt5)/2\\approx 1.618"
},
{
"math_id": 7,
"text": "s=r+1=(3+\\sqrt5)/2\\approx 2.618"
},
{
"math_id": 8,
"text": "( \\lfloor nr \\rfloor)"
},
{
"math_id": 9,
"text": "( \\lfloor ns \\rfloor)"
},
{
"math_id": 10,
"text": "r=\\sqrt2\\approx 1.414"
},
{
"math_id": 11,
"text": "s=2+\\sqrt2\\approx 3.414"
},
{
"math_id": 12,
"text": "r=\\pi\\approx 3.142"
},
{
"math_id": 13,
"text": "s=\\pi/(\\pi-1)\\approx 1.467"
},
{
"math_id": 14,
"text": "r > 1 \\,,"
},
{
"math_id": 15,
"text": "s > 1"
},
{
"math_id": 16,
"text": "j/r"
},
{
"math_id": 17,
"text": "k/s"
},
{
"math_id": 18,
"text": "j/r = k/s"
},
{
"math_id": 19,
"text": "r/s"
},
{
"math_id": 20,
"text": "j/k"
},
{
"math_id": 21,
"text": "r/s = r(1 - 1/r) = r - 1,"
},
{
"math_id": 22,
"text": "j"
},
{
"math_id": 23,
"text": "i"
},
{
"math_id": 24,
"text": "i/r \\le j/r"
},
{
"math_id": 25,
"text": " \\lfloor js/r \\rfloor"
},
{
"math_id": 26,
"text": "k"
},
{
"math_id": 27,
"text": "k/s \\le j/r"
},
{
"math_id": 28,
"text": "j + \\lfloor js/r \\rfloor"
},
{
"math_id": 29,
"text": "j + \\lfloor js/r \\rfloor = j + \\lfloor j(s - 1) \\rfloor = \\lfloor js \\rfloor."
},
{
"math_id": 30,
"text": "\\lfloor kr \\rfloor"
},
{
"math_id": 31,
"text": "\\lfloor nr \\rfloor"
},
{
"math_id": 32,
"text": "\\lfloor ns \\rfloor"
},
{
"math_id": 33,
"text": "j = \\left\\lfloor {k \\cdot r} \\right\\rfloor = \\left\\lfloor {m \\cdot s} \\right\\rfloor \\,."
},
{
"math_id": 34,
"text": "j \\le k \\cdot r < j + 1 \\text{ and } j \\le m \\cdot s < j + 1. "
},
{
"math_id": 35,
"text": "j < k \\cdot r < j + 1 \\text{ and } j < m \\cdot s < j + 1, "
},
{
"math_id": 36,
"text": "{j \\over r} < k < {j + 1 \\over r} \\text{ and } {j \\over s} < m < {j + 1 \\over s}. "
},
{
"math_id": 37,
"text": "j < k + m < j + 1 "
},
{
"math_id": 38,
"text": "k \\cdot r < j \\text{ and } j + 1 \\le (k + 1) \\cdot r \\text{ and } m \\cdot s < j \\text{ and } j + 1 \\le (m + 1) \\cdot s \\,."
},
{
"math_id": 39,
"text": "k \\cdot r < j \\text{ and } j + 1 < (k + 1) \\cdot r \\text{ and } m \\cdot s < j \\text{ and } j + 1 < (m + 1) \\cdot s. "
},
{
"math_id": 40,
"text": "k < {j \\over r} \\text{ and } {j + 1 \\over r} < k + 1 \\text{ and } m < {j \\over s} \\text{ and } {j + 1 \\over s} < m + 1 "
},
{
"math_id": 41,
"text": "k + m < j \\text{ and } j + 1 < k + m + 2 "
},
{
"math_id": 42,
"text": "k + m < j < k + m + 1 "
},
{
"math_id": 43,
"text": "m"
},
{
"math_id": 44,
"text": " 1 - \\frac{1}{r} < \\left[ \\frac{m}{r} \\right]_1"
},
{
"math_id": 45,
"text": "[x]_1"
},
{
"math_id": 46,
"text": "x"
},
{
"math_id": 47,
"text": "[x]_1 = x - \\lfloor x \\rfloor"
},
{
"math_id": 48,
"text": " m \\in B_r "
},
{
"math_id": 49,
"text": "\\Leftrightarrow \\exists n, m = \\lfloor nr \\rfloor"
},
{
"math_id": 50,
"text": "\\Leftrightarrow m < nr < m + 1"
},
{
"math_id": 51,
"text": "\\Leftrightarrow \\frac{m}{r} < n < \\frac{m}{r} + \\frac{1}{r}"
},
{
"math_id": 52,
"text": "\\Leftrightarrow n - \\frac{1}{r} < \\frac{m}{r} < n"
},
{
"math_id": 53,
"text": "\\Leftrightarrow 1 - \\frac{1}{r} < \\left[ \\frac{m}{r} \\right]_1"
},
{
"math_id": 54,
"text": "m = \\left\\lfloor \\left( \\left\\lfloor \\frac{m}{r} \\right\\rfloor + 1 \\right) r \\right\\rfloor"
},
{
"math_id": 55,
"text": "m = \\left\\lfloor \\left( \\left\\lfloor \\frac{m}{r} \\right\\rfloor + 1 \\right) r \\right\\rfloor "
},
{
"math_id": 56,
"text": "\\Leftrightarrow m < \\left( \\left\\lfloor \\frac{m}{r} \\right\\rfloor + 1 \\right) r < m + 1"
},
{
"math_id": 57,
"text": "\\Leftrightarrow \\frac{m}{r} < \\left\\lfloor \\frac{m}{r} \\right\\rfloor + 1 < \\frac{m + 1}{r}"
},
{
"math_id": 58,
"text": "\\Leftrightarrow \\left\\lfloor \\frac{m}{r} \\right\\rfloor + 1 - \\frac{1}{r} < \\frac{m}{r} < \\left\\lfloor \\frac{m}{r} \\right\\rfloor + 1"
},
{
"math_id": 59,
"text": "\\Leftrightarrow 1 - \\frac{1}{r} < \\frac{m}{r} - \\left\\lfloor \\frac{m}{r} \\right\\rfloor =\\left[ \\frac{m}{r} \\right]_1 "
},
{
"math_id": 60,
"text": "\\lfloor (n+1)r\\rfloor-\\lfloor nr\\rfloor"
},
{
"math_id": 61,
"text": "\\{\\lfloor r\\rfloor,\\lfloor r\\rfloor+1\\}"
},
{
"math_id": 62,
"text": "s"
},
{
"math_id": 63,
"text": "( \\lfloor mr \\rfloor)_{m \\in \\mathbb{Z}}"
},
{
"math_id": 64,
"text": "( \\lceil ns \\rceil -1)_{n \\in \\mathbb{Z}}"
},
{
"math_id": 65,
"text": "r = 12/7"
},
{
"math_id": 66,
"text": "s = 12/5"
},
{
"math_id": 67,
"text": "\\alpha_1,\\ldots,\\alpha_n"
},
{
"math_id": 68,
"text": "(\\lfloor k\\alpha_i\\rfloor)_{k,i\\ge1}"
},
{
"math_id": 69,
"text": "n\\le2."
}
] | https://en.wikipedia.org/wiki?curid=9487872 |
9488196 | Timeline of chemistry | This timeline of chemistry lists important works, discoveries, ideas, inventions, and experiments that significantly changed humanity's understanding of the modern science known as chemistry, defined as the scientific study of the composition of matter and of its interactions.
Known as "the central science", the study of chemistry is strongly influenced by, and exerts a strong influence on, many other scientific and technological fields. Many historical developments that are considered to have had a significant impact upon our modern understanding of chemistry are also considered to have been key discoveries in such fields as physics, biology, astronomy, geology, and materials science.
Pre-17th century.
Prior to the acceptance of the scientific method and its application to the field of chemistry, it is somewhat controversial to consider many of the people listed below as "chemists" in the modern sense of the word. However, the ideas of certain great thinkers, either for their prescience, or for their wide and long-term acceptance, bear listing here.
20th century.
formula_0 (the Schrödinger equation)
Robert Burns Woodward and William von Eggers Doering successfully synthesized quinine. This achievement, characterized by the use of fully artificial chemicals as the starting materials for the synthesis, opened an era called the "Woodwardian era" or "chemical era", during which many drugs and chemicals, as well as organic synthesis methods, were invented. The resulting growth of the chemical industry drove the growth of many other fields, such as the drug industry.
Jacob A. Marinsky, Lawrence E. Glendenin, and Charles D. Coryell perform the first confirmed synthesis of promethium, filling in the last "gap" in the periodic table.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H(t) | \\psi (t) \\rangle = i \\hbar \\frac{d}{d t} | \\psi (t) \\rangle"
}
] | https://en.wikipedia.org/wiki?curid=9488196 |
948888 | Sortino ratio | Measurement in finance
The Sortino ratio measures the risk-adjusted return of an investment asset, portfolio, or strategy. It is a modification of the Sharpe ratio but penalizes only those returns falling below a user-specified target or required rate of return, while the Sharpe ratio penalizes both upside and downside volatility equally. Though both ratios measure an investment's risk-adjusted return, they do so in significantly different ways that will frequently lead to differing conclusions as to the true nature of the investment's return-generating efficiency.
The Sortino ratio is used as a way to compare the risk-adjusted performance of programs with differing risk and return profiles. In general, risk-adjusted returns seek to normalize the risk across programs and then see which has the higher return unit per risk.
Definition.
The ratio formula_0 is calculated as
formula_1 ,
where formula_2 is the asset or portfolio average realized return, formula_3 is the target or required rate of return for the investment strategy under consideration (originally called the minimum acceptable return "MAR"), and formula_4 is the target semi-deviation (the square root of target semi-variance), termed downside deviation. formula_4 is expressed in percentages and therefore allows for rankings in the same way as standard deviation.
An intuitive way to view downside risk is the annualized standard deviation of returns below the target. Another is the square root of the probability-weighted squared below-target returns. The squaring of the below-target returns has the effect of penalizing failures at a quadratic rate. This is consistent with observations made on the behavior of individual decision making under uncertainty.
formula_5
Here
formula_4 = downside deviation or (commonly known in the financial community) "downside risk" (by extension, formula_6 = downside variance),
formula_3 = the annual target return, originally termed the minimum acceptable return "MAR",
formula_7 = the random variable representing the return for the distribution of annual returns formula_8, and
formula_8 = the distribution for the annual returns, e.g., the log-normal distribution.
For the reasons provided below, this "continuous" formula is preferred over a simpler "discrete" version that determines the standard deviation of below-target periodic returns taken from the return series.
"Before we make an investment, we don't know what the outcome will be... After the investment is made, and we want to measure its performance, all we know is what the outcome was, not what it could have been. To cope with this uncertainty, we assume that a reasonable estimate of the range of possible returns, as well as the probabilities associated with estimation of those returns...In statistical terms, the shape of [this] uncertainty is called a probability distribution. In other words, looking at just the discrete monthly or annual values does not tell the whole story."
Using the observed points to create a distribution is a staple of conventional performance measurement. For example, monthly returns are used to calculate a fund's mean and standard deviation. Using these values and the properties of the normal distribution, we can make statements such as the likelihood of losing money (even though no negative returns may actually have been observed) or the range within which two-thirds of all returns lies (even though the specific returns identifying this range have not necessarily occurred). Our ability to make these statements comes from the process of assuming the continuous form of the normal distribution and certain of its well-known properties.
In post-modern portfolio theory an analogous process is followed.
As a caveat, some practitioners have fallen into the habit of using discrete periodic returns to compute downside risk. This method is conceptually and operationally incorrect and negates the foundational statistic of post-modern portfolio theory as developed by Brian M. Rom and Frank A. Sortino.
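A minimal sketch of the continuous calculation described above, assuming (for simplicity) a normal distribution fitted to the observed periodic returns rather than, say, the three-parameter lognormal sometimes used in practice; the sample returns and target below are made up purely for illustration.
```python
import numpy as np
from scipy import stats, integrate

def sortino_ratio(returns, target=0.0):
    """Sortino ratio using the 'continuous' target semi-deviation.

    A distribution (here a normal fit -- an assumption) is fitted to the
    observed returns, and the downside deviation DR is obtained by
    integrating (target - r)^2 * f(r) over r < target.
    """
    r = np.asarray(returns, dtype=float)
    mu, sigma = r.mean(), r.std(ddof=1)          # fitted distribution parameters
    f = stats.norm(mu, sigma).pdf
    semivariance, _ = integrate.quad(lambda x: (target - x) ** 2 * f(x),
                                     -np.inf, target)
    dr = np.sqrt(semivariance)                   # downside deviation
    return (mu - target) / dr

# Example: monthly returns of a hypothetical strategy, 1% monthly target
monthly = [0.02, -0.01, 0.03, 0.015, -0.02, 0.01, 0.04, -0.005]
print(sortino_ratio(monthly, target=0.01))
```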
Usage.
The Sortino ratio is used to score a portfolio's risk-adjusted returns relative to an investment target using downside risk. This is analogous to the Sharpe ratio, which scores risk-adjusted returns relative to the risk-free rate using standard deviation. When return distributions are near symmetrical and the target return is close to the distribution median, these two measures will produce similar results. As skewness increases and targets vary from the median, results can be expected to show dramatic differences.
The Sortino ratio can also be used in trading. For example, to obtain a performance metric for a trading strategy on a given asset, one can compute its Sortino ratio and use it to compare the strategy's performance with that of other strategies.
Practitioners who use a lower partial Standard Deviation (LPSD) instead of a standard deviation also tend to use the Sortino ratio instead of the Sharpe ratio.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "S = \\frac{R-T}{DR}"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "T"
},
{
"math_id": 4,
"text": "DR"
},
{
"math_id": 5,
"text": "DR = \\sqrt{ \\int_{-\\infty}^T (T-r)^2f(r)\\,dr } "
},
{
"math_id": 6,
"text": "DR^2"
},
{
"math_id": 7,
"text": "r"
},
{
"math_id": 8,
"text": "f(r)"
}
] | https://en.wikipedia.org/wiki?curid=948888 |
9489914 | Nagel point | Triangle center; intersection of all three of a triangle's splitters
In geometry, the Nagel point (named for Christian Heinrich von Nagel) is a triangle center, one of the points associated with a given triangle whose definition does not depend on the placement or scale of the triangle. It is the point of concurrency of all three of the triangle's splitters.
Construction.
Given a triangle △"ABC", let TA, TB, TC be the extouch points in which the A-excircle meets line BC, the B-excircle meets line CA, and the C-excircle meets line AB, respectively. The lines ATA, BTB, CTC concur in the Nagel point N of triangle △"ABC".
Another construction of the point TA is to start at A and trace around triangle △"ABC" half its perimeter, and similarly for TB and TC. Because of this construction, the Nagel point is sometimes also called the bisected perimeter point, and the segments are called the triangle's splitters.
There exists an easy construction of the Nagel point. Starting from each vertex of a triangle, it suffices to carry twice the length of the opposite edge. We obtain three lines which concur at the Nagel point.
Relation to other triangle centers.
The Nagel point is the isotomic conjugate of the Gergonne point. The Nagel point, the centroid, and the incenter are collinear on a line called the "Nagel line". The incenter is the Nagel point of the medial triangle; equivalently, the Nagel point is the incenter of the anticomplementary triangle. The isogonal conjugate of the Nagel point is the point of concurrency of the lines joining the mixtilinear touchpoint and the opposite vertex.
Barycentric coordinates.
The un-normalized barycentric coordinates of the Nagel point are formula_0 where formula_1 is the semi-perimeter of the reference triangle △"ABC".
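As an illustration (the triangle's vertex coordinates are an arbitrary choice), the following sketch converts these barycentric coordinates to Cartesian coordinates and checks numerically that the Nagel point, centroid, and incenter are collinear, as stated above for the Nagel line.
```python
import numpy as np

# Example triangle; the vertex coordinates are an arbitrary choice.
A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

a = np.linalg.norm(B - C)      # side length opposite A
b = np.linalg.norm(C - A)      # side length opposite B
c = np.linalg.norm(A - B)      # side length opposite C
s = (a + b + c) / 2            # semi-perimeter

def from_barycentric(weights):
    """Convert un-normalized barycentric coordinates to Cartesian coordinates."""
    w = np.asarray(weights, dtype=float)
    return (w[0] * A + w[1] * B + w[2] * C) / w.sum()

nagel    = from_barycentric([s - a, s - b, s - c])   # Nagel point
centroid = from_barycentric([1, 1, 1])
incenter = from_barycentric([a, b, c])

# The three centers lie on the Nagel line: the 2x2 determinant is ~0.
u, v = nagel - incenter, centroid - incenter
print(nagel, abs(u[0] * v[1] - u[1] * v[0]) < 1e-9)
```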
Trilinear coordinates.
The trilinear coordinates of the Nagel point are given as
formula_2
or, equivalently, in terms of the side lengths formula_3
formula_4
History.
The Nagel point is named after Christian Heinrich von Nagel, a nineteenth-century German mathematician, who wrote about it in 1836.
Early contributions to the study of this point were also made by August Leopold Crelle and Carl Gustav Jacob Jacobi.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " (s-a:s-b:s-c) "
},
{
"math_id": 1,
"text": "s = \\tfrac{a+b+c}{2}"
},
{
"math_id": 2,
"text": "\\csc^2\\left(\\frac{A}{2}\\right)\\,:\\,\\csc^2\\left(\\frac{B}{2}\\right)\\,:\\,\\csc^2\\left(\\frac{C}{2}\\right)"
},
{
"math_id": 3,
"text": "a=\\left|\\overline{BC}\\right|, b=\\left|\\overline{CA}\\right|, c=\\left|\\overline{AB}\\right|,"
},
{
"math_id": 4,
"text": "\\frac{b + c - a}{a}\\,:\\,\\frac{c + a - b}{b}\\,:\\,\\frac{a + b - c}{c}."
}
] | https://en.wikipedia.org/wiki?curid=9489914 |
949189 | Design matrix | Matrix of values of explanatory variables
In statistics and in particular in regression analysis, a design matrix, also known as model matrix or regressor matrix and often denoted by X, is a matrix of values of explanatory variables of a set of objects. Each row represents an individual object, with the successive columns corresponding to the variables and their specific values for that object. The design matrix is used in certain statistical models, e.g., the general linear model. It can contain indicator variables (ones and zeros) that indicate group membership in an ANOVA, or it can contain values of continuous variables.
The design matrix contains data on the independent variables (also called explanatory variables) in a statistical model that is intended to explain observed data on a response variable (often called a dependent variable). The theory relating to such models uses the design matrix as input to some linear algebra: see for example linear regression. A notable feature of the concept of a design matrix is that it is able to represent a number of different experimental designs and statistical models, e.g., ANOVA, ANCOVA, and linear regression.
Definition.
The design matrix is defined to be a matrix formula_0 such that formula_1 (the "j"th column of the "i"th row of formula_0) represents the value of the "j"th variable associated with the "i"th object.
A regression model may be represented via matrix multiplication as
formula_2
where "X" is the design matrix, formula_3 is a vector of the model's coefficients (one for each variable), formula_4 is a vector of random errors with mean zero, and "y" is the vector of predicted outputs for each object.
Size.
The design matrix has dimension "n"-by-"p", where "n" is the number of samples observed, and "p" is the number of variables (features) measured in all samples.
In this representation different rows typically represent different repetitions of an experiment, while columns represent different types of data (say, the results from particular probes). For example, suppose an experiment is run where 10 people are pulled off the street and asked 4 questions. The data matrix "M" would be a 10×4 matrix (meaning 10 rows and 4 columns). The datum in row "i" and column "j" of this matrix would be the answer of the "i" th person to the "j" th question.
Examples.
Arithmetic mean.
The design matrix for an arithmetic mean is a column vector of ones.
Simple linear regression.
This section gives an example of simple linear regression—that is, regression with only a single explanatory variable—with seven observations.
The seven data points are {"y""i", "x""i"}, for "i" = 1, 2, …, 7. The simple linear regression model is
formula_5
where formula_6 is the "y"-intercept and formula_7 is the slope of the regression line. This model can be represented in matrix form as
formula_8
where the first column of 1s in the design matrix allows estimation of the "y"-intercept while the second column contains the "x"-values associated with the corresponding "y"-values. The matrix whose columns are 1's and "x"'s in this example is the design matrix.
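A brief numerical sketch of this setup (the data values are invented purely for illustration): the design matrix is formed by stacking a column of ones with the column of x-values, and the coefficient vector is then estimated by ordinary least squares.
```python
import numpy as np

# Seven illustrative observations (values invented for the example)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9])

# Design matrix: first column of ones (intercept), second column of x-values
X = np.column_stack([np.ones_like(x), x])

# Least-squares estimate of beta = (beta_0, beta_1)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(X.shape)   # (7, 2): n = 7 observations, p = 2 columns
print(beta)      # intercept close to 0, slope close to 2 for this data
```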
Multiple regression.
This section contains an example of multiple regression with two covariates (explanatory variables): "w" and "x".
Again suppose that the data consist of seven observations, and that for each observed value to be predicted (formula_9), values "w""i" and "x""i" of the two covariates are also observed. The model to be considered is
formula_10
This model can be written in matrix terms as
formula_11
Here the 7×3 matrix on the right side is the design matrix.
One-way ANOVA (cell means model).
This section contains an example with a one-way analysis of variance (ANOVA) with three groups and seven observations. The given data set has the first three observations belonging to the first group, the following two observations belonging to the second group and the final two observations belonging to the third group.
If the model to be fit is just the mean of each group, then the model is
formula_12
which can be written
formula_13
In this model formula_14 represents the mean of the formula_15th group.
One-way ANOVA (offset from reference group).
The ANOVA model could be equivalently written as each group parameter formula_16 being an offset from some overall reference. Typically this reference point is taken to be one of the groups under consideration. This makes sense in the context of comparing multiple treatment groups to a control group and the control group is considered the "reference". In this example, group 1 was chosen to be the reference group. As such the model to be fit is
formula_17
with the constraint that formula_18 is zero.
formula_19
In this model formula_20 is the mean of the reference group and formula_16 is the difference from group formula_15 to the reference group. formula_18 is not included in the matrix because its difference from the reference group (itself) is necessarily zero.
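The two ANOVA parameterizations above differ only in their design matrices. The short sketch below constructs both matrices for the 3/2/2 grouping used in the example.
```python
import numpy as np

# Group membership of the seven observations: 3 in group 1, 2 in group 2, 2 in group 3
groups = np.array([1, 1, 1, 2, 2, 3, 3])

# Cell-means coding: one indicator column per group (columns for mu_1, mu_2, mu_3)
X_cell_means = np.column_stack([(groups == g).astype(float) for g in (1, 2, 3)])

# Reference-group coding: intercept column plus indicators for groups 2 and 3
# (columns for mu, tau_2, tau_3; group 1 is the reference, so tau_1 is omitted)
X_reference = np.column_stack([
    np.ones(len(groups)),
    (groups == 2).astype(float),
    (groups == 3).astype(float),
])

print(X_cell_means)
print(X_reference)
```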
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "X_{ij}"
},
{
"math_id": 2,
"text": "y=X\\beta+e,"
},
{
"math_id": 3,
"text": "\\beta"
},
{
"math_id": 4,
"text": "e"
},
{
"math_id": 5,
"text": " y_i = \\beta_0 + \\beta_1 x_i +\\varepsilon_i, \\,"
},
{
"math_id": 6,
"text": " \\beta_0 "
},
{
"math_id": 7,
"text": "\\beta_1"
},
{
"math_id": 8,
"text": "\n\\begin{bmatrix}y_1 \\\\ y_2 \\\\ y_3 \\\\ y_4 \\\\ y_5 \\\\ y_6 \\\\ y_7 \\end{bmatrix}\n= \n\\begin{bmatrix}1 & x_1 \\\\1 & x_2 \\\\1 & x_3 \\\\1 & x_4 \\\\1 & x_5 \\\\1 & x_6 \\\\ 1 & x_7 \\end{bmatrix}\n\\begin{bmatrix} \\beta_0 \\\\ \\beta_1 \\end{bmatrix}\n+\n\\begin{bmatrix} \\varepsilon_1 \\\\ \\varepsilon_2 \\\\ \\varepsilon_3 \\\\\n\\varepsilon_4 \\\\ \\varepsilon_5 \\\\ \\varepsilon_6 \\\\ \\varepsilon_7 \\end{bmatrix}\n"
},
{
"math_id": 9,
"text": "y_i"
},
{
"math_id": 10,
"text": " y_i = \\beta_0 + \\beta_1 w_i + \\beta_2 x_i + \\varepsilon_i "
},
{
"math_id": 11,
"text": "\n\\begin{bmatrix}y_1 \\\\ y_2 \\\\ y_3 \\\\ y_4 \\\\ y_5 \\\\ y_6 \\\\ y_7 \\end{bmatrix} = \n\\begin{bmatrix} 1 & w_1 & x_1 \\\\1 & w_2 & x_2 \\\\1 & w_3 & x_3 \\\\1 & w_4 & x_4 \\\\1 & w_5 & x_5 \\\\1 & w_6 & x_6 \\\\ 1& w_7 & x_7 \\end{bmatrix}\n\\begin{bmatrix} \\beta_0 \\\\ \\beta_1 \\\\ \\beta_2 \\end{bmatrix}\n+\n\\begin{bmatrix} \\varepsilon_1 \\\\ \\varepsilon_2 \\\\ \\varepsilon_3 \\\\\n\\varepsilon_4 \\\\ \\varepsilon_5 \\\\ \\varepsilon_6 \\\\ \\varepsilon_7 \\end{bmatrix}\n"
},
{
"math_id": 12,
"text": " y_{ij} = \\mu_i + \\varepsilon_{ij}"
},
{
"math_id": 13,
"text": "\n\\begin{bmatrix}y_1 \\\\ y_2 \\\\ y_3 \\\\ y_4 \\\\ y_5 \\\\ y_6 \\\\ y_7 \\end{bmatrix} = \n\\begin{bmatrix}1 & 0 & 0 \\\\1 &0 &0 \\\\ 1 & 0 & 0 \\\\ 0 & 1 & 0 \\\\ 0 & 1 & 0 \\\\ 0 & 0 & 1 \\\\ 0 & 0 & 1\\end{bmatrix}\n\\begin{bmatrix}\\mu_1 \\\\ \\mu_2 \\\\ \\mu_3 \\end{bmatrix}\n+\n\\begin{bmatrix} \\varepsilon_1 \\\\ \\varepsilon_2 \\\\ \\varepsilon_3 \\\\\n\\varepsilon_4 \\\\ \\varepsilon_5 \\\\ \\varepsilon_6 \\\\ \\varepsilon_7 \\end{bmatrix}\n"
},
{
"math_id": 14,
"text": "\\mu_i"
},
{
"math_id": 15,
"text": "i"
},
{
"math_id": 16,
"text": "\\tau_i"
},
{
"math_id": 17,
"text": " y_{ij} = \\mu + \\tau_i + \\varepsilon_{ij} "
},
{
"math_id": 18,
"text": "\\tau_1"
},
{
"math_id": 19,
"text": "\n\\begin{bmatrix}y_1 \\\\ y_2 \\\\ y_3 \\\\ y_4 \\\\ y_5 \\\\ y_6 \\\\ y_7 \\end{bmatrix} = \n\\begin{bmatrix}1 &0 &0 \\\\1 &0 &0 \\\\ 1 & 0 & 0 \\\\ 1 & 1 & 0 \\\\ 1 & 1 & 0 \\\\ 1 & 0 & 1 \\\\ 1 & 0 & 1\\end{bmatrix}\n\\begin{bmatrix}\\mu \\\\ \\tau_2 \\\\ \\tau_3 \\end{bmatrix}\n+\n\\begin{bmatrix} \\varepsilon_1 \\\\ \\varepsilon_2 \\\\ \\varepsilon_3 \\\\\n\\varepsilon_4 \\\\ \\varepsilon_5 \\\\ \\varepsilon_6 \\\\ \\varepsilon_7 \\end{bmatrix}\n"
},
{
"math_id": 20,
"text": "\\mu"
}
] | https://en.wikipedia.org/wiki?curid=949189 |
9492439 | Differential graded category | In mathematics, especially homological algebra, a differential graded category, often shortened to dg-category or DG category, is a category whose morphism sets are endowed with the additional structure of a differential graded formula_0-module.
In detail, this means that formula_1, the morphisms from any object "A" to another object "B" of the category is a direct sum
formula_2
and there is a differential "d" on this graded group, i.e., for each "n" there is a linear map
formula_3,
which has to satisfy formula_4. This is equivalent to saying that formula_1 is a cochain complex. Furthermore, the composition of morphisms
formula_5 is required to be a map of complexes, and for all objects "A" of the category, one requires formula_6.
For example, any additive category may be viewed as a dg category in which the morphism groups formula_7 vanish for formula_8 and the differential is formula_9. Another basic example is the category formula_10 of cochain complexes over an additive category formula_11: its group of degree-n morphisms formula_12, whose elements can be thought of as maps formula_13, is given by
formula_14.
The differential of such a morphism formula_15 of degree "n" is defined to be
formula_16,
where formula_17 are the differentials of "A" and "B", respectively. This applies to the category of complexes of quasi-coherent sheaves on a scheme over a ring.
Further properties.
The category of small dg-categories can be endowed with a model category structure such that weak equivalences are those functors that induce an equivalence of derived categories.
Given a dg-category "C" over some ring "R", there is a notion of smoothness and properness of "C" that reduces to the usual notions of smooth and proper morphisms in case "C" is the category of quasi-coherent sheaves on some scheme "X" over "R".
Relation to triangulated categories.
A DG category "C" is called pre-triangulated if it has a suspension functor
formula_18 and a class of distinguished triangles compatible with the
suspension, such that its homotopy category Ho("C") is a triangulated category.
A triangulated category "T" is said to have a "dg enhancement" "C" if "C"
is a pretriangulated dg category whose homotopy category is equivalent to "T". dg enhancements of exact functors between triangulated categories are defined similarly. In general, there need not exist dg enhancements of triangulated categories or of functors between them; for example, the stable homotopy category can be shown not to arise from a dg category in this way. However, various positive results do exist; for example, the derived category "D"("A") of a Grothendieck abelian category "A" admits a unique dg enhancement.
{
"math_id": 0,
"text": "\\Z"
},
{
"math_id": 1,
"text": "\\operatorname{Hom}(A,B)"
},
{
"math_id": 2,
"text": "\\bigoplus_{n \\in \\Z}\\operatorname{Hom}_n(A,B)"
},
{
"math_id": 3,
"text": "d\\colon \\operatorname{Hom}_n(A,B) \\rightarrow \\operatorname{Hom}_{n+1}(A,B)"
},
{
"math_id": 4,
"text": "d \\circ d = 0"
},
{
"math_id": 5,
"text": "\\operatorname{Hom}(A,B) \\otimes \\operatorname{Hom}(B,C) \\rightarrow \\operatorname{Hom}(A,C)"
},
{
"math_id": 6,
"text": "d(\\operatorname{id}_A) = 0"
},
{
"math_id": 7,
"text": "\\mathrm{Hom}_n(-,-)"
},
{
"math_id": 8,
"text": "n\\ne 0"
},
{
"math_id": 9,
"text": "d=0"
},
{
"math_id": 10,
"text": "C(\\mathcal A)"
},
{
"math_id": 11,
"text": "\\mathcal A"
},
{
"math_id": 12,
"text": "\\operatorname{Hom}_{C(\\mathcal A), n} (A, B)"
},
{
"math_id": 13,
"text": "A \\rightarrow B[n]"
},
{
"math_id": 14,
"text": "\\mathrm{Hom}_{C(\\mathcal A), n} (A, B) = \\prod_{l \\in \\Z} \\mathrm{Hom}(A_l, B_{l+n})"
},
{
"math_id": 15,
"text": "f = (f_l \\colon A_l \\rightarrow B_{l+n})"
},
{
"math_id": 16,
"text": "f_{l+1} \\circ d_A + (-1)^{n+1} d_B \\circ f_l"
},
{
"math_id": 17,
"text": "d_A, d_B"
},
{
"math_id": 18,
"text": "\\Sigma"
}
] | https://en.wikipedia.org/wiki?curid=9492439 |
9492642 | Nakagami distribution | Statistical distribution
The Nakagami distribution or the Nakagami-"m" distribution is a probability distribution related to the gamma distribution. It is used to model physical phenomena, such as those found in medical ultrasound imaging, communications engineering, meteorology, hydrology, multimedia, and seismology.
The family of Nakagami distributions has two parameters: a shape parameter formula_0 and a second parameter controlling spread formula_1.
Characterization.
Its probability density function (pdf) is
formula_2
where formula_3 and formula_4.
Its cumulative distribution function (CDF) is
formula_5
where "P" is the regularized (lower) incomplete gamma function.
Parameterization.
The parameters formula_6 and formula_7 are
formula_8
and
formula_9
No closed form solution exists for the median of this distribution, although special cases do exist, such as formula_10 when "m" = 1. For practical purposes the median would have to be calculated as the 50th-percentile of the observations.
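A minimal sketch of estimating both parameters from observed data via the two moment relations above; the input array is assumed to hold positive Nakagami-distributed samples.
```python
import numpy as np

def nakagami_moment_estimates(x):
    """Method-of-moments estimates of (m, Omega) from the relations above."""
    x2 = np.asarray(x, dtype=float) ** 2
    omega = x2.mean()             # Omega = E[X^2]
    m = omega ** 2 / x2.var()     # m = (E[X^2])^2 / Var[X^2]
    return m, omega

# Usage: m_hat, omega_hat = nakagami_moment_estimates(samples)
```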
Parameter estimation.
An alternative way of fitting the distribution is to re-parametrize formula_11 as "σ" = Ω/"m".
Given independent observations formula_12 from the Nakagami distribution, the likelihood function is
formula_13
Its logarithm is
formula_14
Therefore
formula_15
These derivatives vanish only when
formula_16
and the value of "m" for which the derivative with respect to "m" vanishes is found by numerical methods including the Newton–Raphson method.
It can be shown that at the critical point a global maximum is attained, so the critical point is the maximum-likelihood estimate of ("m","σ"). Because of the equivariance of maximum-likelihood estimation, a maximum likelihood estimate for Ω is obtained as well.
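A sketch of this maximum-likelihood fit: after substituting the expression for σ, the remaining equation in m reduces to log m − ψ(m) = log(mean(x²)) − mean(log x²). Here it is solved with a bracketing root-finder (SciPy's brentq) rather than Newton–Raphson, and the bracket endpoints are an arbitrary but generous choice.
```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def nakagami_mle(x):
    """Maximum-likelihood estimates of (m, Omega), per the score equations above."""
    x = np.asarray(x, dtype=float)
    # log(m) - digamma(m) must equal this non-negative, data-dependent constant:
    delta = np.log(np.mean(x ** 2)) - np.mean(np.log(x ** 2))
    m_hat = brentq(lambda m: np.log(m) - digamma(m) - delta, 1e-3, 1e3)
    omega_hat = np.mean(x ** 2)   # = m_hat * sigma_hat, since sigma = Omega/m
    return m_hat, omega_hat
```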
Random variate generation.
The Nakagami distribution is related to the gamma distribution.
In particular, given a random variable formula_17, it is possible to obtain a random variable formula_18, by setting formula_19, formula_20, and taking the square root of formula_21:
formula_22
Alternatively, the Nakagami distribution formula_23 can be generated from the chi distribution with parameter formula_24 set to formula_25 and then following it by a scaling transformation of random variables. That is, a Nakagami random variable formula_26 is generated by a simple scaling transformation on a chi-distributed random variable formula_27 as below.
formula_28
For a chi-distribution, the degrees of freedom formula_29 must be an integer, but for Nakagami the formula_6 can be any real number greater than 1/2. This is the critical difference and accordingly, Nakagami-m is viewed as a generalization of chi-distribution, similar to a gamma distribution being considered as a generalization of chi-squared distributions.
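A minimal sketch of the gamma-based recipe above (the parameter values and sample size are arbitrary): draw Y ~ Gamma(shape = m, scale = Ω/m) and take the square root.
```python
import numpy as np

rng = np.random.default_rng(seed=0)

def nakagami_rvs(m, omega, size):
    """Nakagami(m, Omega) variates via Y ~ Gamma(shape=m, scale=Omega/m), X = sqrt(Y)."""
    y = rng.gamma(shape=m, scale=omega / m, size=size)
    return np.sqrt(y)

samples = nakagami_rvs(m=2.0, omega=1.5, size=100_000)
print(np.mean(samples ** 2))   # should be close to Omega = 1.5
```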
History and applications.
The Nakagami distribution is relatively new, being first proposed in 1960 by Minoru Nakagami as a mathematical model for small-scale fading in long-distance high-frequency radio wave propagation. It has been used to model attenuation of wireless signals traversing multiple paths and to study the impact of fading channels on wireless communications.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m\\geq 1/2 "
},
{
"math_id": 1,
"text": "\\Omega > 0"
},
{
"math_id": 2,
"text": " f(x;\\,m,\\Omega) = \\frac{2m^m}{\\Gamma(m)\\Omega^m}x^{2m-1}\\exp\\left(-\\frac{m}{\\Omega}x^2\\right) \\text{ for } x\\geq 0.\n"
},
{
"math_id": 3,
"text": "m\\geq 1/2"
},
{
"math_id": 4,
"text": "\\Omega>0"
},
{
"math_id": 5,
"text": " F(x;\\,m,\\Omega) = \\frac{\\gamma\\left(m, \\frac{m}{\\Omega}x^2\\right)}{\\Gamma(m)} = P\\left(m, \\frac{m}{\\Omega}x^2\\right)"
},
{
"math_id": 6,
"text": "m"
},
{
"math_id": 7,
"text": "\\Omega"
},
{
"math_id": 8,
"text": " m = \\frac{\\left( \\operatorname{E} [X^2] \\right)^2 }\n {\\operatorname{Var} [X^2]},\n"
},
{
"math_id": 9,
"text": " \\Omega = \\operatorname{E} [X^2]. "
},
{
"math_id": 10,
"text": "\\sqrt{\\Omega \\ln(2)}"
},
{
"math_id": 11,
"text": " \\Omega "
},
{
"math_id": 12,
"text": " X_1=x_1,\\ldots,X_n=x_n "
},
{
"math_id": 13,
"text": " L( \\sigma, m) = \\left( \\frac{2}{\\Gamma(m)\\sigma^m} \\right)^n \\left( \\prod_{i=1}^n x_i\\right)^{2m-1} \\exp\\left(-\\frac{\\sum_{i=1}^n x_i^2} \\sigma \\right). "
},
{
"math_id": 14,
"text": " \\ell(\\sigma, m) = \\log L(\\sigma,m) = -n \\log \\Gamma(m) - nm\\log\\sigma + (2m-1) \\sum_{i=1}^n \\log x_i - \\frac{ \\sum_{i=1}^n x_i^2} \\sigma. "
},
{
"math_id": 15,
"text": "\n\\begin{align}\n\\frac{\\partial\\ell}{\\partial\\sigma} = \\frac{-nm\\sigma+\\sum_{i=1}^n x_i^2}{\\sigma^2} \\quad \\text{and} \\quad \\frac{\\partial\\ell}{\\partial m} = -n\\frac{\\Gamma'(m)}{\\Gamma(m)} -n \\log\\sigma + 2\\sum_{i=1}^n \\log x_i.\n\\end{align}\n"
},
{
"math_id": 16,
"text": " \\sigma= \\frac{\\sum_{i=1}^n x_i^2}{nm} "
},
{
"math_id": 17,
"text": "Y \\, \\sim \\textrm{Gamma}(k, \\theta)"
},
{
"math_id": 18,
"text": "X \\, \\sim \\textrm{Nakagami} (m, \\Omega)"
},
{
"math_id": 19,
"text": "k=m"
},
{
"math_id": 20,
"text": "\\theta=\\Omega / m "
},
{
"math_id": 21,
"text": "Y"
},
{
"math_id": 22,
"text": " X = \\sqrt{Y}. \\,"
},
{
"math_id": 23,
"text": "f(y; \\,m,\\Omega)"
},
{
"math_id": 24,
"text": "k"
},
{
"math_id": 25,
"text": "2m"
},
{
"math_id": 26,
"text": "X"
},
{
"math_id": 27,
"text": "Y \\sim \\chi(2m) "
},
{
"math_id": 28,
"text": " X = \\sqrt{(\\Omega / 2 m)Y} ."
},
{
"math_id": 29,
"text": " 2m "
},
{
"math_id": 30,
"text": "m = \\tfrac 1 2"
}
] | https://en.wikipedia.org/wiki?curid=9492642 |
9492657 | Isotomic conjugate | Point constructed from another point and a reference triangle
In geometry, the isotomic conjugate of a point P with respect to a triangle △"ABC" is another point, defined in a specific way from P and △"ABC": If the base points of the lines PA, PB, PC on the sides opposite A, B, C are reflected about the midpoints of their respective sides, the resulting lines intersect at the isotomic conjugate of P.
Construction.
We assume that P is not collinear with any two vertices of △"ABC". Let A', B', C' be the points in which the lines AP, BP, CP meet sidelines BC, CA, AB (extended if necessary). Reflecting A', B', C' in the midpoints of sides will give points A", B", C" respectively. The isotomic lines AA", BB", CC" joining these new points to the vertices meet at a point (which can be proved using Ceva's theorem), the "isotomic conjugate" of P.
Coordinates.
If the trilinears for P are "p" : "q" : "r", then the trilinears for the isotomic conjugate of P are
formula_0
where a, b, c are the side lengths opposite vertices A, B, C respectively.
Properties.
The isotomic conjugate of the centroid of triangle △"ABC" is the centroid itself.
The isotomic conjugate of the symmedian point is the third Brocard point, and the isotomic conjugate of the Gergonne point is the Nagel point.
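A small numerical check of the trilinear formula and of the centroid property stated above (the side lengths are an arbitrary choice): the centroid, with trilinears 1/a : 1/b : 1/c, maps to itself.
```python
from fractions import Fraction

def isotomic_conjugate(p, q, r, a, b, c):
    """Trilinears of the isotomic conjugate of (p : q : r) for side lengths a, b, c."""
    return (1 / (a * a * p), 1 / (b * b * q), 1 / (c * c * r))

def normalized(t):
    """Scale homogeneous coordinates so the first entry is 1."""
    return tuple(x / t[0] for x in t)

a, b, c = Fraction(6), Fraction(7), Fraction(9)     # arbitrary triangle side lengths
centroid = (1 / a, 1 / b, 1 / c)                    # trilinears of the centroid
image = isotomic_conjugate(*centroid, a, b, c)
print(normalized(image) == normalized(centroid))    # True: the centroid is fixed
```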
Isotomic conjugates of lines are circumconics, and conversely, isotomic conjugates of circumconics are lines. (This property holds for isogonal conjugates as well.) | [
{
"math_id": 0,
"text": "a^{-2}p^{-1} : b^{-2}q^{-1} : c^{-2}r^{-1},"
}
] | https://en.wikipedia.org/wiki?curid=9492657 |
9493307 | Josef Meixner | German theoretical physicist
Josef Meixner (24 April 1908 – 19 March 1994) was a German theoretical physicist, known for his work on the physics of deformable bodies, thermodynamics, statistical mechanics, Meixner polynomials, Meixner–Pollaczek polynomials, and spheroidal wave functions.
Education.
Meixner began his studies in theoretical physics with Arnold Sommerfeld at the Ludwig Maximilian University of Munich in 1926. He was awarded his doctorate in 1931, with the submission of a thesis on the application of the Green function in quantum mechanics.
Career.
Meixner taught at a high school for a few years. He was an assistant at the Institute of Theoretical Physics in Munich until 1934. He worked with Salomon Bochner to determine that the Hermite polynomials were the only orthogonal polynomials formula_0 with generating functions of the form formula_1.
Meixner later wrote in his personal memoirs about his close friend, an Austrian Jew who came to Munich in 1929 and left for Princeton in 1933:
Bochner foresaw the coming political development very clearly, and I recall when we, surely at the end of 1932, stood before a bulletin board of the Voelkischer Beobachter and he said: ‘Now it is almost time that I must depart’. When I [at age 24] replied that then I would also like to leave, he replied: You remain here; nothing will happen to you and for us there are too few places in the world.
Meixner loosened the condition on the generating function and determined that formula_2 was satisfied by five classes of polynomials, known as the Hermite polynomials, Charlier polynomials, Laguerre polynomials, Meixner polynomials and Meixner-Pollaczek polynomials. He published this result in 1934.
In 1934, Meixner became an Assistant in theoretical physics to Karl Bechert at the University of Giessen. Also in 1934, he joined the Nazi paramilitary SA. In 1937, he became a Lecturer and also joined the Nazi party. From 1939 to 1941 he was a lecturer at the Humboldt University of Berlin. Serving in the German army, from September 1941 he was stationed at the weather station at Vadsø, Finnmark, Norway.
In 1942, he was appointed “Extraordinary” Professor of theoretical physics and Director of the Institute of Theoretical Physics at Rhine-Westfalian Institute of Technology, Aachen, Germany. However, he could not take up this position until he was released from the armed forces in the summer of 1943. Meixner received a denazification certificate (Persilschein) with the help of his PhD advisor Sommerfeld, one of the few German scientists untainted with Nazi affiliation, who wrote that Meixner had never been a supporter of the Nazi system but in his circumstances it would have been very difficult for him to avoid joining the SA. Meixner was promoted to “Ordinary” Professor in 1949.
After Sommerfeld's death in 1951, Meixner edited a volume and new editions of two other volumes of Sommerfeld's six-volume "Vorlesungen über theoretische Physik".
Meixner conducted research and taught graduate courses at the Institut für theoretische Physik of the Rheinisch-Westfälische Technische Hochschule Aachen (RWTH Aachen). Literature citations, as well as doctorates granted to his students, put Meixner at RWTH Aachen, or just Aachen, for years in each of the decades from the 1950s until his death in 1994. He was known for his work on the physics of deformable bodies (rheology), thermodynamics, statistical mechanics, Meixner polynomials, Meixner-Pollaczek polynomials, and spheroidal wave functions.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p_n(x)"
},
{
"math_id": 1,
"text": "f(t)e^{xt} = \\sum_{n=0}^{\\infty}p_n(x)\\frac{t^n}{n!}"
},
{
"math_id": 2,
"text": "f(t)e^{xu(t)} = \\sum_{n=0}^{\\infty}p_n(x)\\frac{t^n}{n!}"
}
] | https://en.wikipedia.org/wiki?curid=9493307 |
9493613 | Sieving coefficient | Mathematical coefficient used in mass transfer calculations
In mass transfer, the sieving coefficient is a measure of equilibration between the concentrations of two mass transfer streams. It is defined as the mean pre- and post-contact concentration of the mass receiving stream divided by the pre- and post-contact concentration of the mass donating stream.
formula_0
where C_r is the mean (pre- and post-contact) concentration of the solute in the mass-receiving stream and C_d is the mean (pre- and post-contact) concentration in the mass-donating stream.
A sieving coefficient of unity implies that the concentrations of the receiving and donating streams equilibrate, i.e. the out-flow concentrations (post-mass transfer) of the mass donating and receiving streams are equal to one another. Systems with sieving coefficients greater than one require an external energy source, as they would otherwise violate the laws of thermodynamics.
Sieving coefficients less than one represent a mass transfer process where the concentrations have not equilibrated.
Contact time between the mass streams is important to consider in mass transfer and affects the sieving coefficient.
In kidney.
In renal physiology, the glomerular sieving coefficient (GSC) can be expressed as:
sieving coefficient = clearance / ultrafiltration rate | [
{
"math_id": 0,
"text": "S = \\frac{C_r}{C_d}"
}
] | https://en.wikipedia.org/wiki?curid=9493613 |
9494408 | Pedelec | Bicycle where the rider's pedalling is assisted by an electric motor
A Pedelec (from pedal electric cycle) or EPAC ("electronically power assisted cycle") is a type of low-powered electric bicycle where the rider's pedalling is assisted by a small electric motor. However, unlike some other types of e-bikes, pedelecs are classified as conventional bicycles in many countries by road authorities rather than as a type of electric moped. Pedelecs have an electronic controller that cuts power to the motor when the rider is not pedalling or when a certain speed – usually 25 km/h, depending on the jurisdiction – is reached. Pedelecs are useful for people who ride in hilly areas or in strong headwinds. While a pedelec can be any type of bicycle, a pedelec city bike is very common. A conventional bicycle can be converted to a pedelec with the addition of the necessary parts, e.g., motor, battery, etc.
Many jurisdictions classify pedelecs as bicycles as opposed to mopeds or motorcycles. More powerful e-bikes, such as the S-Pedelecs and "power-on-demand" e-bikes (those whose motors can provide assistance regardless of whether the rider is pedalling) are often classified as mopeds or even motorcycles with the rider thus subject to the regulations of such motor vehicles, e.g., having a license and a vehicle registration, wearing a helmet, etc.
History.
In 1989, Michael Kutter, founder of Dolphin E-bikes, designed and produced the first pedelec prototype. The first market-ready models of this kind were available in 1992 from the Swiss company Velocity under the name Dolphin.
In 1994, larger numbers were produced by Yamaha under the name Power Assist.
In 1995, the Swiss start-up company BKTech AG, founded in the same year, brought the first Flyer to market in small series, with e-business as an integral part of the start-up.
Pedelec market penetration.
Germany.
As of 2012 there were about 600,000 pedelecs on the road in Germany. Growth has been spectacular: the year before, 310,000–340,000 pedelecs were sold in Germany and this in turn was 55% more than in 2010. In fact, in Germany sales have gone up by more than 30% every year since 2008. In comparison, there were around 70 million conventional bicycles in Germany in 2011 according to ZIV, the German Bicycle Industry Association.
About 95% of all e-bikes in Germany are in fact pedelecs.
ADAC, the German automotive club, tested a large number of pedelecs in 2013; about 56% of them failed the test with poor scores due to unsatisfactory safety and durability.
Elsewhere.
Only the Chinese market for pedelecs and e-bikes is bigger than the European. According to the National Bureau of Statistics in China, more than 100 million e-bikes are on the road. Annual production in Chinese factories has increased from 58,000 in 1998 to 33 million in 2011. A pedelec classification separate from an e-bike is not known in China.
Legal status of pedelecs worldwide.
To really be useful, it is important for a pedelec to be legally classified as a bicycle in each country or jurisdiction rather than classified as a moped or motorcycle. Otherwise, if a pedelec is classified as a moped or motorbike then it may not be allowed in bike lanes or on bike paths; the pedelec may have to be registered; the rider may have to wear a motorcycle helmet; and/or vehicle insurance may have to be paid for.
Europe.
In the European Union a pedelec does not need registration, insurance, or license plate, if it adheres to these rules:
If any of these rules is not followed, the vehicle is classified as either e-bike or S-Pedelec which require a license plate and insurance, or as a motorbike which also requires a driving license.
Austria.
Under Austrian law there is no distinction between the different types of electric bicycles, such as those powered exclusively by electricity without pedals (see below, lit. d) or hybrid-powered ones (see below, lit. b), regardless of whether they are power-assisted only while pedalling ("pedelec", but not S-Pedelec) or do not require pedalling at all (commonly known in Austria as "e-bike").
For such electric vehicles to be classified not as motor vehicles but as "Fahrrad" (= bicycle) according to § 2 paragraph 1 number 22 lit. b and d of the Road Traffic Act 1960 (StVO 1960) in conjunction with § 1 paragraph 2a of the Motor Vehicle Law 1967 (KFG 1967), two types of electric bikes can be distinguished:
The above named § 1 paragraph 2a KFG 1967 defines as follows:
As for normal (purely muscle-powered) bikes, the provisions of the bicycle regulation also apply to electric bikes; the traffic rules of the StVO for operating them are the same as those for muscle-powered bicycles, e.g. the mandatory use of bicycle paths and lanes. For their (commercial) placing on the market, they are subject to the product liability provisions.
If the above criteria are exceeded, the electric bicycle (whether a so-called S-Pedelec or any other e-bike) is a motor vehicle under the rules of the KFG 1967 and not a "Fahrrad" under the StVO 1960, and may only be operated as a moped, with the corresponding consequences. Liability insurance must be taken out, a helmet must be worn, and a driver's license of the corresponding class L1e-B (vehicle classification "two-wheel moped" in Regulation (EU) No 168/2013) must be held. It must also be equipped like a moped, with a maximum design speed of no more than 45 km/h. For these, the buyer should make sure to receive a COC (certificate of conformity) from the dealer in addition to the purchase contract; only with these documents can the fast e-bike be registered.
Electric bicycles other than those described above cannot be type-approved in Austria.
The Netherlands.
True pedelecs are not subject to any requirements beyond those that apply to a bicycle.
However, any pedelec whose power assistance is triggered merely by the wheels turning rather than by pedal motion (a large number of cheap versions, notoriously those with front-hub assistance) is required to have a scooter/small-motorcycle licence plate, a valid driving licence and insurance.
For such vehicles, if the power assistance stops at a speed of up to 25 km/h, riders are not required to wear motorcycle helmets; however, this speed limit must then not be exceeded even when pedalling only. There is no legal speed limit for purely human-powered vehicles, including unassisted pedelecs ridden faster than 25 km/h.
Electric bicycles without the 25 km/h limitation on power assistance, for example the Specialized Turbo, are considered small motorcycles; besides a license plate (yellow with black letters), a driving license and insurance, a 'motorcycle helmet' must be worn at all times from the start of 2017 onwards.
A large fleet of electric bicycles and pedelecs lacking the required power control linked to pedalling effort can be seen on cycling paths without any proper registration.
Additionally, many users have found very simple ways to tweak their pedelecs so as to bypass the pedalling sensor, which makes such pedelecs illegal without proper vehicle registration.
Hong Kong.
Pedelecs, and all kinds of mechanical assist, are regarded as "motor vehicles" and classified as motor cycles, making legal registration impossible. The Hong Kong Transport Department is currently conducting a review, with a first report expected in mid-2020.
Singapore.
Pedelecs are allowed provided the rider wears a helmet, the motor output is limited to 200 W, and the motor cuts out at 25 km/h.
India.
Electric vehicles with a motor having power less than 250 W, and a maximum speed 25 km/h or lower, are not required to be registered under the Central Motor Vehicle Rules, and may be driven freely without any license/paperwork.
Japan.
Electric-assisted bicycles are treated as human-powered bicycles, while bicycles capable of propulsion by electric power alone face additional registration and regulatory requirements as mopeds. Requirements include electric power generation by a motor that cannot be easily modified, along with a power assist mechanism that operates safely and smoothly. In December 2008, the assist ratio was updated as follows:
In October 2017, for the special case of a three-wheel bicycle towing a cart with a towing device only, the ratio was updated as follows:
Australia.
As of 30 May 2012, Australia has an additional new electric bicycle category using the European model of a "Pedelec" as per EN15194 Standard. This means the bicycle can have a motor of 250 watts continuous rated power which must be activated only by pedalling (if above 6 km/h) and must cut out over 25 km/h. The State of Victoria is the first to amend their local road rules to accommodate this new standard as of 18 September 2012.
Technical.
Components.
Pedelecs differ from an ordinary bicycle by an additional electric motor, a battery, an electronic control system for the motor as well as a sensor to detect the motion of the cranks. Most models are also equipped with a battery charge indicator and a motor power setting, either continuously or divided into support levels.
Battery.
Besides the motor, the battery is the main component of pedelec technology. It is usually either a NiMH, NiCd, or lithium-ion battery. The battery capacity is up to 24 Ah at 24 or 36 V, or up to 15 amp-hours at 48 V. The stored energy can be up to about 800 watt-hours (Wh), but is mostly about 400 Wh (2013). In ideal conditions, after a thousand charges NiCd batteries retain 85% of their original capacity and are therefore considered worn. With NiMH batteries about 400 to 800 cycles are possible. The charging time, depending on the type of battery, is around 2 to 9 hours. The durability of the battery also depends on other factors. As lead-acid batteries discharge they provide less power, so that full motor power is no longer achieved. The very light, more expensive lithium-ion batteries are now used by most manufacturers and have a range of up to 100 kilometers with moderate pedaling and a medium-capacity battery (e.g. 15 Ah). Lithium batteries do not tolerate frost and should not be charged at sub-freezing temperatures. For safety, the chemical composition and the quality of the electronics are crucial: lithium-ion batteries react very strongly to short circuits and over-voltage, problems which in laptops have led to recalls. Lithium iron phosphate (LFP) batteries are a notable exception; they have far safer thermal characteristics as well as being non-toxic.
In evaluating pedelec batteries, it is useful to consider not only the capacity, but also criteria such as durability, memory effect, charging time, weight, safety and environmental protection.
Manufacturers which equip their pedelecs with NiCd batteries usually deliver them with an AC adapter that discharges the battery completely before the actual charging process in order to decrease the memory effect. NiMH batteries have a much lower memory effect. With lithium-ion batteries there is no memory effect.
A lithium iron phosphate battery is much longer-lived than a lithium-ion battery. Its use significantly reduces operating costs resulting from battery wear. In 2013, they are not yet available as standard in most pedelec models, but some pedelecs (e.g. Beyond Oil) have begun installing LFP batteries as standard.
Motor control.
For switching or control of the motor, there are several possibilities:
In addition, the speed of the vehicle is measured at the wheel, for example in order to switch the motor off above 25 km/h.
The measurement can be processed mechanically or electronically and is used either to switch the motor on and off or to regulate it continuously via a control function.
The power supplied is calculated by the motor controller from the sensor data (force sensor, crank speed, ground speed) and the chosen level of support. The so-called support levels – that is, how much the motor contributes in addition to the rider's own effort – typically range from 5 to 400 percent on level ground.
When the motors are regularly used heavily, especially when going uphill, they may heat up significantly; some therefore have a temperature sensor in the motor winding, and if a certain temperature is reached the electronics may reduce power to the motor. Ideally the electronics disconnect the battery at a predetermined discharge voltage to prevent total discharge and to ensure a sufficient supply for the operation of the lighting system. This can be done by electronics in the battery.
Force control.
With a force sensor, the motor automatically contributes a certain percentage of the power provided by the rider. In many models, this proportion may be set in several stages. There are also models where the support level can only be set for the customer by the dealer.
Rotary motion detection.
In the version with speed sensor(s), the motor is automatically regulated, via a function, to a set percentage of the rider's own applied force. Since the force required rises sharply with speed, in some models it can be calculated without a force sensor.
Push assistance or starting aid.
The push assistance or starting aid makes use of the legal allowance for motor support without pedalling up to 6 km/h. It has the advantage that the bike can roll along under motor power without pedalling while it is being pushed (e.g. when transporting a heavy load, or when walking the bike up a hill). For some models, the allowed 6 km/h is reached only in top gear; in lower gears the wheel rolls correspondingly slower. In any case, it allows a faster (and physically more controlled) start from a standstill, for example when a traffic light turns green.
Power electronics.
The power electronics, depending on the type of motor, consist of a DC motor controller with pulse-width modulation or a regulated DC-AC converter.
Motor types.
Almost exclusively, pedelecs use DC motors: brushless (commutator-less) disc motors and brushed disc motors, which are suitable for direct drive, as well as brushed motors with gears.
The use of maintenance-free AC induction motors in pedelecs is the exception.
A direct-drive hub motor may feature a regenerative brake, so it can be used as a brake that converts some of the kinetic energy into battery charge. In addition to charging the battery when braking, this reduces wear on the traditional brakes and reduces braking noise.
Usage.
E-bikes are divided into many categories according to their uses, such as electric city bikes, electric folding bikes, electric fat tire bikes, electric mountain bikes and so on.
Force approach of the electric drive.
See, in general, the possible points of application of an electric drive. Specific to the pedelec is the control of the drive by pedalling (see above), which may be integrated into the drive.
Drive positions.
The position of the motor has a significant impact on the handling of the pedelec. The following combinations of actuator position and motor have been successful:
Range.
Generally, the range with motor support is between 7 km (on a constant climb) and up to 70 km. With a medium level of assistance it is about 20 to 50 km. Some models house two sequentially switchable batteries in the luggage bags as standard; for these, the specified range at medium assistance is around 100 km.
A conventional battery (36 V / 7 Ah) (1.9 to 5.1 kg mass in a pedelec) has an energy content of around 250 Wh (1 kg of gasoline has about 11,500 Wh). The conversion of electrical energy into mechanical work is done with some loss of energy due to the generation of heat. Typically, incurred losses are around 25 percent, depending on the efficiency of the motor and the motor controller. Thus, a pedelec with a 70 kg rider (total mass of ≈100 kg) can be calculated to go about 5.6 kilometres on a 10% grade at 25 km/h on battery power alone (assuming frontal area = 0.4 meter-squared, drag coefficient = 0.7, altitude = 100m, wind speed = 10 km/h (2.8 m/s) and rolling resistance coefficient = 0.007). Depending on the assistance of the rider (which is required on a pedelec), a proportionally greater range is possible.
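The back-of-envelope figure above can be reproduced with a short power-balance sketch. All inputs are taken from the text; the 75% overall efficiency corresponds to the roughly 25 percent conversion losses mentioned, and the air density is an assumed value for 100 m altitude.
```python
import math

mass = 100.0            # kg: rider (70 kg) plus pedelec, as in the text
grade = 0.10            # 10% climb
v = 25 / 3.6            # ground speed in m/s (25 km/h)
headwind = 10 / 3.6     # headwind in m/s (10 km/h)
rho = 1.21              # air density in kg/m^3 at ~100 m altitude (assumed value)
cd_times_a = 0.7 * 0.4  # drag coefficient times frontal area, m^2
crr = 0.007             # rolling resistance coefficient
g = 9.81                # m/s^2

theta = math.atan(grade)
p_climb = mass * g * math.sin(theta) * v                      # lifting against gravity
p_roll = crr * mass * g * math.cos(theta) * v                 # rolling resistance
p_aero = 0.5 * rho * cd_times_a * (v + headwind) ** 2 * v     # aerodynamic drag
p_wheel = p_climb + p_roll + p_aero                           # ~840 W at the wheel

battery_wh = 250.0      # energy content of the 36 V / 7 Ah battery
efficiency = 0.75       # usable fraction after ~25% motor/controller losses
hours = battery_wh * efficiency / p_wheel
print(round(p_wheel), "W at the wheel,", round(hours * 25, 1), "km on motor power alone")
```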
Safety.
Safety issues are a concern in relatively flat areas, but are more pronounced in the hills. Hilly areas provide changing conditions; this poses the possibility of encountering more critical situations and thus more accidents may occur. Cars may need to overtake pedelecs at higher speeds than cars would overtake regular bikes, and this may result in more accidents with serious consequences for both cyclists and drivers. For drivers and pedestrians, it may be difficult to estimate how fast a cyclist is moving. Also, an elderly rider on a pedelec may ride much faster than previously possible. Risky situations can also arise at exits and junctions. To illustrate the consequences of such critical situations, the German Insurers Accident Research (UDV) has conducted a research project with road tests, performance tests and crash tests for pedelecs.
On the other hand, many pedelec (and e-bike) users report that they can ride more defensively with the auxiliary electric drive assisting them; unlike traditional bicyclists that tend to be averse to braking since this incurs effort to accelerate again, a pedelec rider can brake and then accelerate back to a normal speed with much less effort. The Bavarian accident statistics for the first half of 2012 lists 6,186 accidents involving bicycles, of which 76 are e-bikes and notes that the accident risk of e-bikes is not higher than for other bicycles.
The use of S-Pedelecs involves an additional risk. Not only do they achieve a higher average speed and a higher top speed (usually 45 km/h), they can also be expected to cover a higher annual mileage.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2 - \\tfrac{\\text{speed in }\\tfrac{km}{h} - 10}{7}"
},
{
"math_id": 1,
"text": "2 - \\tfrac{3 \\cdot \\text{speed in }\\tfrac{km}{h} - 10}{14}"
}
] | https://en.wikipedia.org/wiki?curid=9494408 |
949526 | Denudation | A geological term describing the various processes that wear away landforms
Denudation is the geological process in which moving water, ice, wind, and waves erode the Earth's surface, leading to a reduction in elevation and in relief of landforms and landscapes. Although the terms erosion and denudation are used interchangeably, erosion is the transport of soil and rocks from one location to another, and denudation is the sum of processes, including erosion, that result in the lowering of Earth's surface. Endogenous processes such as volcanoes, earthquakes, and tectonic uplift can expose continental crust to the exogenous processes of weathering, erosion, and mass wasting. The effects of denudation have been recorded for millennia but the mechanics behind it have been debated for the past 200 years and have only begun to be understood in the past few decades.
Description.
Denudation incorporates the mechanical, biological, and chemical processes of erosion, weathering, and mass wasting. Denudation can involve the removal of both solid particles and dissolved material. These include sub-processes of cryofracture, insolation weathering, slaking, salt weathering, bioturbation, and anthropogenic impacts.
Factors affecting denudation include:
Historical theories.
The effects of denudation have been written about since antiquity, although the terms "denudation" and "erosion" have been used interchangeably throughout most of history. In the Age of Enlightenment, scholars began trying to understand how denudation and erosion occurred without mythical or biblical explanations. Throughout the 18th century, scientists theorized valleys are formed by streams running through them, not from floods or other cataclysms. In 1785, Scottish physician James Hutton proposed an Earth history based on observable processes over an unlimited amount of time, which marked a shift from assumptions based on faith to reasoning based on logic and observation. In 1802, John Playfair, a friend of Hutton, published a paper clarifying Hutton's ideas, explaining the basic process of water wearing down the Earth's surface, and describing erosion and chemical weathering. Between 1830 and 1833, Charles Lyell published three volumes of "Principles of Geology", which describes the shaping of the surface of Earth by ongoing processes, and which endorsed and established gradual denudation in the wider scientific community.
As denudation came into the wider conscience, questions of how denudation occurs and what the result is began arising. Hutton and Playfair suggested over a period of time, a landscape would eventually be worn down to erosional planes at or near sea level, which gave the theory the name "planation". Charles Lyell proposed marine planation, oceans, and ancient shallow seas were the primary driving force behind denudation. While surprising given the centuries of observation of fluvial and pluvial erosion, this is more understandable given early geomorphology was largely developed in Britain, where the effects of coastal erosion are more evident and play a larger role in geomorphic processes. There was more evidence against marine planation than there was for it. By the 1860s, marine planation had largely fallen from favor, a move led by Andrew Ramsay, a former proponent of marine planation who recognized rain and rivers play a more important role in denudation. In North America during the mid-19th century, advancements in identifying fluvial, pluvial, and glacial erosion were made. The work being done in the Appalachians and American West that formed the basis for William Morris Davis to hypothesize peneplanation, despite the fact while peneplanation was compatible in the Appalachians, it did not work as well in the more active American West. Peneplanation was a cycle in which young landscapes are produced by uplift and denuded down to sea level, which is the base level. The process would be restarted when the old landscape was uplifted again or when the base level was lowered, producing a new, young landscape.
Publication of the Davisian cycle of erosion caused many geologists to begin looking for evidence of planation around the world. Unsatisfied with Davis's cycle due to evidence from the Western United States, Grove Karl Gilbert suggested backwearing of slopes would shape landscapes into pediplains, and W.J. McGee named these landscapes pediments. This later gave the concept the name pediplanation when L.C. King applied it on a global scale. The dominance of the Davisian cycle gave rise to several theories to explain planation, such as eolation and glacial planation, although only etchplanation survived time and scrutiny because it was based on observations and measurements done in different climates around the world and it also explained irregularities in landscapes. The majority of these concepts failed, partly because Joseph Jukes, a popular geologist and professor, separated denudation and uplift in an 1862 publication that had a lasting impact on geomorphology. These concepts also failed because the cycles, Davis's in particular, were generalizations and based on broad observations of the landscape rather than detailed measurements; many of the concepts were developed based on local or specific processes, not regional processes, and they assumed long periods of continental stability.
Some scientists opposed the Davisian cycle; one was Grove Karl Gilbert, who, based on measurements over time, realized denudation is nonlinear; he started developing theories based on fluid dynamics and equilibrium concepts. Another was Walther Penck, who devised a more complex theory that denudation and uplift occurred at the same time, and that landscape formation is based on the ratio between denudation and uplift rates. His theory proposed geomorphology is based on endogenous and exogenous processes. Penck's theory, while ultimately being ignored, returned to denudation and uplift occurring simultaneously and relying on continental mobility, even though Penck rejected continental drift. The Davisian and Penckian models were heavily debated for a few decades until Penck's was ignored and support for Davis's waned after his death as more critiques were made. One critic was John Leighly, who stated geologists did not know how landforms were developed, so Davis's theory was built upon a shaky foundation.
From 1945 to 1965, geomorphology research shifted from mostly deductive work to detailed experimental designs that used improved technologies and techniques, although this led to research on the details of established theories rather than on new theories. Through the 1950s and 1960s, as improvements were made in ocean geology and geophysics, it became clearer that Wegener's theory of continental drift was correct and that there is constant movement of parts (the plates) of Earth's surface. Improvements were also made in geomorphology to quantify slope forms and drainage networks, and to find relationships between form and process, and between the magnitude and frequency of geomorphic processes. The final blow to peneplanation came in 1964 when a team led by Luna Leopold published "Fluvial Processes in Geomorphology", which linked landforms with measurable precipitation-infiltration-runoff processes and concluded that no peneplains exist over large areas in modern times, and that any historical peneplains would have to be proven to exist rather than inferred from modern geology. They also stated that pediments could form across all rock types and regions, although through different processes. Through these findings and improvements in geophysics, the study of denudation shifted from planation to studying which relationships affect denudation–including uplift, isostasy, lithology, and vegetation–and measuring denudation rates around the world.
Measurement.
Denudation is measured as the wearing down of Earth's surface, expressed in inches or centimeters per 1,000 years. This rate is intended as an estimate and often assumes uniform erosion, among other things, to simplify calculations. Assumptions made are often valid only for the landscapes being studied. Measurements of denudation over large areas are performed by averaging the rates of subdivisions. Often, no adjustments are made for human impact, which causes the measurements to be inflated. Calculations have suggested that soil loss caused by human activity will change previously calculated denudation rates by less than 30%.
Denudation rates are usually much lower than the rates of uplift and average orogeny rates can be eight times the maximum average denudation. The only areas at which there could be equal rates of denudation and uplift are active plate margins with an extended period of continuous deformation.
Denudation is measured in catchment-scale measurements and can use other erosion measurements, which are generally split into dating and survey methods. Techniques for measuring erosion and denudation include stream load measurement, cosmogenic exposure and burial dating, erosion tracking, topographic measurements, surveying the deposition in reservoirs, landslide mapping, chemical fingerprinting, thermochronology, and analysis of sedimentary records in deposition areas. The most common way of measuring denudation is from stream load measurements taken at gauging stations. The suspended load, bed load, and dissolved load are included in measurements. The weight of the load is converted to volumetric units and the load volume is divided by the area of the watershed above the gauging station. An issue with this method of measurement is the high annual variation in fluvial erosion, which can be up to a factor of five between successive years. An important equation for denudation is the stream power law: formula_0, where E is erosion rate, K is the erodibility constant, A is drainage area, S is channel gradient, and m and n are exponents that are usually given beforehand or assumed based on the location. Most denudation measurements are based on stream load measurements and analysis of the sediment or the water chemistry.
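As an illustration, the stream power law can be evaluated directly once its constants are chosen. In the following Python sketch the values of K, A, S, and the exponents m and n are hypothetical placeholders, not calibrated figures from any study.

```python
# Illustrative evaluation of the stream power law E = K * A**m * S**n.
# All parameter values below are hypothetical placeholders; in practice,
# K, m, and n must be calibrated for the catchment being studied.
def stream_power_erosion_rate(K, A, S, m=0.5, n=1.0):
    """Erosion rate E for drainage area A and channel gradient S."""
    return K * (A ** m) * (S ** n)

# Hypothetical catchment: 10 km^2 drainage area and a 2% channel gradient.
print(stream_power_erosion_rate(K=1e-6, A=1e7, S=0.02))
```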
A more recent technique is cosmogenic isotope analysis, which is used in conjunction with stream load measurements and sediment analysis. This technique measures chemical weathering intensity by calculating chemical alteration in molecular proportions. Preliminary research into using cosmogenic isotopes to measure weathering was done by studying the weathering of feldspar and volcanic glass, which contain most of the material found in the Earth's upper crust. The most common isotopes used are 26Al and 10Be; however, 10Be is used more often in these analyses. 10Be is used due to its abundance and, while it is not stable, its half-life of 1.39 million years is relatively stable compared to the thousand or million-year scale in which denudation is measured. 26Al is used because of the low presence of Al in quartz, making it easy to separate, and because there is no risk of contamination of atmospheric 10Be. This technique was developed because previous denudation-rate studies assumed steady rates of erosion even though such uniformity is difficult to verify in the field and may be invalid for many landscapes; its use to help measure denudation and geologically date events was important. On average, the concentration of undisturbed cosmogenic isotopes in sediment leaving a particular basin is inversely related to the rate at which that basin is eroding. In a rapidly-eroding basin, most rock will be exposed to only a small number of cosmic rays before erosion and transport out of the basin; as a result, isotope concentration will be low. In a slowly-eroding basin, integrated cosmic ray exposure is much greater and isotope concentration will be much higher. Measuring isotopic reservoirs in most areas is difficult with this technique so uniform erosion is assumed. There is also variation in year-to-year measurements, which can be as high as a factor of three.
Problems in measuring denudation include both the technology used and the environment. Landslides can interfere with denudation measurements in mountainous regions, especially the Himalayas. The two main problems with dating methods are uncertainties in the measurements, both with the equipment used and with assumptions made during measurement, and the relationship between the measured ages and histories of the markers. This relates to the problem of making assumptions based on the measurements being made and the area being measured. Measurements can also be affected by environmental factors such as temperature, atmospheric pressure, humidity, elevation, wind, the speed of light at higher elevations when using lasers or time-of-flight measurements, instrument drift, chemical erosion, and, for cosmogenic isotopes, climate and snow or glacier coverage. When studying denudation, the Sadler effect, which states that measurements over short time periods show higher accumulation rates than measurements over longer time periods, should be considered. In a study by James Gilluly, the presented data suggested the denudation rate has stayed roughly the same throughout the Cenozoic era based on geological evidence; however, given estimates of denudation rates at the time of Gilluly's study and the United States' elevation, it would take only 11-12 million years to erode North America, far less than the 66 million years of the Cenozoic.
The research on denudation is primarily done in river basins and in mountainous regions like the Himalayas because these are very geologically active regions, which allows for research on the relationship between uplift and denudation. There is also research on the effects of denudation on karst because only about 30% of chemical weathering from water occurs on the surface. Denudation has a large impact on karst and landscape evolution because the most-rapid changes to landscapes occur when there are changes to subterranean structures. Other research examines the factors that affect denudation rates; this research mostly studies how climate and vegetation impact denudation. Research is also being done to find the relationship between denudation and isostasy; the more denudation occurs, the lighter the crust becomes in an area, which allows for uplift. The work is primarily trying to determine a ratio between denudation and uplift so that better estimates can be made of changes in the landscape. In 2016 and 2019, research was conducted that attempted to apply denudation rates to improve the stream power law so that it can be used more effectively.
Examples.
Denudation exposes deep subvolcanic structures on the present surface of the area where volcanic activity once occurred. Subvolcanic structures such as volcanic plugs and dikes are exposed by denudation.
Other examples include:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E=KA^mS^n"
}
] | https://en.wikipedia.org/wiki?curid=949526 |
949562 | Compound Poisson distribution | Aspect of probability theory
In probability theory, a compound Poisson distribution is the probability distribution of the sum of a number of independent identically-distributed random variables, where the number of terms to be added is itself a Poisson-distributed variable. The result can be either a continuous or a discrete distribution.
Definition.
Suppose that
formula_0
i.e., "N" is a random variable whose distribution is a Poisson distribution with expected value λ, and that
formula_1
are identically distributed random variables that are mutually independent and also independent of "N". Then the probability distribution of the sum of formula_2 i.i.d. random variables
formula_3
is a compound Poisson distribution.
In the case "N" = 0, then this is a sum of 0 terms, so the value of "Y" is 0. Hence the conditional distribution of "Y" given that "N" = 0 is a degenerate distribution.
The compound Poisson distribution is obtained by marginalising the joint distribution of ("Y","N") over "N", and this joint distribution can be obtained by combining the conditional distribution "Y" | "N" with the marginal distribution of "N".
Properties.
The expected value and the variance of the compound distribution can be derived in a simple way from law of total expectation and the law of total variance. Thus
formula_4
formula_5
Then, since E("N") = Var("N") if "N" is Poisson-distributed, these formulae can be reduced to
formula_6
formula_7
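These moment formulas are easy to check by simulation. The following Python sketch draws compound Poisson samples with exponentially distributed summands (an arbitrary illustrative choice) and compares the empirical mean and variance with formula_6 and formula_7; all parameter values are made up for the example.

```python
# Monte Carlo sketch of a compound Poisson distribution:
# N ~ Poisson(lam), X_i i.i.d. exponential (illustrative choice), Y = sum of N terms.
import numpy as np

rng = np.random.default_rng(0)
lam, scale, n_samples = 3.0, 2.0, 100_000          # illustrative parameters

N = rng.poisson(lam, size=n_samples)
# A sum of zero terms is 0, which rng.exponential(size=0).sum() returns correctly.
Y = np.array([rng.exponential(scale, size=n).sum() for n in N])

EX, EX2 = scale, 2 * scale**2                       # moments of the exponential summand
print("empirical mean:", Y.mean(), "  theory:", lam * EX)
print("empirical var :", Y.var(),  "  theory:", lam * EX2)
```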
The probability distribution of "Y" can be determined in terms of characteristic functions:
formula_8
and hence, using the probability-generating function of the Poisson distribution, we have
formula_9
An alternative approach is via cumulant generating functions:
formula_10
Via the law of total cumulance it can be shown that, if the mean of the Poisson distribution "λ" = 1, the cumulants of "Y" are the same as the moments of "X"1.
Every infinitely divisible probability distribution is a limit of compound Poisson distributions, and every compound Poisson distribution is infinitely divisible by definition.
Discrete compound Poisson distribution.
When formula_1 are positive integer-valued i.i.d. random variables with formula_11, this compound Poisson distribution is called the discrete compound Poisson distribution (or stuttering-Poisson distribution). We say that the discrete random variable formula_12 satisfying the probability generating function characterization
formula_13
has a discrete compound Poisson(DCP) distribution with parameters formula_14 (where formula_15, with formula_16), which is denoted by
formula_17
Moreover, if formula_18, we say formula_19 has a discrete compound Poisson distribution of order formula_20. When formula_21, the DCP becomes the Poisson distribution and the Hermite distribution, respectively. When formula_22, the DCP becomes the triple stuttering-Poisson distribution and the quadruple stuttering-Poisson distribution, respectively. Other special cases include the shift geometric distribution, the negative binomial distribution, the geometric Poisson distribution, the Neyman type A distribution, and the Luria–Delbrück distribution in the Luria–Delbrück experiment. For more special cases of the DCP, see the review paper and references therein.
Feller's characterization of the compound Poisson distribution states that a non-negative integer valued r.v. formula_19 is infinitely divisible if and only if its distribution is a discrete compound Poisson distribution. The negative binomial distribution is discrete infinitely divisible, i.e., if "X" has a negative binomial distribution, then for any positive integer "n", there exist discrete i.i.d. random variables "X"1, ..., "X""n" whose sum has the same distribution that "X" has. The shift geometric distribution is discrete compound Poisson distribution since it is a trivial case of negative binomial distribution.
This distribution can model batch arrivals (such as in a bulk queue). The discrete compound Poisson distribution is also widely used in actuarial science for modelling the distribution of the total claim amount.
When some formula_23 are negative, it is the discrete pseudo compound Poisson distribution. We define that any discrete random variable formula_12 satisfying probability generating function characterization
formula_24
has a discrete pseudo compound Poisson distribution with parameters formula_25 where formula_26 and formula_27, with formula_28.
Compound Poisson Gamma distribution.
If "X" has a gamma distribution, of which the exponential distribution is a special case, then the conditional distribution of "Y" | "N" is again a gamma distribution. The marginal distribution of "Y" is a Tweedie distribution with variance power 1 < "p" < 2 (proof via comparison of characteristic function (probability theory)). To be more explicit, if
formula_29
and
formula_30
i.i.d., then the distribution of
formula_31
is a reproductive exponential dispersion model formula_32 with
formula_33
The mapping of the Tweedie parameters formula_34 to the Poisson and gamma parameters formula_35 is the following:
formula_36
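This mapping is straightforward to transcribe into code. The following Python sketch simply evaluates the three formulas above; the input values in the example are arbitrary.

```python
# Sketch of the parameter mapping above, from the Tweedie parameters
# (mu, sigma2, p) to the Poisson rate lambda and the gamma parameters (alpha, beta).
def tweedie_to_poisson_gamma(mu, sigma2, p):
    if not 1.0 < p < 2.0:
        raise ValueError("the compound Poisson-gamma case requires 1 < p < 2")
    lam = mu ** (2.0 - p) / ((2.0 - p) * sigma2)
    alpha = (2.0 - p) / (p - 1.0)
    beta = mu ** (1.0 - p) / ((p - 1.0) * sigma2)
    return lam, alpha, beta

# Example with arbitrary illustrative values.
print(tweedie_to_poisson_gamma(mu=5.0, sigma2=1.0, p=1.5))
```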
Compound Poisson processes.
A compound Poisson process with rate formula_37 and jump size distribution "G" is a continuous-time stochastic process formula_38 given by
formula_39
where the sum is by convention equal to zero as long as "N"("t") = 0. Here, formula_40 is a Poisson process with rate formula_41, and formula_42 are independent and identically distributed random variables, with distribution function "G", which are also independent of formula_43
The discrete version of the compound Poisson process can be used in survival analysis for frailty models.
Applications.
A compound Poisson distribution, in which the summands have an exponential distribution, was used by Revfeim to model the distribution of the total rainfall in a day, where each day contains a Poisson-distributed number of events each of which provides an amount of rainfall which has an exponential distribution. Thompson applied the same model to monthly total rainfalls.
There have been applications to insurance claims and x-ray computed tomography.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N\\sim\\operatorname{Poisson}(\\lambda),"
},
{
"math_id": 1,
"text": "X_1, X_2, X_3, \\dots"
},
{
"math_id": 2,
"text": "N"
},
{
"math_id": 3,
"text": "Y = \\sum_{n=1}^N X_n"
},
{
"math_id": 4,
"text": "\\operatorname{E}(Y)= \\operatorname{E}\\left[\\operatorname{E}(Y \\mid N)\\right]= \\operatorname{E}\\left[N \\operatorname{E}(X)\\right]= \\operatorname{E}(N) \\operatorname{E}(X) ,"
},
{
"math_id": 5,
"text": "\n\\begin{align}\n\\operatorname{Var}(Y) & = \\operatorname{E}\\left[\\operatorname{Var}(Y\\mid N)\\right] + \\operatorname{Var}\\left[\\operatorname{E}(Y \\mid N)\\right] =\\operatorname{E} \\left[N\\operatorname{Var}(X)\\right] + \\operatorname{Var}\\left[N\\operatorname{E}(X)\\right] , \\\\[6pt]\n& = \\operatorname{E}(N)\\operatorname{Var}(X) + \\left(\\operatorname{E}(X) \\right)^2 \\operatorname{Var}(N).\n\\end{align}\n"
},
{
"math_id": 6,
"text": "\\operatorname{E}(Y)= \\operatorname{E}(N)\\operatorname{E}(X) ,"
},
{
"math_id": 7,
"text": "\\operatorname{Var}(Y) = \\operatorname{E}(N)(\\operatorname{Var}(X) + (\\operatorname{E}(X))^2)= \\operatorname{E}(N){\\operatorname{E}(X^2)}."
},
{
"math_id": 8,
"text": "\\varphi_Y(t) = \\operatorname{E}(e^{itY})= \\operatorname{E} \\left( \\left(\\operatorname{E} (e^{itX}\\mid N) \\right)^N \\right)= \\operatorname{E} \\left((\\varphi_X(t))^N\\right), \\,"
},
{
"math_id": 9,
"text": "\\varphi_Y(t) = \\textrm{e}^{\\lambda(\\varphi_X(t) - 1)}.\\,"
},
{
"math_id": 10,
"text": "K_Y(t)=\\ln \\operatorname{E}[e^{tY}]=\\ln \\operatorname E[\\operatorname E[e^{tY}\\mid N]]=\\ln \\operatorname E[e^{NK_X(t)}]=K_N(K_X(t)) . \\,"
},
{
"math_id": 11,
"text": "P(X_1 = k) = \\alpha_k,\\ (k =1,2, \\ldots )"
},
{
"math_id": 12,
"text": "Y"
},
{
"math_id": 13,
"text": " P_Y(z) = \\sum\\limits_{i = 0}^\\infty P(Y = i)z^i = \\exp\\left(\\sum\\limits_{k = 1}^\\infty \\alpha_k \\lambda (z^k - 1)\\right), \\quad (|z| \\le 1)"
},
{
"math_id": 14,
"text": "(\\alpha_1 \\lambda,\\alpha_2 \\lambda, \\ldots ) \\in \\mathbb{R}^\\infty"
},
{
"math_id": 15,
"text": "\\sum_{i = 1}^\\infty \\alpha_i = 1"
},
{
"math_id": 16,
"text": "\\alpha_i \\ge 0,\\lambda > 0"
},
{
"math_id": 17,
"text": "X \\sim {\\text{DCP}}(\\lambda {\\alpha _1},\\lambda {\\alpha _2}, \\ldots )"
},
{
"math_id": 18,
"text": "X \\sim {\\operatorname{DCP}}(\\lambda {\\alpha _1}, \\ldots ,\\lambda {\\alpha _r})"
},
{
"math_id": 19,
"text": "X"
},
{
"math_id": 20,
"text": "r"
},
{
"math_id": 21,
"text": "r = 1,2"
},
{
"math_id": 22,
"text": "r = 3,4"
},
{
"math_id": 23,
"text": "\\alpha_k"
},
{
"math_id": 24,
"text": " G_Y(z) = \\sum\\limits_{i = 0}^\\infty P(Y = i)z^i = \\exp\\left(\\sum\\limits_{k = 1}^\\infty \\alpha_k \\lambda (z^k - 1)\\right), \\quad (|z| \\le 1)"
},
{
"math_id": 25,
"text": "(\\lambda_1 ,\\lambda_2, \\ldots )=:(\\alpha_1 \\lambda,\\alpha_2 \\lambda, \\ldots ) \\in \\mathbb{R}^\\infty"
},
{
"math_id": 26,
"text": "\\sum_{i = 1}^\\infty {\\alpha_i} = 1"
},
{
"math_id": 27,
"text": "\\sum_{i = 1}^\\infty {\\left| {{\\alpha _i}} \\right|} < \\infty"
},
{
"math_id": 28,
"text": "{\\alpha_i} \\in \\mathbb{R},\\lambda > 0 "
},
{
"math_id": 29,
"text": " N \\sim\\operatorname{Poisson}(\\lambda) ,"
},
{
"math_id": 30,
"text": " X_i \\sim \\operatorname{\\Gamma}(\\alpha, \\beta) "
},
{
"math_id": 31,
"text": " Y = \\sum_{i=1}^N X_i "
},
{
"math_id": 32,
"text": "ED(\\mu, \\sigma^2)"
},
{
"math_id": 33,
"text": "\n\\begin{align}\n\\operatorname{E}[Y] & = \\lambda \\frac{\\alpha}{\\beta} =: \\mu , \\\\[4pt]\n\\operatorname{Var}[Y]& = \\lambda \\frac{\\alpha(1+\\alpha)}{\\beta^2}=: \\sigma^2 \\mu^p .\n\\end{align}\n"
},
{
"math_id": 34,
"text": "\\mu, \\sigma^2, p"
},
{
"math_id": 35,
"text": "\\lambda, \\alpha, \\beta"
},
{
"math_id": 36,
"text": "\n\\begin{align}\n\\lambda &= \\frac{\\mu^{2-p}}{(2-p)\\sigma^2} ,\n\\\\[4pt]\n\\alpha &= \\frac{2-p}{p-1} ,\n\\\\[4pt]\n\\beta &= \\frac{\\mu^{1-p}}{(p-1)\\sigma^2} .\n\\end{align}\n"
},
{
"math_id": 37,
"text": "\\lambda>0"
},
{
"math_id": 38,
"text": "\\{\\,Y(t) : t \\geq 0 \\,\\}"
},
{
"math_id": 39,
"text": "Y(t) = \\sum_{i=1}^{N(t)} D_i,"
},
{
"math_id": 40,
"text": " \\{\\,N(t) : t \\geq 0\\,\\}"
},
{
"math_id": 41,
"text": "\\lambda"
},
{
"math_id": 42,
"text": " \\{\\,D_i : i \\geq 1\\,\\}"
},
{
"math_id": 43,
"text": " \\{\\,N(t) : t \\geq 0\\,\\}.\\,"
}
] | https://en.wikipedia.org/wiki?curid=949562 |
949628 | Approximate identity | In mathematics, particularly in functional analysis and ring theory, an approximate identity is a net in a Banach algebra or ring (generally without an identity) that acts as a substitute for an identity element.
Definition.
A right approximate identity in a Banach algebra "A" is a net formula_0 such that for every element "a" of "A", formula_1 Similarly, a left approximate identity in a Banach algebra "A" is a net formula_0 such that for every element "a" of "A", formula_2 An approximate identity is a net which is both a right approximate identity and a left approximate identity.
C*-algebras.
For C*-algebras, a right (or left) approximate identity consisting of self-adjoint elements is the same as an approximate identity. The net of all positive elements in "A" of norm ≤ 1 with its natural order is an approximate identity for any C*-algebra. This is called the canonical approximate identity of a C*-algebra. Approximate identities are not unique. For example, for compact operators acting on a Hilbert space, the net consisting of finite rank projections would be another approximate identity.
If an approximate identity is a sequence, we call it a sequential approximate identity and a C*-algebra with a sequential approximate identity is called σ-unital. Every separable C*-algebra is σ-unital, though the converse is false. A commutative C*-algebra is σ-unital if and only if its spectrum is σ-compact. In general, a C*-algebra "A" is σ-unital if and only if "A" contains a strictly positive element, i.e. there exists "h" in "A"+ such that the hereditary C*-subalgebra generated by "h" is "A".
One sometimes considers approximate identities consisting of specific types of elements. For example, a C*-algebra has real rank zero if and only if every hereditary C*-subalgebra has an approximate identity consisting of projections. This was known as property (HP) in earlier literature.
Convolution algebras.
An approximate identity in a convolution algebra plays the same role as a sequence of function approximations to the Dirac delta function (which is the identity element for convolution). For example, the Fejér kernels of Fourier series theory give rise to an approximate identity.
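A numerical sketch can make this concrete. The Python code below forms the Fejér (Cesàro) means of the Fourier series of a discontinuous test function, which is the same as convolving it with the Fejér kernel, and shows the mean absolute error shrinking as the order grows; the grid size and the test function are arbitrary choices.

```python
# Numerical sketch: the Fejer (Cesaro) means of a Fourier series, i.e. the
# convolutions of a periodic function with the Fejer kernel, approach the
# function as the order grows.
import numpy as np

M = 1024
x = np.linspace(-np.pi, np.pi, M, endpoint=False)
f = np.sign(x)                                   # discontinuous test function

def fejer_mean(values, n):
    """Order-n Fejer mean, computed by damping the Fourier coefficients."""
    c = np.fft.fft(values) / M                   # discrete Fourier coefficients
    k = np.fft.fftfreq(M, d=1.0 / M)             # integer frequencies
    weights = np.clip(1.0 - np.abs(k) / (n + 1), 0.0, None)
    return np.real(np.fft.ifft(c * weights) * M)

for n in (4, 32, 256):
    print(n, np.mean(np.abs(fejer_mean(f, n) - f)))
```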
Rings.
In ring theory, an approximate identity is defined in a similar way, except that the ring is given the discrete topology so that "a" = "ae"λ for some λ.
A module over a ring with approximate identity is called non-degenerate if for every "m" in the module there is some λ with "m" = "me"λ. | [
{
"math_id": 0,
"text": "\\{e_\\lambda : \\lambda \\in \\Lambda\\}"
},
{
"math_id": 1,
"text": "\\lim_{\\lambda\\in\\Lambda}\\lVert ae_\\lambda - a \\rVert = 0."
},
{
"math_id": 2,
"text": "\\lim_{\\lambda\\in\\Lambda}\\lVert e_\\lambda a - a \\rVert = 0."
}
] | https://en.wikipedia.org/wiki?curid=949628 |
9496699 | Gafargaon Upazila | Gafargaon () is an upazila of Mymensingh District in the Division of Mymensingh, Bangladesh.
Geography.
Gafargaon is located at . It has 73,130 households and a total area of 401.16 km2. It is bounded by Trishal upazila on the north; Nandail on the east; the Hossainpur and Pakundia upazilas of Kishoreganj district and the Kapasia and Sreepur upazilas of Gazipur district on the south; and the Trishal and Bhaluka upazilas on the west. Nearly three sides of the upazila are bounded by rivers: the Brahmaputra, the Kalivana on the south, and the Sutia on the west; only the northern side is bounded by land.
Demographics.
According to the 2011 Bangladesh census, Gafargaon Upazila had 99,093 households and a population of 430,746. 109,295 (25.37%) were under 10 years of age. Gafargaon has a literacy rate (age 7 and over) of 49.26%, compared to the national average of 51.8%, and a sex ratio of 1045 females per 1000 males. 42,641 (9.90%) lived in urban areas.
As of the 1991 Bangladesh census, Gafargaon had a population of 379,803, of whom 51.04% were male and 48.96% female. The upazila's population aged 18 and over was 184,633. Gafargaon had an average literacy rate of 90.3% (7+ years), compared with a national average of 70.4%.
Economy.
Cooperatives
Central Co-operative Society Ltd. 01,
Freedom Fighter Co-operative Society Ltd. 2,
Union multipurpose co-operative society Ltd. 15,
Multipurpose Co-operative Society Ltd. 109,
Fisheries Cooperative Society Ltd. 37,
Youth Co-operative Society Ltd. 11,
Asylum / Housing Multipurpose Cooperative Society 05,
Farmer Co-operative Society Ltd. 120, Male Non-Cooperative Co-operative Society Ltd. 06,
Female Non-Exempt Co-operative Society Ltd. 07, Small Business Co-operative Society Ltd. 2,
Other Cooperative Societies 05,
Driver Cooperative Association 3.
Land and revenue.
Mouza 142,
Union Land Office 15,
Municipal land office 01,
Total khas land 1690.61 acres,
Agriculture 167.39 acres,
Non-agricultural 1523.22 acres,
Settlementable agriculture 14.71 acres (agricultural),
Annual land development tax (demand),
Common = 3860280 / -,
Agency = 1,88,04,747 / -,
Annual land development tax (realization),
General = Revenues in July 27, 312 / - July,
Agency = No income in July,
Hat-market number = 34.
Agriculture.
Agriculture
Total amount of land 23,834 hectares,
Net crop land 16,500 hectares,
Total Crop Land 39,103 hectares,
One crop land 3,015 hectares,
Two crop land 4,367 hectares,
Three crop land 9,118 hectares,
Deep tube well 123,
Non-deep tube well 2,423,
Power driven pump 488,
Blank number 54,
Yearly food demand 782,667 tons,
Number of tube wells 4,276.
Fish
Number of ponds 7,454,
Fishery seed production farm official 01,
Fishery seed production farm is non-governmental 06,
Yearly fisheries demand 6,180 m tons,
Annual fishery production 5,513 tons.
Animal resources
Upazila Animal Medical Center 01,
The number of veterinary doctors 01 people,
Artificial breeding center 01,
Number of points 03,
The number of advanced chicken farms 11.
Layer 800 poultry is upwards of 10-49 chickens, so is the farm Countless. Cattle farm 22,
Broiler chicken farm 96.
Administration.
Gafargaon Upazila is divided into Gafargaon Municipality and 15 union parishads: Barobaria, Charalgi, Dotterbazar, Gafargaon, Josora, Longair, Moshakhali, Niguari, Paithol, Panchbagh, Raona, Rasulpur, Saltia, Tangabo, and Usthi. The union parishads are subdivided into 202 mauzas and 214 villages.
Gafargaon Municipality is subdivided into 9 wards and 19 mahallas.
The police stations (thana) are Gafargaon and Pagla. Pagla thana was established in 2012.
Infrastructure.
Health and hospitals.
Upazila Health Complex 01,
Upazila Health and Family Welfare Center 16,
Number of beds 50,
The number of doctor's given words 37,
number of working doctors UHC 17, Union level 16, UHPPO 1, total = 34,
Senior Nurse Number 15, people Working = 13 people,
Assistant Nurses Number 01 people.
Family planning
Health and Family Welfare Center 11,
Family planning clinic 01,
M.C.H. The unit 01,
The number of able couples 84,833 people.
Transport.
Concrete road 147.00 km,
half way 8.00 km,
soil road 334 kilometers,
number of bridges / culvert 466,
number of rivers 4.
Education.
Gafargaon Islamia Govt. High School
The Gafargaon Islamia Govt. High School is a secondary school, established 1906. It is situated at Gafargaon, Bangladesh. The geographical coordinates of Gafargaon Islamia Govt. High School are 24°27'25.83" North, 90°32'55.4" East.
Some information:
EIIN Code: 111523
Students: Around 2000-2500
Khairullah Govt. Girls High School
Another government school at Gafargaon mainly for girls, established 1941. It is situated at Gafargaon, near Shahid Belal Plaza (a shopping complex).
School Code : 7526
Centre Code: (JSC): 493
(SSC): 296
EIIN Code: 111528
Datter Bazar Union Higher Secondary School
Datter Bazar Union High School is another school in the area. In 2012, a college department was added to the school.
In 2019, Datter Bazar Union Higher Secondary School gained the highest rank.
EIIN code: 111553
Beroi Taltola High School
Beroi Taltola High School is one of the oldest schools in Gafargaon Upazila. Before 2010 it was a girls-only school.
Jyoti, a Hindu woman and social worker, established this school on her father's land. It is a fully rural school.
Others
According to Banglapedia, Kandipara Askar Ali High School, founded in 1906, is a notable secondary school.
Nearby is Abdur Rahman Degree College.
Number of educational institutions and literacy rate
Government primary school 160,
Non-government primary school 42,
Community Primary School 20,
Junior high school 06,
High school (co-education) 41,
High school (girl) 03,
Dakhil Madrasa 16,
Alim Madrasa 07,
Fazil Madrasa 04,
Kamil Madrasa 2,
College (class) 09,
College (girl) 01,
literacy rate 65%,
males 68%,
female 62%
Notable residents.
Distinguished people of Gafargaon: Some of Gafargaon's people were respected by respecting all contemporary leaders. | [
] | https://en.wikipedia.org/wiki?curid=9496699 |
9496834 | Carry operator | Symbol
The carry operator, symbolized by the ¢ sign, is an abstraction of the operation of determining whether a portion of an adder network generates or propagates a carry. It is defined as follows:
formula_0 ¢ formula_1 | [
{
"math_id": 0,
"text": "(G_1, P_1) \\ "
},
{
"math_id": 1,
"text": "(G_2, P_2) = (G_1 \\lor G_2 P_1, P_2 P_1) "
}
] | https://en.wikipedia.org/wiki?curid=9496834 |
9499243 | Intelligent driver model | In traffic flow modeling, the intelligent driver model (IDM) is a time-continuous car-following model for the simulation of freeway and urban traffic. It was developed by Treiber, Hennecke and Helbing in 2000 to improve upon results provided with other "intelligent" driver models such as Gipps' model, which loses realistic properties in the deterministic limit.
Model definition.
As a car-following model, the IDM describes the dynamics of the positions and velocities of single vehicles. For vehicle formula_0, formula_1 denotes its position at time formula_2, and formula_3 its velocity. Furthermore, formula_4 gives the length of the vehicle. To simplify notation, we define the "net distance" formula_5, where formula_6 refers to the vehicle directly in front of vehicle formula_0, and the velocity difference, or "approaching rate", formula_7. For a simplified version of the model, the dynamics of vehicle formula_0 are then described by the following two ordinary differential equations:
formula_8
formula_9
formula_10
formula_11, formula_12, formula_13, formula_14, and formula_15 are model parameters which have the following meaning:
The exponent formula_16 is usually set to 4.
Model characteristics.
The acceleration of vehicle formula_0 can be separated into a "free road term" and an "interaction term":
formula_17
This leads to a driving behavior that compensates velocity differences while trying not to brake much harder than the comfortable braking deceleration formula_15.
Solution example.
Let's assume a ring road with 50 vehicles. Then, vehicle 1 will follow vehicle 50. Initial speeds are given and since all vehicles are considered equal, vector ODEs are further simplified to:
formula_22
formula_23
formula_24
For this example, the following values are given for the equation's parameters, in line with the original calibrated model.
The two ordinary differential equations are solved using Runge–Kutta methods of orders 1, 3, and 5 with the same time step, to show the effects of computational accuracy in the results.
This comparison shows that the IDM does not exhibit extremely unrealistic properties, such as negative velocities or vehicles sharing the same space, even for a low-order method such as Euler's method (RK1). However, traffic wave propagation is not represented as accurately as in the higher-order methods, RK3 and RK5. These last two methods show no significant differences, which leads to the conclusion that a solution of the IDM reaches acceptable results from RK3 upwards and that no additional computational effort is needed. Nonetheless, when introducing heterogeneous vehicles and both jam distance parameters, this observation might not suffice.
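For illustration, a minimal simulation of such a ring-road setup can be written in a few lines of Python. The sketch below uses a simple Euler-type integration step rather than a Runge–Kutta scheme, and the parameter values, initial spacing, and perturbation are assumptions chosen to resemble typical published IDM calibrations, not necessarily the values used in the example above.

```python
# Minimal ring-road IDM sketch with a simple Euler-type step. All numerical
# values (road length, parameters, initial speeds, perturbation) are
# illustrative assumptions.
import numpy as np

n_veh, road_len = 50, 1000.0
v0, T, a, b, delta, s0, veh_len = 30.0, 1.5, 0.73, 1.67, 4.0, 2.0, 5.0

x = np.linspace(0.0, road_len, n_veh, endpoint=False)[::-1]  # equally spaced, descending
v = np.full(n_veh, 10.0)
v[0] -= 2.0                          # small perturbation to trigger a traffic wave

def idm_accel(x, v):
    x_lead, v_lead = np.roll(x, 1), np.roll(v, 1)   # vehicle 1 follows vehicle 50, etc.
    s = (x_lead - x - veh_len) % road_len           # net gap on the ring
    dv = v - v_lead                                 # approaching rate
    s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(a * b))
    return a * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

dt = 0.1
for _ in range(int(600.0 / dt)):                    # ten simulated minutes
    v = np.maximum(v + idm_accel(x, v) * dt, 0.0)
    x = (x + v * dt) % road_len

print("mean speed after 10 minutes:", round(v.mean(), 2), "m/s")
```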
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "x_\\alpha"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "v_\\alpha"
},
{
"math_id": 4,
"text": "l_\\alpha"
},
{
"math_id": 5,
"text": "s_\\alpha := x_{\\alpha-1} - x_\\alpha - l_{\\alpha-1}"
},
{
"math_id": 6,
"text": "\\alpha - 1"
},
{
"math_id": 7,
"text": "\\Delta v_\\alpha := v_\\alpha - v_{\\alpha-1}"
},
{
"math_id": 8,
"text": "\\dot{x}_\\alpha = \\frac{\\mathrm{d}x_\\alpha}{\\mathrm{d}t} = v_\\alpha"
},
{
"math_id": 9,
"text": "\\dot{v}_\\alpha = \\frac{\\mathrm{d}v_\\alpha}{\\mathrm{d}t} = a\\,\\left( 1 - \\left(\\frac{v_\\alpha}{v_0}\\right)^\\delta - \\left(\\frac{s^*(v_\\alpha,\\Delta v_\\alpha)}{s_\\alpha}\\right)^2 \\right)"
},
{
"math_id": 10,
"text": "\\text{with }s^*(v_\\alpha,\\Delta v_\\alpha) = s_0 + v_\\alpha\\,T + \\frac{v_\\alpha\\,\\Delta v_\\alpha}{2\\,\\sqrt{a\\,b}}"
},
{
"math_id": 11,
"text": "v_0"
},
{
"math_id": 12,
"text": "s_0"
},
{
"math_id": 13,
"text": "T"
},
{
"math_id": 14,
"text": "a"
},
{
"math_id": 15,
"text": "b"
},
{
"math_id": 16,
"text": "\\delta"
},
{
"math_id": 17,
"text": "\\dot{v}^\\text{free}_\\alpha = a\\,\\left( 1 - \\left(\\frac{v_\\alpha}{v_0}\\right)^\\delta \\right)\n\\qquad\\dot{v}^\\text{int}_\\alpha = -a\\,\\left(\\frac{s^*(v_\\alpha,\\Delta v_\\alpha)}{s_\\alpha}\\right)^2\n= -a\\,\\left(\\frac{s_0 + v_\\alpha\\,T}{s_\\alpha} + \\frac{v_\\alpha\\,\\Delta v_\\alpha}{2\\,\\sqrt{a\\,b}\\,s_\\alpha}\\right)^2"
},
{
"math_id": 18,
"text": "s_\\alpha"
},
{
"math_id": 19,
"text": " v_0"
},
{
"math_id": 20,
"text": "-a\\,(v_\\alpha\\,\\Delta v_\\alpha)^2\\,/\\,(2\\,\\sqrt{a\\,b}\\,s_\\alpha)^2 = -(v_\\alpha\\,\\Delta v_\\alpha)^2\\,/\\,(4\\,b\\,s_\\alpha^2)"
},
{
"math_id": 21,
"text": "-a\\,(s_0 + v_\\alpha\\,T)^2\\,/\\,s_\\alpha^2"
},
{
"math_id": 22,
"text": "\\dot{x} = \\frac{\\mathrm{d}x}{\\mathrm{d}t} = v"
},
{
"math_id": 23,
"text": "\\dot{v} = \\frac{\\mathrm{d}v}{\\mathrm{d}t} = a\\,\\left( 1 - \\left(\\frac{v}{v_0}\\right)^\\delta - \\left(\\frac{s^*(v,\\Delta v)}{s}\\right)^2 \\right)"
},
{
"math_id": 24,
"text": "\\text{with }s^*(v,\\Delta v) = s_0 + v\\,T + \\frac{v\\,\\Delta v}{2\\,\\sqrt{a\\,b}}"
}
] | https://en.wikipedia.org/wiki?curid=9499243 |
949966 | Constraint satisfaction | In artificial intelligence and operations research, constraint satisfaction is the process of finding a solution through
a set of constraints that impose conditions that the variables must satisfy. A solution is therefore an assignment of values to the variables that satisfies all constraints—that is, a point in the feasible region.
The techniques used in constraint satisfaction depend on the kind of constraints being considered. Often used are constraints on a finite domain, to the point that constraint satisfaction problems are typically identified with problems based on constraints on a finite domain. Such problems are usually solved via search, in particular a form of backtracking or local search. Constraint propagation is another family of methods used on such problems; most of them are incomplete in general, that is, they may solve the problem or prove it unsatisfiable, but not always. Constraint propagation methods are also used in conjunction with search to make a given problem simpler to solve. Other considered kinds of constraints are on real or rational numbers; solving problems on these constraints is done via variable elimination or the simplex algorithm.
Constraint satisfaction as a general problem originated in the field of artificial intelligence in the 1970s (see for example ). However, when the constraints are expressed as multivariate linear equations defining (in)equalities, the field goes back to Joseph Fourier in the 19th century: George Dantzig's invention of the simplex algorithm for linear programming (a special case of mathematical optimization) in 1946 has allowed determining feasible solutions to problems containing hundreds of variables.
During the 1980s and 1990s, embedding of constraints into a programming language was developed. The first language devised expressly with intrinsic support for constraint programming was Prolog. Since then, constraint-programming libraries have become available in other languages, such as C++ or Java (e.g., Choco for Java).
Constraint satisfaction problem.
As originally defined in artificial intelligence, constraints enumerate the possible values a set of variables may take in a given world. A possible world is a total assignment of values to variables representing a way the world (real or imaginary) could be. Informally, a finite domain is a finite set of arbitrary elements. A constraint satisfaction problem on such domain contains a set of variables whose values can only be taken from the domain, and a set of constraints, each constraint specifying the allowed values for a group of variables. A solution to this problem is an evaluation of the variables that satisfies all constraints. In other words, a solution is a way for assigning a value to each variable in such a way that all constraints are satisfied by these values.
In some circumstances, there may exist additional requirements: one may be interested not only in the solution (and in the fastest or most computationally efficient way to reach it) but in how it was reached; e.g. one may want the "simplest" solution ("simplest" in a logical, non-computational sense that has to be precisely defined). This is often the case in logic games such as Sudoku.
In practice, constraints are often expressed in compact form, rather than enumerating all the values of the variables that would satisfy the constraint. One of the most-used constraints is the (obvious) one establishing that the values of the affected variables must be all different.
Problems that can be expressed as constraint satisfaction problems are the eight queens puzzle, the Sudoku solving problem and many other logic puzzles, the Boolean satisfiability problem, scheduling problems, bounded-error estimation problems and various problems on graphs such as the graph coloring problem.
While usually not included in the above definition of a constraint satisfaction problem, arithmetic equations and inequalities bound the values of the variables they contain and can therefore be considered a form of constraints. Their domain is the set of numbers (either integer, rational, or real), which is infinite: therefore, the relations of these constraints may be infinite as well; for example, formula_0 has an infinite number of pairs of satisfying values. Arithmetic equations and inequalities are often not considered within the definition of a "constraint satisfaction problem", which is limited to finite domains. They are however used often in constraint programming.
It can be shown that the arithmetic inequalities or equations present in some types of finite logic puzzles such as Futoshiki or Kakuro (also known as Cross Sums) can be dealt with as non-arithmetic constraints (see "Pattern-Based Constraint Satisfaction and Logic Puzzles").
Solving.
Constraint satisfaction problems on finite domains are typically solved using a form of search. The most-used techniques are variants of backtracking, constraint propagation, and local search. These techniques are used on problems with nonlinear constraints.
Variable elimination and the simplex algorithm are used for solving linear and polynomial equations and inequalities, and problems containing variables with infinite domain. These are typically solved as optimization problems in which the optimized function is the number of violated constraints.
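As a concrete illustration of solving by backtracking search, the following Python sketch handles a small finite-domain problem (the classic map colouring of Australian states, used here purely as example data) with binary "not equal" constraints. It is a bare-bones sketch, not an optimized solver.

```python
# Minimal backtracking sketch for a finite-domain CSP with binary
# "not equal" constraints (map colouring, example data only).
def backtrack(assignment, variables, domains, neighbors):
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Check consistency with constraints involving already-assigned variables.
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = backtrack({**assignment, var: value}, variables, domains, neighbors)
            if result is not None:
                return result
    return None                      # triggers backtracking in the caller

variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
             "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}
print(backtrack({}, variables, domains, neighbors))
```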
Complexity.
Solving a constraint satisfaction problem on a finite domain is an NP-complete problem with respect to the domain size. Research has shown a number of tractable subcases, some limiting the allowed constraint relations, some requiring the scopes of constraints to form a tree, possibly in a reformulated version of the problem. Research has also established relationships of the constraint satisfaction problem with problems in other areas such as finite model theory.
Constraint programming.
Constraint programming is the use of constraints as a programming language to encode and solve problems. This is often done by embedding constraints into a programming language, which is called the host language. Constraint programming originated from a formalization of equalities of terms in Prolog II, leading to a general framework for embedding constraints into a logic programming language. The most common host languages are Prolog, C++, and Java, but other languages have been used as well.
Constraint logic programming.
A constraint logic program is a logic program that contains constraints in the bodies of clauses. As an example, the clause codice_0 is a clause containing the constraint codice_1 in the body. Constraints can also be present in the goal. The constraints in the goal and in the clauses used to prove the goal are accumulated into a set called constraint store. This set contains the constraints the interpreter has assumed satisfiable in order to proceed in the evaluation. As a result, if this set is detected unsatisfiable, the interpreter backtracks. Equations of terms, as used in logic programming, are considered a particular form of constraints, which can be simplified using unification. As a result, the constraint store can be considered an extension of the concept of substitution that is used in regular logic programming. The most common kinds of constraints used in constraint logic programming are constraints over integers/rational/real numbers and constraints over finite domains.
Concurrent constraint logic programming languages have also been developed. They significantly differ from non-concurrent constraint logic programming in that they are aimed at programming concurrent processes that may not terminate. Constraint handling rules can be seen as a form of concurrent constraint logic programming, but are also sometimes used within a non-concurrent constraint logic programming language. They allow for rewriting constraints or to infer new ones based on the truth of conditions.
Constraint satisfaction toolkits.
Constraint satisfaction toolkits are software libraries for imperative programming languages that are used to encode and solve a constraint satisfaction problem.
Other constraint programming languages.
Constraint toolkits are a way for embedding constraints into an imperative programming language. However, they are only used as external libraries for encoding and solving problems. An approach in which constraints are integrated into an imperative programming language is taken in the Kaleidoscope programming language.
Constraints have also been embedded into functional programming languages.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "X=Y+1"
}
] | https://en.wikipedia.org/wiki?curid=949966 |
9499804 | Probabilistic design | Discipline within engineering design
Probabilistic design is a discipline within engineering design. It deals primarily with the consideration and minimization of the effects of random variability upon the performance of an engineering system during the design phase. Typically, these effects studied and optimized are related to quality and reliability. It differs from the classical approach to design by assuming a small probability of failure instead of using the safety factor. Probabilistic design is used in a variety of different applications to assess the likelihood of failure. Disciplines which extensively use probabilistic design principles include product design, quality control, systems engineering, machine design, civil engineering (particularly useful in limit state design) and manufacturing.
Objective and motivations.
When using a probabilistic approach to design, the designer no longer thinks of each variable as a single value or number. Instead, each variable is viewed as a continuous random variable with a probability distribution. From this perspective, probabilistic design predicts the flow of variability (or distributions) through a system.
Because there are so many sources of random and systemic variability when designing materials and structures, it is greatly beneficial for the designer to model the factors studied as random variables. By considering this model, a designer can make adjustments to reduce the flow of random variability, thereby improving engineering quality. Proponents of the probabilistic design approach contend that many quality problems can be predicted and rectified during the early design stages and at a much reduced cost.
Typically, the goal of probabilistic design is to identify the design that will exhibit the smallest effects of random variability. Minimizing random variability is essential to probabilistic design because it limits uncontrollable factors, while also providing a much more precise determination of failure probability. This could be the one design option out of several that is found to be most robust. Alternatively, it could be the only design option available, but with the optimum combination of input variables and parameters. This second approach is sometimes referred to as robustification, parameter design or design for six sigma.
Sources of variability.
Though the laws of physics dictate the relationships between variables and measurable quantities such as force, stress, strain, and deflection, there are still three primary sources of variability when considering these relationships.
The first source of variability is statistical, due to the limitations of having a finite sample size to estimate parameters such as yield stress, Young's modulus, and true strain. Measurement uncertainty is the most easily minimized out of these three sources, as variance is proportional to the inverse of the sample size.
We can represent variance due to measurement uncertainties as a corrective factor formula_0, which is multiplied by the true mean formula_1 to yield the measured mean of formula_2. Equivalently, formula_3.
This yields the result formula_4 , and the variance of the corrective factor formula_0 is given as:
formula_5
where formula_0 is the correction factor, formula_1 is the true mean, formula_2 is the measured mean, and formula_6 is the number of measurements made.
The second source of variability stems from the inaccuracies and uncertainties of the model used to calculate such parameters. These include the physical models we use to understand loading and their associated effects in materials. The uncertainty from the model of a physical measurable can be determined if both theoretical values according to the model and experimental results are available.
The measured value formula_7 is equivalent to the theoretical model prediction formula_8 multiplied by a model error of formula_9, plus the experimental error formula_10. Equivalently,
formula_11
and the model error takes the general form:
formula_12
where formula_13 are coefficients of regression determined from experimental data.
Finally, the last variability source comes from the intrinsic variability of any physical measurable. There is a fundamental random uncertainty associated with all physical phenomena, and it is comparatively the most difficult to minimize this variability. Thus, each physical variable and measurable quantity can be represented as a random variable with a mean and a variability.
Comparison to classical design principles.
Consider the classical approach to performing tensile testing in materials. The stress experienced by a material is given as a singular value (i.e., force applied divided by the cross-sectional area perpendicular to the loading axis). The yield stress, which is the maximum stress a material can support before plastic deformation, is also given as a singular value. Under this approach, there is a 0% chance of material failure below the yield stress, and a 100% chance of failure above it. However, these assumptions break down in the real world.
The yield stress of a material is often only known to a certain precision, meaning that there is an uncertainty and therefore a probability distribution associated with the known value. Let the probability distribution function of the yield strength be given as formula_14.
Similarly, the applied load or predicted load can also only be known to a certain precision, and the range of stress which the material will undergo is unknown as well. Let this probability distribution be given as formula_15.
The probability of failure is equivalent to the area between these two distribution functions, mathematically:
formula_16
or equivalently, if we let the difference between yield stress and applied load equal a third function formula_17, then:
formula_18
where the standard deviation of the difference formula_19 is given by formula_20.
The probabilistic design principles allow for precise determination of failure probability, whereas the classical model assumes absolutely no failure before yield strength. It is clear that the classical applied load vs. yield stress model has limitations, so modeling these variables with a probability distribution to calculate failure probability is a more precise approach. The probabilistic design approach allows for the determination of material failure under all loading conditions, associating quantitative probabilities to failure chance in place of a definitive yes or no.
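The failure probability defined above can be estimated numerically once distributions are assumed for the strength and the load. The short Python sketch below uses normally distributed strength and stress with made-up illustrative values and compares a Monte Carlo estimate with the closed-form result available in the independent-normal case.

```python
# Monte Carlo sketch of P_f = P(R < S) for assumed normal strength R and
# stress S. All numbers are hypothetical illustrative values.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(42)
n = 1_000_000
mu_R, sd_R = 250.0, 15.0      # assumed yield strength distribution (MPa)
mu_S, sd_S = 200.0, 20.0      # assumed applied stress distribution (MPa)

R = rng.normal(mu_R, sd_R, n)
S = rng.normal(mu_S, sd_S, n)
print("Monte Carlo estimate of P_f:", np.mean(R < S))

# For independent normal R and S, Q = R - S is normal with mean mu_R - mu_S
# and standard deviation sqrt(sd_R**2 + sd_S**2), so P_f has a closed form.
z = (0.0 - (mu_R - mu_S)) / sqrt(sd_R**2 + sd_S**2)
print("Closed-form P_f:", 0.5 * (1.0 + erf(z / sqrt(2.0))))
```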
Methods used to determine variability.
In essence, probabilistic design focuses upon the prediction of the effects of variability. In order to be able to predict and calculate variability associated with model uncertainty, many methods have been devised and utilized across different disciplines to determine theoretical values for parameters such as stress and strain. Examples of theoretical models used alongside probabilistic design include classical beam and plate theories, constitutive models such as Hooke's law, and finite element models.
Additionally, there are many statistical methods used to quantify and predict the random variability in the desired measurable. Methods used to predict the random variability of an output include Monte Carlo simulation, first- and second-order reliability methods, and analytical propagation of uncertainty. | [
{
"math_id": 0,
"text": "B"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "\\bar X"
},
{
"math_id": 3,
"text": "\\bar X = \\bar B X"
},
{
"math_id": 4,
"text": "\\bar B = \\frac{\\bar X}{X}"
},
{
"math_id": 5,
"text": "Var[B]= \\frac{Var[\\bar X]}{X} = \\frac{Var[X]}{nX}"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "\\hat H(\\omega)"
},
{
"math_id": 8,
"text": "H(\\omega)"
},
{
"math_id": 9,
"text": "\\phi(\\omega)"
},
{
"math_id": 10,
"text": "\\varepsilon(\\omega)"
},
{
"math_id": 11,
"text": "\\hat H(\\omega) = H(\\omega) \\phi(\\omega) + \\varepsilon(\\omega)"
},
{
"math_id": 12,
"text": "\\phi(\\omega) = \\sum_{i = 0}^n a_i \\omega^{n}"
},
{
"math_id": 13,
"text": "a_i"
},
{
"math_id": 14,
"text": "f(R)"
},
{
"math_id": 15,
"text": "f(S)"
},
{
"math_id": 16,
"text": "P_f = P(R<S)= \\int\\limits_{-\\infty}^{\\infty}\\int\\limits_{-\\infty}^{\\infty}f(R)f(S)dSdR"
},
{
"math_id": 17,
"text": "R-S = Q"
},
{
"math_id": 18,
"text": "P_f = \\int\\limits_{-\\infty}^{\\infty}\\int\\limits_{-\\infty}^{\\infty}f(R)f(S)dSdR = \\int\\limits_{-\\infty}^{0} f(Q)dQ"
},
{
"math_id": 19,
"text": "Q"
},
{
"math_id": 20,
"text": "\\sigma_Q^{2} = \\sqrt{\\sigma_R^{2}+ \\sigma_S^{2}}"
}
] | https://en.wikipedia.org/wiki?curid=9499804 |
950012 | Coupling (physics) | Two systems are coupled if they are interacting with each other
In physics, two objects are said to be coupled when they are interacting with each other. In classical mechanics, coupling is a connection between two oscillating systems, such as pendulums connected by a spring. The connection affects the oscillatory pattern of both objects. In particle physics, two particles are coupled if they are connected by one of the four fundamental forces.
Wave mechanics.
Coupled harmonic oscillator.
If two waves are able to transmit energy to each other, then these waves are said to be "coupled." This normally occurs when the waves share a common component. An example of this is two pendulums connected by a spring. If the pendulums are identical, then their equations of motion are given by
formula_0
formula_1
These equations represent the simple harmonic motion of the pendulum with an added coupling factor of the spring. This behavior is also seen in certain molecules (such as CO2 and H2O), wherein two of the atoms will vibrate around a central one in a similar manner.
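A short numerical sketch makes the energy exchange between the two pendulums visible. The Python code below integrates the two equations above with a semi-implicit Euler step; the mass, length (equal for both pendulums), and spring constant are arbitrary illustrative values, and the two normal-mode frequencies of the identical-pendulum case are printed for reference.

```python
# Integrate the coupled-pendulum equations above (equal lengths l1 = l2 = l
# assumed). All numerical values are illustrative.
import numpy as np

m, g, l, k = 1.0, 9.81, 1.0, 0.5
dt, t_end = 0.001, 60.0
x, y, vx, vy = 0.1, 0.0, 0.0, 0.0          # only the first pendulum displaced

for _ in range(int(t_end / dt)):
    ax = -g * x / l - (k / m) * (x - y)
    ay = -g * y / l + (k / m) * (x - y)
    vx += ax * dt; vy += ay * dt            # semi-implicit Euler step
    x += vx * dt;  y += vy * dt

print("displacements after 60 s:", round(x, 4), round(y, 4))
print("normal-mode frequencies (rad/s):", np.sqrt(g / l), np.sqrt(g / l + 2 * k / m))
```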
Coupled LC circuits.
In LC circuits, charge oscillates between the capacitor and the inductor and can therefore be modeled as a simple harmonic oscillator. When the magnetic flux from one inductor is able to affect the inductance of an inductor in an unconnected LC circuit, the circuits are said to be coupled. The coefficient of coupling k defines how closely the two circuits are coupled and is given by the equation
formula_2
where M is the mutual inductance of the circuits and Lp and Ls are the inductances of the primary and secondary circuits, respectively. If the flux lines of the primary inductor thread every line of the secondary one, then the coefficient of coupling is 1 and formula_3. In practice, however, there is often leakage, so most systems are not perfectly coupled.
Chemistry.
Spin-spin coupling.
Spin-spin coupling occurs when the magnetic field of one atom affects the magnetic field of another nearby atom. This is very common in NMR imaging. If the atoms are not coupled, then there will be two individual peaks, known as a doublet, representing the individual atoms. If coupling is present, then there will be a triplet, one larger peak with two smaller ones to either side. This occurs due to the spins of the individual atoms oscillating in tandem.
Astrophysics.
Objects in space which are coupled to each other are under the mutual influence of each other's gravity. For instance, the Earth is coupled to both the Sun and the Moon, as it is under the gravitational influence of both. Binary systems, in which two objects are gravitationally coupled to each other, are common in space; examples are binary stars which circle each other. Multiple objects may also be coupled to each other simultaneously, as in globular clusters and galaxy groups. When smaller particles, such as dust, which have been coupled together over time accumulate into much larger objects, accretion is occurring. This is the major process by which stars and planets form.
Plasma.
The coupling constant of a plasma is given by the ratio of its average Coulomb-interaction energy to its average kinetic energy—or how strongly the electric force of each atom holds the plasma together. Plasmas can therefore be categorized into weakly- and strongly-coupled plasmas depending upon the value of this ratio. Many of the typical classical plasmas, such as the plasma in the solar corona, are weakly coupled, while the plasma in a white dwarf star is an example of a strongly coupled plasma.
Quantum mechanics.
Two coupled quantum systems can be modeled by a Hamiltonian of the form
formula_4
which is the addition of the two Hamiltonians in isolation with an added interaction factor. In most simple systems, formula_5 and formula_6 can be solved exactly while formula_7 can be solved through perturbation theory. If the two systems have similar total energy, then the system may undergo Rabi oscillation.
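For two levels of equal unperturbed energy, the coupled Hamiltonian reduces to a 2x2 matrix and the resulting Rabi oscillation can be computed directly. In the Python sketch below, the energies and the coupling are arbitrary illustrative values and the reduced Planck constant is set to 1.

```python
# Two coupled levels with equal unperturbed energy: H = H_a + H_b + V_ab
# reduces to a 2x2 matrix. Energies and coupling are illustrative; hbar = 1.
import numpy as np

E, V = 1.0, 0.05
H = np.array([[E, V], [V, E]], dtype=complex)

evals, evecs = np.linalg.eigh(H)
psi0 = np.array([1.0, 0.0], dtype=complex)     # system prepared in state 1

for t in (0.0, np.pi / (4 * V), np.pi / (2 * V)):
    c = evecs.conj().T @ psi0                  # expand psi0 in energy eigenstates
    psi_t = evecs @ (np.exp(-1j * evals * t) * c)
    print(f"t = {t:7.2f}   population of state 2: {abs(psi_t[1])**2:.3f}")
```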
Angular momentum coupling.
When angular momenta from two separate sources interact with each other, they are said to be coupled. For example, two electrons orbiting around the same nucleus may have coupled angular momenta. Due to the conservation of angular momentum and the nature of the angular momentum operator, the total angular momentum is always the sum of the individual angular momenta of the electrons, or
formula_8
Spin-Orbit interaction (also known as spin-orbit coupling) is a special case of angular momentum coupling. Specifically, it is the interaction between the intrinsic spin of a particle, S, and its orbital angular momentum, L. As they are both forms of angular momentum, they must be conserved. Even if energy is transferred between the two, the total angular momentum, J, of the system must be constant, formula_9.
Particle physics and quantum field theory.
Particles which interact with each other are said to be coupled. This interaction is caused by one of the fundamental forces, whose strengths are usually given by a dimensionless coupling constant. In quantum electrodynamics, this value is known as the fine-structure constant α, approximately equal to 1/137. For quantum chromodynamics, the effective coupling changes with the distance between the particles, becoming weaker at shorter distances; this phenomenon is known as "asymptotic freedom." Forces which have a coupling constant greater than 1 are said to be "strongly coupled" while those with constants less than 1 are said to be "weakly coupled." | [
{
"math_id": 0,
"text": "m\\ddot{x} = -mg\\frac{x}{l_1} - k(x-y)"
},
{
"math_id": 1,
"text": "m\\ddot{y} = -mg \\frac{y}{l_2} + k(x-y)"
},
{
"math_id": 2,
"text": "\\frac{M}{\\sqrt{L_p L_s}} = k"
},
{
"math_id": 3,
"text": "M = \\sqrt{L_p L_s}"
},
{
"math_id": 4,
"text": "\\hat{H} = \\hat{H}_a + \\hat{H}_b + \\hat{V}_{ab}"
},
{
"math_id": 5,
"text": "\\hat{H}_a"
},
{
"math_id": 6,
"text": "\\hat{H}_b"
},
{
"math_id": 7,
"text": "\\hat{V}_{ab}"
},
{
"math_id": 8,
"text": "\\mathbf{J}=\\mathbf{J_1}+\\mathbf{J_2}"
},
{
"math_id": 9,
"text": "\\mathbf{J}=\\mathbf{L}+\\mathbf{S}"
}
] | https://en.wikipedia.org/wiki?curid=950012 |
9501159 | Well-pointed category | In category theory, a category with a terminal object formula_0 is well-pointed if for every pair of arrows formula_1 such that formula_2, there is an arrow formula_3 such that formula_4. (The arrows formula_5 are called the global elements or "points" of the category; a well-pointed category is thus one that has "enough points" to distinguish non-equal arrows.) | [
{
"math_id": 0,
"text": "1"
},
{
"math_id": 1,
"text": "f,g:A\\to B"
},
{
"math_id": 2,
"text": "f\\neq g"
},
{
"math_id": 3,
"text": "p:1\\to A"
},
{
"math_id": 4,
"text": "f\\circ p\\neq g\\circ p"
},
{
"math_id": 5,
"text": "p"
}
] | https://en.wikipedia.org/wiki?curid=9501159 |
9501745 | Thermal effusivity | Ability of a material to exchange thermal energy with surroundings
In thermodynamics, a material's thermal effusivity, also known as thermal responsivity, is a measure of its ability to exchange thermal energy with its surroundings. It is defined as the square root of the product of the material's thermal conductivity (formula_0) and its volumetric heat capacity (formula_1) or as the ratio of thermal conductivity to the square root of thermal diffusivity (formula_2).
formula_3
Some authors use the symbol formula_4 to denote the thermal responsivity, although its use alongside the base of the natural exponential can cause confusion. The SI units for thermal effusivity are formula_5, or, equivalently, formula_6.
Thermal effusivity is a good approximation for the material's "thermal inertia" for a semi-infinite rigid body where heat transfer is dominated by the diffusive process of conduction only.
Thermal effusivity is a parameter that emerges upon applying solutions of the heat equation to heat flow through a thin surface-like region. It becomes particularly useful when the region is selected adjacent to a material's actual surface. Knowing the effusivity and equilibrium temperature of each of two material bodies then enables an estimate of their interface temperature formula_7 when placed into thermal contact.
If formula_8 and formula_9 are the temperature of the two bodies, then upon contact, the temperature of the contact interface (assumed to be a smooth surface) becomes
formula_10
Specialty sensors have also been developed based on this relationship to measure effusivity.
Thermal effusivity and thermal diffusivity are related quantities; respectively a product versus a ratio of a material's fundamental transport and storage properties. The diffusivity appears explicitly in the heat equation, which is an energy conservation equation, and measures the speed at which thermal equilibrium can be reached by a body. By contrast a body's effusivity (also sometimes called inertia, accumulation, responsiveness etc.) is its ability to resist a temperature change when subjected to a time-periodic, or similarly perturbative, forcing function.
Applications.
Temperature at a contact surface.
If two semi-infinite bodies initially at temperatures formula_8 and formula_9 are brought in perfect thermal contact, the temperature at the contact surface formula_7 will be a weighted mean based on their relative effusivities. This relationship can be demonstrated with a very simple "control volume" back-of-the-envelope calculation:
Consider the following 1D heat conduction problem. Region 1 is material 1, initially at uniform temperature formula_8, and region 2 is material 2, initially at uniform temperature formula_9. Given some period of time formula_11 after being brought into contact, heat will have diffused across the boundary between the two materials. The thermal diffusivity of a material is formula_12. From the heat equation (or diffusion equation), a characteristic diffusion length formula_13 into material 1 is
formula_14, where formula_15.
Similarly, a characteristic diffusion length formula_16 into material 2 is
formula_17, where formula_18.
Assume that the temperature within the characteristic diffusion length on either side of the boundary between the two materials is uniformly at the contact temperature formula_19 (this is the essence of a control-volume approach). Conservation of energy dictates that
formula_20.
Substitution of the expressions above for formula_21 and formula_16 and elimination of formula_22 yields an expression for the contact temperature.
formula_23
This expression is valid for all times for semi-infinite bodies in perfect thermal contact. It is also a good first guess for the initial contact temperature for finite bodies.
Even though the underlying heat equation is parabolic and not hyperbolic (i.e. it does not support waves), if we allow ourselves, in a rough sense, to regard the temperature jump when the two materials are brought into contact as a "signal", then the transmission of the temperature signal from 1 to 2 is formula_24. Clearly, this analogy must be used with caution; among other caveats, it only applies in a transient sense, to media which are large enough (or time scales short enough) to be considered effectively infinite in extent.
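A minimal numerical sketch of the contact-temperature formula follows; the material property values below are rough, illustrative figures and are not taken from the article.

```python
# Minimal sketch: contact temperature of two semi-infinite bodies from their
# thermal effusivities.  All material values are illustrative assumptions.
import math

def effusivity(k, rho, cp):
    """Thermal effusivity r = sqrt(k * rho * cp), in W*s^0.5/(m^2*K)."""
    return math.sqrt(k * rho * cp)

def contact_temperature(T1, r1, T2, r2):
    """Effusivity-weighted mean of the two initial temperatures."""
    return (r1 * T1 + r2 * T2) / (r1 + r2)

r_skin  = effusivity(0.37, 1000.0, 3500.0)   # skin-like values (illustrative)
r_steel = effusivity(45.0, 7800.0, 470.0)    # steel-like values (illustrative)
print(contact_temperature(33.0, r_skin, 20.0, r_steel))  # close to 20 C: metal feels cold
```

Because the higher-effusivity body dominates the weighted mean, the interface temperature stays near the metal's temperature, which is why it is sensed as cold.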
Heat sensed by human skin.
An application of thermal effusivity is the quasi-qualitative measurement of coolness or warmth "feel" of materials, also known as thermoception. It is a particularly important metric for textiles, fabrics, and building materials. Rather than temperature, skin thermoreceptors are highly responsive to the inward or outward flow of heat. Thus, despite having similar temperatures near room temperature, a high effusivity metal object is detected as cool while a low effusivity fabric is sensed as being warmer.
Diathermal walls.
For a diathermal wall having a stepped "constant heat" boundary condition imposed abruptly onto one side, thermal effusivity formula_4 performs nearly the same role in limiting the initial "dynamic" thermal response (rigorously, during times less than the heat diffusion time to transit the wall) as the insulation U-factor formula_25 plays in defining the "static" temperature obtained by the side after a long time. A dynamic U-factor formula_26 and a diffusion time formula_27 for the wall of thickness formula_28, thermal diffusivity formula_2 and thermal conductivity formula_0 are specified by:
formula_29 ; during formula_30 where formula_31 and formula_32
Planetary science.
For planetary surfaces, thermal inertia is a key phenomenon controlling the diurnal and seasonal surface temperature variations. The thermal inertia of a terrestrial planet such as Mars can be approximated from the thermal effusivity of its near-surface geologic materials. In remote sensing applications, thermal inertia represents a complex combination of particle size, rock abundance, bedrock outcropping and the degree of induration (i.e. thickness and hardness).
A rough approximation to thermal inertia is sometimes obtained from the amplitude of the diurnal temperature curve (i.e. maximum minus minimum surface temperature). The temperature of a material with low thermal effusivity changes significantly during the day, while the temperature of a material with high thermal effusivity does not change as drastically. Deriving and understanding the thermal inertia of the surface can help to recognize small-scale features of that surface. In conjunction with other data, thermal inertia can help to characterize surface materials and the geologic processes responsible for forming these materials.
On Earth, thermal inertia of the global ocean is a major factor influencing climate inertia. Ocean thermal inertia is much greater than land inertia because of convective heat transfer, especially through the upper mixed layer. The thermal effusivities of stagnant and frozen water underestimate the vast thermal inertia of the dynamic and multi-layered ocean.
Thermographic inspection.
Thermographic inspection encompasses a variety of nondestructive testing methods that utilize the wave-like characteristics of heat propagation through a transfer medium. These methods include "Pulse-echo thermography" and "thermal wave imaging". Thermal effusivity and diffusivity of the materials being inspected can serve to simplify the mathematical modelling of, and thus interpretation of results from these techniques.
Measurement interpretation.
When a material is measured from the surface with short test times by any transient method or instrument, the heat transfer mechanisms generally include thermal conduction, convection, radiation and phase changes. The diffusive process of conduction may dominate the thermal behavior of solid bodies near and below room temperature.
A contact resistance (due to surface roughness, oxidation, impurities, etc.) between the sensor and sample may also exist. Evaluations with high heat dissipation (driven by large temperature differentials) can likewise be influenced by an interfacial thermal resistance. All of these factors, along with the body's finite dimensions, must be considered during execution of measurements and interpretation of results.
Thermal effusivity of selected materials and substances.
This is a list of the thermal effusivity of some common substances, evaluated at room temperature unless otherwise indicated.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\lambda"
},
{
"math_id": 1,
"text": "\\rho c_p"
},
{
"math_id": 2,
"text": "\\alpha"
},
{
"math_id": 3,
"text": "r = \\frac{\\lambda}{\\sqrt{\\alpha}}=\\sqrt{\\lambda\\rho c_p}."
},
{
"math_id": 4,
"text": "e"
},
{
"math_id": 5,
"text": "{\\rm W} \\sqrt{{\\rm s}} / ({\\rm m^2 K})"
},
{
"math_id": 6,
"text": "{\\rm J} / ( {\\rm m^2 K}\\sqrt{{\\rm s}})"
},
{
"math_id": 7,
"text": "T_m"
},
{
"math_id": 8,
"text": "T_1"
},
{
"math_id": 9,
"text": "T_2"
},
{
"math_id": 10,
"text": "T_m = \\frac{r_1 T_1 + r_2 T_2}{r_1+r_2}"
},
{
"math_id": 11,
"text": "\\Delta t"
},
{
"math_id": 12,
"text": "\\alpha = \\lambda/(\\rho c_p)"
},
{
"math_id": 13,
"text": " \\Delta x_1 "
},
{
"math_id": 14,
"text": "\\Delta x_1 \\simeq \\sqrt{\\alpha_1 \\cdot \\Delta t}"
},
{
"math_id": 15,
"text": "\\alpha_1 = \\lambda_1 / (\\rho c_p)_1 "
},
{
"math_id": 16,
"text": " \\Delta x_2 "
},
{
"math_id": 17,
"text": "\\Delta x_2 \\simeq \\sqrt{\\alpha_2 \\cdot \\Delta t}"
},
{
"math_id": 18,
"text": "\\alpha_2 = \\lambda_2 / (\\rho c_p)_2 "
},
{
"math_id": 19,
"text": " T_m "
},
{
"math_id": 20,
"text": " \\Delta x_1 (\\rho c_p)_1 (T_1 - T_m) = \\Delta x_2 (\\rho c_p)_2 ( T_m - T_2 ) "
},
{
"math_id": 21,
"text": "\\Delta x_1"
},
{
"math_id": 22,
"text": " \\Delta t "
},
{
"math_id": 23,
"text": "T_m = T_1 + \\left(T_2 - T_1\\right)\\frac{r_2}{r_2 + r_1}=\\frac{r_1 T_1 + r_2 T_2}{r_1+r_2}"
},
{
"math_id": 24,
"text": " r_1 / (r_1 + r_2) "
},
{
"math_id": 25,
"text": "U"
},
{
"math_id": 26,
"text": "U_{dyn}"
},
{
"math_id": 27,
"text": "t_L"
},
{
"math_id": 28,
"text": "L"
},
{
"math_id": 29,
"text": "U_{dyn}(t) = r\\sqrt{\\frac{\\pi}{4t}} \\approx \\frac{r}{\\sqrt{t}}"
},
{
"math_id": 30,
"text": "t < t_{L} = \\frac{L^2}{4\\pi\\alpha}=\\frac{r^2}{4\\pi U^2}"
},
{
"math_id": 31,
"text": "r= \\frac{\\lambda}{\\sqrt{\\alpha}}"
},
{
"math_id": 32,
"text": "U = \\frac{\\lambda}{L}."
}
] | https://en.wikipedia.org/wiki?curid=9501745 |
9501969 | Boyle temperature | The Boyle temperature is formally defined as the temperature for which the second virial coefficient, formula_0, becomes zero.
It is at this temperature that the attractive forces and the repulsive forces acting on the gas particles balance out.
formula_1
This is the virial equation of state and describes a real gas.
Since higher order virial coefficients are generally much smaller than the second coefficient, the gas tends to behave as an ideal gas over a wider range of pressures when the temperature reaches the Boyle temperature (or when formula_2 or formula_3 are minimized).
In any case, at low pressures the second virial coefficient is the only relevant one, because the remaining terms are of higher order in the pressure. Also, at the Boyle temperature the dip in a PV diagram flattens toward a straight line over a range of pressures. We then have
formula_4
where formula_5 is the compressibility factor.
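The van der Waals result quoted in the next sentence can be obtained from a standard series expansion; a sketch of it is written out here only for illustration.

```latex
% Expansion of the van der Waals equation in powers of 1/V_m (illustrative sketch).
\begin{align}
P &= \frac{RT}{V_m - b} - \frac{a}{V_m^2}
   = \frac{RT}{V_m}\left(1 - \frac{b}{V_m}\right)^{-1} - \frac{a}{V_m^2} \\
  &\approx RT\left[\frac{1}{V_m} + \left(b - \frac{a}{RT}\right)\frac{1}{V_m^2} + \cdots\right].
\end{align}
```

Setting the coefficient of the 1/V_m^2 term, B_2(T) = b - a/(RT), equal to zero gives the result stated next.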
Expanding the van der Waals equation in formula_6 one finds that formula_7. | [
{
"math_id": 0,
"text": "B_{2}(T)"
},
{
"math_id": 1,
"text": "P = RT \\left(\\frac{1}{V_m} + \\frac{B_{2}(T)}{V_m^2} + \\cdots \\right)"
},
{
"math_id": 2,
"text": " c = \\frac{1}{V_m}"
},
{
"math_id": 3,
"text": "P"
},
{
"math_id": 4,
"text": "\\frac{\\mathrm{d}Z}{\\mathrm{d}P} = 0 \\qquad\\mbox{if}~P \\to 0"
},
{
"math_id": 5,
"text": "Z"
},
{
"math_id": 6,
"text": "\\frac{1}{V_m}"
},
{
"math_id": 7,
"text": "T_b = \\frac{a}{Rb}"
}
] | https://en.wikipedia.org/wiki?curid=9501969 |
9503180 | Generation time | Average time from one generation to another within the same population
In population biology and demography, generation time is the average time between two consecutive generations in the lineages of a population. In human populations, generation time typically has ranged from 20 to 30 years, with wide variation based on gender and society. Historians sometimes use this to date events, by converting generations into years to obtain rough estimates of time.
Definitions and corresponding formulas.
The existing definitions of generation time fall into two categories: those that treat generation time as a renewal time of the population, and those that focus on the distance between individuals of one generation and the next. Below are the three most commonly used definitions:
Time for a population to grow by a factor of its net reproductive rate.
The net reproductive rate formula_0 is the number of offspring an individual is expected to produce during its lifetime: formula_1 means demographic equilibrium. One may then define the generation time formula_2 as the time it takes for the population to increase by a factor of formula_0. For example, in microbiology, a population of cells undergoing exponential growth by mitosis replaces each cell by two daughter cells, so that formula_3 and formula_2 is the population doubling time.
If the population grows with exponential growth rate formula_4, so the population size at time formula_5 is given by
formula_6,
then generation time is given by
formula_7.
That is, formula_8 is such that formula_9, i.e. formula_10.
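A minimal sketch of this relation follows; the numerical values are illustrative and not taken from the article.

```python
# Minimal sketch: generation time as the time for an exponentially growing
# population to increase by a factor of its net reproductive rate R0.
import math

def generation_time(R0, r):
    """T such that exp(r*T) = R0, i.e. T = ln(R0) / r."""
    return math.log(R0) / r

# Doubling of a cell population (R0 = 2) growing at r = 0.5 per hour:
print(generation_time(2.0, 0.5))   # about 1.386 hours, the population doubling time
```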
Average difference in age between parent and offspring.
This definition is a measure of the distance between generations rather than a renewal time of the population. Since many demographic models are female-based (that is, they only take females into account), this definition is often expressed as a mother-daughter distance (the "average age of mothers at birth of their daughters"). However, it is also possible to define a father-son distance (average age of fathers at the birth of their sons) or not to take sex into account at all in the definition. In age-structured population models, an expression is given by:
formula_11,
where formula_4 is the growth rate of the population, formula_12 is the survivorship function (probability that an individual survives to age formula_13) and formula_14 the maternity function (birth function, age-specific fertility). For matrix population models, there is a general formula:
formula_15,
where formula_16 is the discrete-time growth rate of the population, formula_17 is its fertility matrix, formula_18 its reproductive value (row-vector) and formula_19 its stable stage distribution (column-vector); the formula_20 are the elasticities of formula_21 to the fertilities.
Age at which members of a cohort are expected to reproduce.
This definition is very similar to the previous one but the population need not be at its stable age distribution. Moreover, it can be computed for different cohorts and thus provides more information about the generation time in the population. This measure is given by:
formula_22.
Indeed, the numerator is the sum of the ages at which a member of the cohort reproduces, and the denominator is "R"0, the average number of offspring it produces.
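A discrete version of this expression can be evaluated from a life table; the sketch below uses a made-up survivorship and fertility schedule purely for illustration.

```python
# Minimal sketch: cohort generation time as the l(x)*m(x)-weighted mean age of
# reproduction.  The life table below is invented for illustration only.
import numpy as np

age = np.array([0, 1, 2, 3, 4])                  # ages x (years)
l   = np.array([1.0, 0.8, 0.6, 0.4, 0.2])        # survivorship to age x
m   = np.array([0.0, 0.0, 1.0, 1.5, 1.0])        # age-specific fertility

R0 = np.sum(l * m)                               # expected lifetime offspring
T  = np.sum(age * l * m) / R0                    # cohort generation time
print(R0, T)
```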
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\textstyle R_0"
},
{
"math_id": 1,
"text": "\\textstyle R_0=1"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "\\textstyle R_0=2"
},
{
"math_id": 4,
"text": "\\textstyle r"
},
{
"math_id": 5,
"text": "t"
},
{
"math_id": 6,
"text": "n(t) = \\alpha \\, e^{r t}"
},
{
"math_id": 7,
"text": "T = \\frac{\\log R_0}{r}"
},
{
"math_id": 8,
"text": "\\textstyle T"
},
{
"math_id": 9,
"text": "n(t+T)=R_0\\, n(t)"
},
{
"math_id": 10,
"text": "e^{r T}=R_0"
},
{
"math_id": 11,
"text": "T = \\int_0^{\\infty} x e^{-rx} \\ell(x) m(x) \\, \\mathrm{d}x"
},
{
"math_id": 12,
"text": "\\textstyle \\ell(x)"
},
{
"math_id": 13,
"text": "\\textstyle x"
},
{
"math_id": 14,
"text": "\\textstyle m(x)"
},
{
"math_id": 15,
"text": "T = \\frac{ \\lambda \\mathbf{v w}}{\\mathbf{v F w}} = \\frac{1}{\\sum e_{\\lambda}(f_{ij})}"
},
{
"math_id": 16,
"text": "\\textstyle \\lambda=e^r"
},
{
"math_id": 17,
"text": "\\textstyle \\mathbf{F}=(f_{ij})"
},
{
"math_id": 18,
"text": "\\textstyle \\mathbf{v}"
},
{
"math_id": 19,
"text": "\\textstyle \\mathbf{w}"
},
{
"math_id": 20,
"text": "\\textstyle e_{\\lambda}(f_{ij}) = \\frac{f_{ij}}{\\lambda} \\frac{\\partial \\lambda}{\\partial f_{ij}}"
},
{
"math_id": 21,
"text": "\\textstyle \\lambda"
},
{
"math_id": 22,
"text": "T = \\frac{\\int_{x=0}^{\\infty} x \\ell(x) m(x) \\, \\mathrm{d}x}{\\int_{x=0}^{\\infty} \\ell(x) m(x) \\, \\mathrm{d}x}"
}
] | https://en.wikipedia.org/wiki?curid=9503180 |
9503407 | Standard map | Area-preserving chaotic map from a square with side 2π onto itself
The standard map (also known as the Chirikov–Taylor map or as the Chirikov standard map) is an area-preserving chaotic map from a square with side formula_2 onto itself. It is constructed by a Poincaré's surface of section of the kicked rotator, and is defined by:
formula_3
formula_4
where formula_0 and formula_1 are taken modulo formula_2.
The properties of chaos of the standard map were established by Boris Chirikov in 1969.
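A minimal sketch (not from the article) that iterates the map directly is given below; the kick strength K, the initial condition and the number of iterations are arbitrary choices. Plotting the points returned for several initial conditions reproduces phase-space portraits of the kind described further down.

```python
# Minimal sketch: iterate the standard map for one initial condition.
import numpy as np

def standard_map(theta, p, K, n_steps):
    """Iterate p_{n+1} = p_n + K sin(theta_n), theta_{n+1} = theta_n + p_{n+1},
    both taken modulo 2*pi, and return the visited (theta, p) points."""
    pts = np.empty((n_steps, 2))
    for n in range(n_steps):
        p = (p + K * np.sin(theta)) % (2 * np.pi)
        theta = (theta + p) % (2 * np.pi)
        pts[n] = theta, p
    return pts

orbit = standard_map(theta=2.0, p=0.5, K=0.97, n_steps=1000)
print(orbit[:3])
```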
Physical model.
This map describes the Poincaré's surface of section of the motion of a simple mechanical system known as the kicked rotator. The kicked rotator consists of a stick that is free of the gravitational force, which can rotate frictionlessly in a plane around an axis located in one of its tips, and which is periodically kicked on the other tip.
The standard map is a surface of section applied by a stroboscopic projection on the variables of the kicked rotator. The variables formula_1 and formula_0 respectively determine the angular position of the stick and its angular momentum after the "n"-th kick. The constant "K" measures the intensity of the kicks on the kicked rotator.
The kicked rotator approximates systems studied in the fields of mechanics of particles, accelerator physics, plasma physics, and solid state physics. For example, circular particle accelerators accelerate particles by applying periodic kicks, as they circulate in the beam tube. Thus, the structure of the beam can be approximated by the kicked rotor. However, this map is interesting from a fundamental point of view in physics and mathematics because it is a very simple model of a conservative system that displays Hamiltonian chaos. It is therefore useful to study the development of chaos in this kind of system.
Main properties.
For formula_5 the map is linear and only periodic and quasiperiodic orbits are possible. When plotted in phase space (the θ–"p" plane), periodic orbits appear as closed curves, and quasiperiodic orbits as necklaces of closed curves whose centers lie in another larger closed curve. Which type of orbit is observed depends on the map's initial conditions.
Nonlinearity of the map increases with "K", and with it the possibility to observe chaotic dynamics for appropriate initial conditions. This is illustrated in the figure, which displays a collection of different orbits allowed to the standard map for various values of formula_6. All the orbits shown are periodic or quasiperiodic, with the exception of the green one that is chaotic and develops in a large region of phase space as an apparently random set of points. Particularly remarkable is the extreme uniformity of the distribution in the chaotic region, although this can be deceptive: even within the chaotic regions, there are an infinite number of diminishingly small islands that are never visited during iteration, as shown in the close-up.
Circle map.
The standard map is related to the circle map, which has a single, similar iterated equation:
formula_7
as compared to
formula_8
formula_9
for the standard map, the equations reordered to emphasize similarity. In essence, the circle map forces the momentum to a constant.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p_n"
},
{
"math_id": 1,
"text": "\\theta_n"
},
{
"math_id": 2,
"text": "2\\pi"
},
{
"math_id": 3,
"text": "p_{n+1} = p_n + K \\sin(\\theta_n)"
},
{
"math_id": 4,
"text": "\\theta_{n+1} = \\theta_n + p_{n+1}"
},
{
"math_id": 5,
"text": "K=0"
},
{
"math_id": 6,
"text": "K > 0"
},
{
"math_id": 7,
"text": "\\theta_{n+1} = \\theta_n + \\Omega - K \\sin(\\theta_n)"
},
{
"math_id": 8,
"text": "\\theta_{n+1} = \\theta_n + p_n + K \\sin(\\theta_n)"
},
{
"math_id": 9,
"text": "p_{n+1} = \\theta_{n+1} - \\theta_{n}"
}
] | https://en.wikipedia.org/wiki?curid=9503407 |
9504881 | Lax–Wendroff method | The Lax–Wendroff method, named after Peter Lax and Burton Wendroff, is a numerical method for the solution of hyperbolic partial differential equations, based on finite differences. It is second-order accurate in both space and time. This method is an example of explicit time integration where the function that defines the governing equation is evaluated at the current time.
Definition.
Suppose one has an equation of the following form:
formula_0
where x and t are independent variables, and the initial state, "u"("x", 0) is given.
Linear case.
In the linear case, where "f"("u") = "Au", and "A" is a constant,
formula_1
Here formula_2 refers to the formula_3 dimension and formula_4 refers to the formula_5 dimension.
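A minimal sketch of the linear scheme follows, assuming a scalar constant "A" = "a", periodic boundaries and an arbitrary Gaussian initial profile; none of these choices come from the article.

```python
# Minimal sketch: Lax-Wendroff for linear advection u_t + a u_x = 0,
# periodic boundaries, Gaussian initial pulse (all values illustrative).
import numpy as np

a, L, nx = 1.0, 1.0, 200
dx = L / nx
dt = 0.4 * dx / a                        # Courant number 0.4 (< 1 for stability)
x = np.arange(nx) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)      # initial pulse centred at x = 0.3

c = a * dt / dx
for _ in range(int(round(0.5 / dt))):    # advect until t = 0.5
    up = np.roll(u, -1)                  # u_{i+1} (periodic wrap)
    um = np.roll(u, +1)                  # u_{i-1}
    u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)

print(x[np.argmax(u)])                   # the pulse has moved to about x = 0.8
```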
This linear scheme can be extended to the general non-linear case in different ways. One of them is letting
formula_6
Non-linear case.
The conservative form of Lax-Wendroff for a general non-linear equation is then:
formula_7
where formula_8 is the Jacobian matrix evaluated at formula_9.
Jacobian free methods.
To avoid the Jacobian evaluation, use a two-step procedure.
Richtmyer method.
What follows is the Richtmyer two-step Lax–Wendroff method. The first step in the Richtmyer two-step Lax–Wendroff method calculates values for "f"("u"("x", "t")) at half time steps, "t""n" + 1/2 and half grid points, "x""i" + 1/2. In the second step values at "t""n" + 1 are calculated using the data for "t""n" and "t""n" + 1/2.
First (Lax) steps:
formula_10
formula_11
Second step:
formula_12
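A minimal sketch of the two-step update is shown below, applied, as an arbitrary illustration, to the inviscid Burgers' flux f(u) = u²/2 with periodic boundaries and smooth initial data.

```python
# Minimal sketch: Richtmyer two-step Lax-Wendroff for a scalar conservation law.
# Flux choice (Burgers'), grid and time step are illustrative assumptions.
import numpy as np

def f(u):
    return 0.5 * u**2

def richtmyer_step(u, dt, dx):
    up = np.roll(u, -1)                                   # u_{i+1}
    # First (Lax) step: provisional values at (i + 1/2, n + 1/2)
    u_half = 0.5 * (u + up) - 0.5 * dt / dx * (f(up) - f(u))
    # Second step: conservative update using fluxes at the half points
    return u - dt / dx * (f(u_half) - np.roll(f(u_half), 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)
dx = x[1] - x[0]
for _ in range(100):                                      # evolve while still smooth
    u = richtmyer_step(u, dt=0.002, dx=dx)
print(u.min(), u.max())
```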
MacCormack method.
Another method of this same type was proposed by MacCormack. MacCormack's method uses first forward differencing and then backward differencing:
First step:
formula_13
Second step:
formula_14
Alternatively,
First step:
formula_15
Second step:
formula_16
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\frac{\\partial u(x,t)}{\\partial t} + \\frac{\\partial f(u(x,t))}{\\partial x} = 0"
},
{
"math_id": 1,
"text": " u_i^{n+1} = u_i^n - \\frac{\\Delta t}{2\\Delta x} A\\left[ u_{i+1}^{n} - u_{i-1}^{n} \\right] + \\frac{\\Delta t^2}{2\\Delta x^2} A^2\\left[ u_{i+1}^{n} -2 u_{i}^{n} + u_{i-1}^{n} \\right]."
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": " A(u) = f'(u) = \\frac{\\partial f}{\\partial u}"
},
{
"math_id": 7,
"text": " u_i^{n+1} = u_i^n - \\frac{\\Delta t}{2\\Delta x} \\left[ f(u_{i+1}^{n}) - f(u_{i-1}^{n}) \\right] + \\frac{\\Delta t^2}{2\\Delta x^2} \\left[ A_{i+1/2} \\left(f(u_{i+1}^{n}) - f(u_{i}^{n})\\right) - A_{i-1/2}\\left( f(u_{i}^{n})-f(u_{i-1}^{n})\\right) \\right]."
},
{
"math_id": 8,
"text": "A_{i\\pm 1/2}"
},
{
"math_id": 9,
"text": "\\frac{1}{2} (u^n_i + u^n_{i\\pm 1})"
},
{
"math_id": 10,
"text": " u_{i+1/2}^{n+1/2} = \\frac{1}{2}(u_{i+1}^n + u_{i}^n) - \\frac{\\Delta t}{2\\,\\Delta x}( f(u_{i+1}^n) - f(u_{i}^n) ),"
},
{
"math_id": 11,
"text": " u_{i-1/2}^{n+1/2}= \\frac{1}{2}(u_{i}^n + u_{i-1}^n) - \\frac{\\Delta t}{2\\,\\Delta x}( f(u_{i}^n) - f(u_{i-1}^n) )."
},
{
"math_id": 12,
"text": " u_i^{n+1} = u_i^n - \\frac{\\Delta t}{\\Delta x} \\left[ f(u_{i+1/2}^{n+1/2}) - f(u_{i-1/2}^{n+1/2}) \\right]."
},
{
"math_id": 13,
"text": " u_{i}^{*}= u_{i}^n - \\frac{\\Delta t}{\\Delta x}( f(u_{i+1}^n) - f(u_{i}^n) )."
},
{
"math_id": 14,
"text": " u_i^{n+1} = \\frac{1}{2} (u_{i}^n + u_{i}^*) - \\frac{\\Delta t}{2 \\Delta x} \\left[ f(u_{i}^{*}) - f(u_{i-1}^{*}) \\right]."
},
{
"math_id": 15,
"text": " u_{i}^{*} = u_{i}^n - \\frac{\\Delta t}{\\Delta x}( f(u_{i}^n) - f(u_{i-1}^n) )."
},
{
"math_id": 16,
"text": " u_i^{n+1} = \\frac{1}{2} (u_{i}^n + u_{i}^*) - \\frac{\\Delta t}{2 \\Delta x} \\left[ f(u_{i+1}^{*}) - f(u_{i}^{*}) \\right]."
}
] | https://en.wikipedia.org/wiki?curid=9504881 |
9505941 | List of quantum-mechanical systems with analytical solutions | Much insight in quantum mechanics can be gained from understanding the closed-form solutions to the time-dependent non-relativistic Schrödinger equation. It takes the form
formula_0
where formula_1 is the wave function of the system, formula_2 is the Hamiltonian operator, and formula_3 is time. Stationary states of this equation are found by solving the time-independent Schrödinger equation,
formula_4
which is an eigenvalue equation. Very often, only numerical solutions to the Schrödinger equation can be found for a given physical system and its associated potential energy. However, there exists a subset of physical systems for which the form of the eigenfunctions and their associated energies, or eigenvalues, can be found. These quantum-mechanical systems with analytical solutions are listed below.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\hat{H} \\psi\\left(\\mathbf{r}, t\\right) =\n\\left[ - \\frac{\\hbar^2}{2m} \\nabla^2 + V\\left(\\mathbf{r}\\right) \\right] \\psi\\left(\\mathbf{r}, t\\right) = i\\hbar \\frac{\\partial\\psi\\left(\\mathbf{r}, t\\right)}{\\partial t},\n"
},
{
"math_id": 1,
"text": "\\psi"
},
{
"math_id": 2,
"text": "\\hat{H}"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": "\n\\left[ - \\frac{\\hbar^2}{2m} \\nabla^2 + V\\left(\\mathbf{r}\\right) \\right] \\psi\\left(\\mathbf{r}\\right) = E \\psi \\left(\\mathbf{r}\\right),\n"
}
] | https://en.wikipedia.org/wiki?curid=9505941 |
950777 | Analytic signal | Particular representation of a signal
In mathematics and signal processing, an analytic signal is a complex-valued function that has no negative frequency components. The real and imaginary parts of an analytic signal are real-valued functions related to each other by the Hilbert transform.
The analytic representation of a real-valued function is an "analytic signal", comprising the original function and its Hilbert transform. This representation facilitates many mathematical manipulations. The basic idea is that the negative frequency components of the Fourier transform (or spectrum) of a real-valued function are superfluous, due to the Hermitian symmetry of such a spectrum. These negative frequency components can be discarded with no loss of information, provided one is willing to deal with a complex-valued function instead. That makes certain attributes of the function more accessible and facilitates the derivation of modulation and demodulation techniques, such as single-sideband.
As long as the manipulated function has no negative frequency components (that is, it is still "analytic"), the conversion from complex back to real is just a matter of discarding the imaginary part. The analytic representation is a generalization of the phasor concept: while the phasor is restricted to time-invariant amplitude, phase, and frequency, the analytic signal allows for time-variable parameters.
Definition.
If formula_0 is a "real-valued" function with Fourier transform formula_1 (where formula_2 is the real value denoting frequency), then the transform has Hermitian symmetry about the formula_3 axis:
formula_4
where formula_5 is the complex conjugate of formula_1.
The function:
formula_6
where
contains only the "non-negative frequency" components of formula_1. And the operation is reversible, due to the Hermitian symmetry of formula_1:
formula_9
The analytic signal of formula_0 is the inverse Fourier transform of formula_10:
formula_11
where
Noting that formula_15 this can also be expressed as a filtering operation that directly removes negative frequency components:
formula_16
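A minimal sketch (not from the article) of this construction using the discrete Fourier transform: negative-frequency bins are zeroed and positive-frequency bins doubled, so the real part of the result reproduces the input. The test signal is an arbitrary choice.

```python
# Minimal sketch: analytic signal of a real, evenly sampled signal via the FFT.
import numpy as np

def analytic_signal(s):
    """Return s_a = s + j*Hilbert(s) by one-sided spectrum doubling."""
    N = len(s)
    S = np.fft.fft(s)
    H = np.zeros(N)
    H[0] = 1.0                          # keep the DC bin as-is
    if N % 2 == 0:
        H[N // 2] = 1.0                 # keep the Nyquist bin as-is (even N)
        H[1:N // 2] = 2.0               # double the positive frequencies
    else:
        H[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(S * H)           # negative-frequency bins are zeroed

t = np.arange(1000) / 1000.0
s = np.cos(2 * np.pi * 5 * t)
sa = analytic_signal(s)
print(np.allclose(sa.real, s))          # the real part recovers the original signal
```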
Negative frequency components.
Since formula_17, restoring the negative frequency components is a simple matter of discarding formula_18 which may seem counter-intuitive. The complex conjugate formula_19 comprises "only" the negative frequency components. And therefore formula_20 restores the suppressed positive frequency components. Another viewpoint is that the imaginary component in either case is a term that subtracts frequency components from formula_21 The formula_22 operator removes the subtraction, giving the appearance of adding new components.
formula_23 where formula_24
Examples.
Example 1.
Then:
formula_25
The last equality is Euler's formula, of which a corollary is formula_26 In general, the analytic representation of a simple sinusoid is obtained by expressing it in terms of complex-exponentials, discarding the negative frequency component, and doubling the positive frequency component. And the analytic representation of a sum of sinusoids is the sum of the analytic representations of the individual sinusoids.
Example 2.
Here we use Euler's formula to identify and discard the negative frequency.
formula_27
Then:
formula_28
Example 3.
This is another example of using the Hilbert transform method to remove negative frequency components. Nothing prevents us from computing formula_29 for a complex-valued formula_0. But it might not be a reversible representation, because the original spectrum is not symmetrical in general. So except for this example, the general discussion assumes real-valued formula_0.
formula_30, where formula_31.
Then:
formula_32
Properties.
Instantaneous amplitude and phase.
An analytic signal can also be expressed in polar coordinates:
formula_33
where the following time-variant quantities are introduced:
In the accompanying diagram, the blue curve depicts formula_0 and the red curve depicts the corresponding formula_36.
The time derivative of the unwrapped instantaneous phase has units of "radians/second", and is called the "instantaneous angular frequency":
formula_37
The "instantaneous frequency" (in hertz) is therefore:
formula_38
The instantaneous amplitude, and the instantaneous phase and frequency are in some applications used to measure and detect local features of the signal. Another application of the analytic representation of a signal relates to demodulation of modulated signals. The polar coordinates conveniently separate the effects of amplitude modulation and phase (or frequency) modulation, and effectively demodulates certain kinds of signals.
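A minimal sketch of these quantities for an amplitude-modulated test signal follows; the signal and sampling rate are arbitrary choices, and scipy.signal.hilbert is used because it returns the analytic signal of a real input.

```python
# Minimal sketch: instantaneous amplitude, phase and frequency of an AM tone.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
s = (1.0 + 0.3 * np.cos(2 * np.pi * 3 * t)) * np.cos(2 * np.pi * 50 * t)  # AM tone

sa = hilbert(s)                              # analytic signal s + j*H[s]
amplitude = np.abs(sa)                       # instantaneous amplitude (envelope)
phase = np.unwrap(np.angle(sa))              # unwrapped instantaneous phase
freq = np.diff(phase) / (2 * np.pi) * fs     # instantaneous frequency in Hz

print(amplitude.max(), freq[100:-100].mean())  # envelope peak ~1.3, frequency ~50 Hz
```

The envelope tracks the 3 Hz amplitude modulation while the instantaneous frequency stays at the 50 Hz carrier, illustrating how the polar form separates the two.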
Complex envelope/baseband.
Analytic signals are often shifted in frequency (down-converted) toward 0 Hz, possibly creating [non-symmetrical] negative frequency components:
formula_39
where formula_40 is an arbitrary reference angular frequency.
This function goes by various names, such as "complex envelope" and "complex baseband". The complex envelope is not unique; it is determined by the choice of formula_40. This concept is often used when dealing with passband signals. If formula_0 is a modulated signal, formula_40 might be equated to its carrier frequency.
In other cases, formula_40 is selected to be somewhere in the middle of the desired passband. Then a simple low-pass filter with real coefficients can excise the portion of interest. Another motive is to reduce the highest frequency, which reduces the minimum rate for alias-free sampling. A frequency shift does not undermine the mathematical tractability of the complex signal representation. So in that sense, the down-converted signal is still "analytic". However, restoring the real-valued representation is no longer a simple matter of just extracting the real component. Up-conversion may be required, and if the signal has been sampled (discrete-time), interpolation (upsampling) might also be necessary to avoid aliasing.
If formula_40 is chosen larger than the highest frequency of formula_41 then formula_42 has no positive frequencies. In that case, extracting the real component restores them, but in reverse order; the low-frequency components are now high ones and vice versa. This can be used to demodulate a type of single-sideband signal called "lower sideband" or "inverted sideband".
Other choices of reference frequency are sometimes considered:
In the field of time-frequency signal processing, it was shown that the analytic signal was needed in the definition of the Wigner–Ville distribution so that the method can have the desirable properties needed for practical applications.
Sometimes the phrase "complex envelope" is given the simpler meaning of the complex amplitude of a (constant-frequency) phasor;
other times the complex envelope formula_48 as defined above is interpreted as a time-dependent generalization of the complex amplitude. Their relationship is not unlike that in the real-valued case: varying envelope generalizing constant amplitude.
Extensions of the analytic signal to signals of multiple variables.
The concept of analytic signal is well-defined for signals of a single variable which typically is time. For signals of two or more variables, an analytic signal can be defined in different ways, and two approaches are presented below.
Multi-dimensional analytic signal based on an ad hoc direction.
A straightforward generalization of the analytic signal can be done for a multi-dimensional signal once it is established what is meant by "negative frequencies" for this case. This can be done by introducing a unit vector formula_49 in the Fourier domain and labelling any frequency vector formula_50 as negative if formula_51. The analytic signal is then produced by removing all negative frequencies and multiplying the result by 2, in accordance with the procedure described for the case of one-variable signals. However, there is no particular direction for formula_49 that must be chosen unless there are some additional constraints. Therefore, the choice of formula_49 is ad hoc, or application specific.
The monogenic signal.
The real and imaginary parts of the analytic signal correspond to the two elements of the vector-valued monogenic signal, as it is defined for one-variable signals. However, the monogenic signal can be extended to arbitrary number of variables in a straightforward manner, producing an ("n" + 1)-dimensional vector-valued function for the case of "n"-variable signals.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "s(t)"
},
{
"math_id": 1,
"text": "S(f)"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "f = 0"
},
{
"math_id": 4,
"text": "S(-f) = S(f)^*,"
},
{
"math_id": 5,
"text": "S(f)^*"
},
{
"math_id": 6,
"text": "\n\\begin{align}\nS_\\mathrm{a}(f) &\\triangleq\n\\begin{cases}\n2S(f), &\\text{for}\\ f > 0,\\\\\nS(f), &\\text{for}\\ f = 0,\\\\\n0, &\\text{for}\\ f < 0\n\\end{cases}\\\\\n&= \\underbrace{2 \\operatorname{u}(f)}_{1 + \\sgn(f)}S(f) = S(f) + \\sgn(f)S(f),\n\\end{align}\n"
},
{
"math_id": 7,
"text": "\\operatorname{u}(f)"
},
{
"math_id": 8,
"text": "\\sgn(f)"
},
{
"math_id": 9,
"text": " \n\\begin{align}\nS(f) &=\n\\begin{cases}\n\\frac{1}{2}S_\\mathrm{a}(f), &\\text{for}\\ f > 0,\\\\\nS_\\mathrm{a}(f), &\\text{for}\\ f = 0,\\\\\n\\frac{1}{2}S_\\mathrm{a}(-f)^*, &\\text{for}\\ f < 0\\ \\text{(Hermitian symmetry)}\n\\end{cases}\\\\\n&= \\frac{1}{2}[S_\\mathrm{a}(f) + S_\\mathrm{a}(-f)^*].\n\\end{align}\n"
},
{
"math_id": 10,
"text": "S_\\mathrm{a}(f)"
},
{
"math_id": 11,
"text": "\\begin{align}\ns_\\mathrm{a}(t) &\\triangleq \\mathcal{F}^{-1}[S_\\mathrm{a}(f)]\\\\\n&= \\mathcal{F}^{-1}[S (f)+ \\sgn(f) \\cdot S(f)]\\\\\n&= \\underbrace{\\mathcal{F}^{-1}\\{S(f)\\}}_{s(t)} + \\overbrace{\n\\underbrace{\\mathcal{F}^{-1}\\{\\sgn(f)\\}}_{j\\frac{1}{\\pi t}} * \\underbrace{\\mathcal{F}^{-1}\\{S(f)\\}}_{s(t)}\n}^\\text{convolution}\\\\\n&= s(t) + j\\underbrace{\\left[{1 \\over \\pi t} * s(t)\\right]}_{\\operatorname{\\mathcal{H}}[s(t)]}\\\\\n&= s(t) + j\\hat{s}(t),\n\\end{align}"
},
{
"math_id": 12,
"text": "\\hat{s}(t) \\triangleq \\operatorname{\\mathcal{H}}[s(t)]"
},
{
"math_id": 13,
"text": "*"
},
{
"math_id": 14,
"text": "j"
},
{
"math_id": 15,
"text": "s(t)= s(t)*\\delta(t),"
},
{
"math_id": 16,
"text": "s_\\mathrm{a}(t) = s(t)*\\underbrace{\\left[\\delta(t)+ j{1 \\over \\pi t}\\right]}_{\\mathcal{F}^{-1}\\{2u(f)\\}}."
},
{
"math_id": 17,
"text": "s(t) = \\operatorname{Re}[s_\\mathrm{a}(t)]"
},
{
"math_id": 18,
"text": "\\operatorname{Im}[s_\\mathrm{a}(t)]"
},
{
"math_id": 19,
"text": "s_\\mathrm{a}^*(t)"
},
{
"math_id": 20,
"text": "s(t) = \\operatorname{Re}[s_\\mathrm{a}^*(t)]"
},
{
"math_id": 21,
"text": "s(t)."
},
{
"math_id": 22,
"text": "\\operatorname{Re}"
},
{
"math_id": 23,
"text": "s(t) = \\cos(\\omega t),"
},
{
"math_id": 24,
"text": "\\omega > 0."
},
{
"math_id": 25,
"text": "\\begin{align}\n \\hat{s}(t) &= \\cos\\left(\\omega t - \\frac{\\pi}{2}\\right) = \\sin(\\omega t), \\\\\n s_\\mathrm{a}(t) &= s(t) + j\\hat{s}(t) = \\cos(\\omega t) + j\\sin(\\omega t) = e^{j\\omega t}.\n\\end{align}"
},
{
"math_id": 26,
"text": "\\cos(\\omega t) = \\frac{1}{2} \\left(e^{j\\omega t} + e^{j (-\\omega) t}\\right)."
},
{
"math_id": 27,
"text": "s(t) = \\cos(\\omega t + \\theta) = \\frac{1}{2} \\left(e^{j (\\omega t+\\theta)} + e^{-j (\\omega t+\\theta)}\\right)"
},
{
"math_id": 28,
"text": "s_\\mathrm{a}(t) = \n\\begin{cases}\ne^{j(\\omega t + \\theta)} \\ \\ = \\ e^{j |\\omega| t}\\cdot e^{j\\theta} , & \\text{if} \\ \\omega > 0, \\\\\ne^{-j(\\omega t + \\theta)} = \\ e^{j |\\omega| t}\\cdot e^{-j\\theta} , & \\text{if} \\ \\omega < 0.\n\\end{cases}\n"
},
{
"math_id": 29,
"text": "s_\\mathrm{a}(t)"
},
{
"math_id": 30,
"text": "s(t) = e^{-j\\omega t}"
},
{
"math_id": 31,
"text": "\\omega > 0"
},
{
"math_id": 32,
"text": "\\begin{align}\n \\hat{s}(t) &= je^{-j\\omega t}, \\\\\n s_\\mathrm{a}(t) &= e^{-j\\omega t} + j^2 e^{-j\\omega t} = e^{-j\\omega t} - e^{-j\\omega t} = 0.\n\\end{align}"
},
{
"math_id": 33,
"text": "s_\\mathrm{a}(t) = s_\\mathrm{m}(t)e^{j\\phi(t)},"
},
{
"math_id": 34,
"text": "s_\\mathrm{m}(t) \\triangleq |s_\\mathrm{a}(t)|"
},
{
"math_id": 35,
"text": "\\phi(t) \\triangleq \\arg\\!\\left[s_\\mathrm{a}(t)\\right]"
},
{
"math_id": 36,
"text": "s_\\mathrm{m}(t)"
},
{
"math_id": 37,
"text": "\\omega(t) \\triangleq \\frac{d\\phi}{dt}(t)."
},
{
"math_id": 38,
"text": "f(t)\\triangleq \\frac{1}{2\\pi}\\omega(t)."
},
{
"math_id": 39,
"text": "{s_\\mathrm{a}}_{\\downarrow}(t) \\triangleq s_\\mathrm{a}(t)e^{-j\\omega_0 t} = s_\\mathrm{m}(t)e^{j(\\phi(t) - \\omega_0 t)},"
},
{
"math_id": 40,
"text": "\\omega_0"
},
{
"math_id": 41,
"text": "s_\\mathrm{a}(t),"
},
{
"math_id": 42,
"text": "{s_\\mathrm{a}}_{\\downarrow}(t)"
},
{
"math_id": 43,
"text": "\\int_0^{+\\infty}(\\omega - \\omega_0)^2|S_\\mathrm{a}(\\omega)|^2\\, d\\omega."
},
{
"math_id": 44,
"text": "\\phi(t)"
},
{
"math_id": 45,
"text": "\\int_{-\\infty}^{+\\infty}[\\omega(t) - \\omega_0]^2 |s_\\mathrm{a}(t)|^2\\, dt"
},
{
"math_id": 46,
"text": "\\theta"
},
{
"math_id": 47,
"text": "\\int_{-\\infty}^{+\\infty}[\\phi(t) - (\\omega_0 t + \\theta)]^2\\, dt."
},
{
"math_id": 48,
"text": " s_m(t)"
},
{
"math_id": 49,
"text": "\\boldsymbol \\hat{u}"
},
{
"math_id": 50,
"text": "\\boldsymbol \\xi"
},
{
"math_id": 51,
"text": "\\boldsymbol \\xi \\cdot \\boldsymbol \\hat{u} < 0"
}
] | https://en.wikipedia.org/wiki?curid=950777 |
9508771 | Jacobi theta functions (notational variations) | There are a number of notational systems for the Jacobi theta functions. The notations given in the Wikipedia article define the original function
formula_0
which is equivalent to
formula_1
where formula_2 and formula_3.
However, a similar notation is defined somewhat differently in Whittaker and Watson, p. 487:
formula_4
This notation is attributed to "Hermite, H.J.S. Smith and some other mathematicians". They also define
formula_5
This is a factor of "i" off from the definition of formula_6 as defined in the Wikipedia article. These definitions can be made at least proportional by "x" = "za", but other definitions cannot. Whittaker and Watson, Abramowitz and Stegun, and Gradshteyn and Ryzhik all follow Tannery and Molk, in which
formula_7
formula_8
formula_9
formula_10
Note that there is no factor of π in the argument as in the previous definitions.
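A small numerical check (not taken from the sources cited here) of how the conventions line up: with q = e^{iπτ}, the Tannery–Molk θ₃ evaluated at πz agrees with ϑ₀₀(z; τ) as defined at the top of this article. The values of τ and z are arbitrary test points, and the series are simply truncated.

```python
# Minimal sketch: compare two theta-function conventions with truncated sums.
import numpy as np

def vartheta_00(z, tau, N=50):
    n = np.arange(-N, N + 1)
    return np.sum(np.exp(1j * np.pi * n**2 * tau + 2j * np.pi * n * z))

def theta_3(z, q, N=50):                   # Tannery-Molk / Whittaker-Watson form
    n = np.arange(-N, N + 1)
    return np.sum(q ** (n**2) * np.exp(2j * n * z))

tau = 1j                                   # arbitrary point in the upper half-plane
q = np.exp(1j * np.pi * tau)
z = 0.3 + 0.1j                             # arbitrary test argument
print(np.isclose(theta_3(np.pi * z, q), vartheta_00(z, tau)))   # True
```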
Whittaker and Watson refer to still other definitions of formula_11. The warning in Abramowitz and Stegun, "There is a bewildering variety of notations...in consulting books caution should be exercised," may be viewed as an understatement. In any expression, an occurrence of formula_12 should not be assumed to have any particular definition. It is incumbent upon the author to state what definition of formula_12 is intended. | [
{
"math_id": 0,
"text": "\n\\vartheta_{00}(z; \\tau) = \\sum_{n=-\\infty}^\\infty \\exp (\\pi i n^2 \\tau + 2 \\pi i n z)\n"
},
{
"math_id": 1,
"text": "\n\\vartheta_{00}(w, q) = \\sum_{n=-\\infty}^\\infty q^{n^2} w^{2n}\n"
},
{
"math_id": 2,
"text": "q=e^{\\pi i\\tau}"
},
{
"math_id": 3,
"text": "w=e^{\\pi iz}"
},
{
"math_id": 4,
"text": "\n\\vartheta_{0,0}(x) = \\sum_{n=-\\infty}^\\infty q^{n^2} \\exp (2 \\pi i n x/a)\n"
},
{
"math_id": 5,
"text": "\n\\vartheta_{1,1}(x) = \\sum_{n=-\\infty}^\\infty (-1)^n q^{(n+1/2)^2} \\exp (\\pi i (2 n + 1) x/a)\n"
},
{
"math_id": 6,
"text": "\\vartheta_{11}"
},
{
"math_id": 7,
"text": "\n\\vartheta_1(z) = -i \\sum_{n=-\\infty}^\\infty (-1)^n q^{(n+1/2)^2} \\exp ((2 n + 1) i z)"
},
{
"math_id": 8,
"text": "\n\\vartheta_2(z) = \\sum_{n=-\\infty}^\\infty q^{(n+1/2)^2} \\exp ((2 n + 1) i z)"
},
{
"math_id": 9,
"text": "\n\\vartheta_3(z) = \\sum_{n=-\\infty}^\\infty q^{n^2} \\exp (2 n i z)"
},
{
"math_id": 10,
"text": "\n\\vartheta_4(z) = \\sum_{n=-\\infty}^\\infty (-1)^n q^{n^2} \\exp (2 n i z)"
},
{
"math_id": 11,
"text": "\\vartheta_j"
},
{
"math_id": 12,
"text": "\\vartheta(z)"
}
] | https://en.wikipedia.org/wiki?curid=9508771 |
950902 | Bagirmi language | Nilo-Saharan language of Chad and Nigeria
Bagirmi (also Baguirmi; autonym: "tàrà ɓármà") is the language of the Bagirmi people of Chad belonging to the Central Sudanic family, which has been tentatively classified as part of the Nilo-Saharan superfamily. It was spoken by 44,761 people in 1993, mainly in the Chari-Baguirmi Region, as well as in Mokofi sub-prefecture of Guéra Region. It was the language of the Sultanate of Bagirmi (1522-1871) and then the Wadai Empire before the Scramble for Africa.
During the 1990s, Bagirmi was given written form and texts providing basic literacy instruction were composed through the efforts of Don and Orpha Raun, Christian missionaries of the Church of the Lutheran Brethren of America, late in their Chadian careers. In 2003, Anthony Kimball developed a font to support the Bagirmi alphabet and a Keyman input method for Latin keyboards, and the body of published Baguirmi literature continues to expand. The majority of this literature was distributed in Chad by David Raun, a missionary and the son of Don and Orpha Raun, at a token cost as a service to the Bagirmi-speaking peoples of Chad.
Phonology.
Consonants.
The consonant table presented below contains sounds which are supposed to be native to Bagirmi. The sounds f, v, z, ʃ and h are heard in loan-words.
The sounds given in brackets are variants (not specific phonemes).
si - milk
ji - hand
ri -name
lua - year
mʷu - grass
tut(u) - dry
deb(e) - person
tej(e) - honey
gèl(e) - lefthand
ro - body
tòt(o) - hill
kʷɔrlo - giraffe
kʷɔlɛ - pot
mà kàb(e) - I shall go
köndèi - small basket
Grammar.
Nouns.
Most of the nouns in Bagirmi are disyllabic and the common noun form is a consonant + vowel + consonant + vowel. The final vowel is usually semi-mute.
Examples:
- child
- shadow
The simplest form of nouns in Bagirmi is monosyllabic and usually consists of a consonant and a vowel.
Examples:
- body
- foreigner
- night
In the Bagirmi language, plurality of nouns is marked by the suffix . This rule applies not only to the simple noun but also to its qualifiers, and to the final element of noun compounds and genitive constructions. In such cases the suffix is added only once, at the end of the noun phrase.
Examples:
(eye) formula_0 (eyes)
(sheep) formula_0 (sheeps)
Forms denoting sex.
To indicate sex (man, male) or (woman, female) should be added to a noun.
Examples:
- boy
- girl
Adjectives.
Most words in adjectival constructions act as nominal or verbal roots and cannot be differentiated from them (except that they are more subject to reduplication). These words are "adjectives" only by virtue of their use. Many of these words can also take both nominal and verbal affixes.
Pronouns.
Personal pronouns in Bagirmi are used as:
Examples:
– I am a man
– you are a Bagirmi
– he/she is big
– I myself
– we ourselves
B. forms of personal mention applied as the object of a verb, as the possessor in the genitive case, and also after prepositions.
The first and third persons: sing. and are used after a consonant, and after a vowel.
Examples:
As object of verb:
(following a consonant) – they see me
(following a vowel) – they leave me
After a preposition:
– with me
– with you
– with him/her
C. forms of personal mention applied before suffixes and postpositions.
In this position the pronouns do not change, except for the omission of semi-mute vowels. This applies only to the first and third persons of the B-forms.
The examples demonstrate only the general locative postposition .
– on me
– on him
– on us
Verbs.
Verb classes.
For conjugational purposes verbs are divided into five classes based on the form of the verbal roots. Verbal roots mainly have a monosyllabic or disyllabic form. A reliable indicator of class is the presence or absence of the prefix in the Indefinite Aspect or the Infinitive.
Verb aspects.
There are two verb aspects in the Bagirmi language: the Definite Aspect and the Indefinite Aspect. The Definite Aspect applies to complete, momentary actions, while the Indefinite Aspect represents actions that are incomplete or progressive. The Definite Aspect is also used to indicate the Imperative mood. The Indefinite Aspect is marked by the prefix in verbs from Class I and Class II.
Negation.
The negation of verbs is expressed by adding the postposition . Its initial vowel is omitted when preceded by another vowel (except when the pronouns and are used).
Examples:
– I did not eat
– we did not go
– you (pl) do not, or did not, want
There is also a postposition which means “no more”, “no longer”.
Examples:
– I did not do it again
– they did not go there any longer
Word order.
The Bagirmi language maintains a direct word order in the sentence (subject + verb + object).
<templatestyles src="Interlinear/styles.css" />
When it comes to the genitive construction, the possessor always follows the possessed.
Examples:
– captive of the Patia
Adverbs.
In Bagirmi there are only a few words whose function is adverbial and which could be described as adverbs. The majority of adverbial constructions are made up of nouns, pronouns, verbs, or adjectivals, with or without prepositions and postpositions, and may consist of a phrase or even a sentence. Frequently an adverbial phrase is built up by combining a preposition or postposition with a noun or pronoun.
The usual place of adverbials is at the end of a sentence. This position is especially suitable for interrogatives and adverbials of place and manner.
Numerals.
In Bagirmi there are no ordinal numbers; order is expressed only by the cardinals, adverbs and postpositions. Adverbials of repetition ("times") can be expressed by using mʷot(o) (under).
Example:
- He came on the third day
- He came fifth
- He did this ten times
Word order.
In the Bagirmi language the order of numeral and noun is reversed: the numeral follows the noun it qualifies.
Examples:
- two days
- five months
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightarrow"
}
] | https://en.wikipedia.org/wiki?curid=950902 |
950971 | Mohr–Coulomb theory | Mathematical model in materials science
Mohr–Coulomb theory is a mathematical model (see yield surface) describing the response of brittle materials such as concrete, or rubble piles, to shear stress as well as normal stress. Most of the classical engineering materials follow this rule in at least a portion of their shear failure envelope. Generally the theory applies to materials for which the compressive strength far exceeds the tensile strength.
In geotechnical engineering it is used to define shear strength of soils and rocks at different effective stresses.
In structural engineering it is used to determine failure load as well as the angle of fracture of a displacement fracture in concrete and similar materials. Coulomb's friction hypothesis is used to determine the combination of shear and normal stress that will cause a fracture of the material. Mohr's circle is used to determine which principal stresses will produce this combination of shear and normal stress, and the angle of the plane in which this will occur. According to the principle of normality the stress introduced at failure will be perpendicular to the line describing the fracture condition.
It can be shown that a material failing according to Coulomb's friction hypothesis will show the displacement introduced at failure forming an angle to the line of fracture equal to the angle of friction. This makes the strength of the material determinable by comparing the external mechanical work introduced by the displacement and the external load with the internal mechanical work introduced by the strain and stress at the line of failure. By conservation of energy the sum of these must be zero and this will make it possible to calculate the failure load of the construction.
A common improvement of this model is to combine Coulomb's friction hypothesis with Rankine's principal stress hypothesis to describe a separation fracture. An alternative view derives the Mohr-Coulomb criterion as extension failure.
History of the development.
The Mohr–Coulomb theory is named in honour of Charles-Augustin de Coulomb and Christian Otto Mohr. Coulomb's contribution was a 1776 essay entitled "Essai sur une application des règles des maximis et minimis à quelques problèmes de statique relatifs à l'architecture"
Mohr developed a generalised form of the theory around the end of the 19th century.
As the generalised form affected the interpretation of the criterion, but not the substance of it, some texts continue to refer to the criterion as simply the 'Coulomb criterion'.
Mohr–Coulomb failure criterion.
The Mohr–Coulomb failure criterion represents the linear envelope that is obtained from a plot of the shear strength of a material versus the applied normal stress. This relation is expressed as
formula_0
where formula_1 is the shear strength, formula_2 is the normal stress, formula_3 is the intercept of the failure envelope with the formula_1 axis, and formula_4 is the slope of the failure envelope. The quantity formula_3 is often called the cohesion and the angle formula_5 is called the angle of internal friction. Compression is assumed to be positive in the following discussion. If compression is assumed to be negative then formula_2 should be replaced with formula_6.
If formula_7, the Mohr–Coulomb criterion reduces to the Tresca criterion. On the other hand, if formula_8 the Mohr–Coulomb model is equivalent to the Rankine model. Higher values of formula_5 are not allowed.
From Mohr's circle we have
formula_9
where
formula_10
and formula_11 is the maximum principal stress and formula_12 is the minimum principal stress.
Therefore, the Mohr–Coulomb criterion may also be expressed as
formula_13
This form of the Mohr–Coulomb criterion is applicable to failure on a plane that is parallel to the formula_14 direction.
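A minimal sketch of evaluating this form of the criterion from the major and minor principal stresses follows; the cohesion and friction-angle values are illustrative and are not taken from the tables referenced later.

```python
# Minimal sketch: Mohr-Coulomb check tau_m >= sigma_m*sin(phi) + c*cos(phi)
# (compression positive).  Parameter values are illustrative assumptions.
import math

def mohr_coulomb_margin(sigma1, sigma3, c, phi_deg):
    """Return tau_m - (sigma_m*sin(phi) + c*cos(phi)); >= 0 indicates failure."""
    phi = math.radians(phi_deg)
    tau_m = 0.5 * (sigma1 - sigma3)
    sigma_m = 0.5 * (sigma1 + sigma3)
    return tau_m - (sigma_m * math.sin(phi) + c * math.cos(phi))

# Illustrative soil-like parameters: c = 10 kPa, phi = 30 degrees, stresses in kPa.
print(mohr_coulomb_margin(sigma1=200.0, sigma3=50.0, c=10.0, phi_deg=30.0))
```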
Mohr–Coulomb failure criterion in three dimensions.
The Mohr–Coulomb criterion in three dimensions is often expressed as
formula_15
The Mohr–Coulomb failure surface is a cone with a hexagonal cross section in deviatoric stress space.
The expressions for formula_1 and formula_2 can be generalized to three dimensions by developing expressions for the normal stress and the resolved shear stress on a plane of arbitrary orientation with respect to the coordinate axes (basis vectors). If the unit normal to the plane of interest is
formula_16
where formula_17 are three orthonormal unit basis vectors, and if the principal stresses formula_18 are aligned with the basis vectors formula_19, then the expressions for formula_20 are
formula_21
The Mohr–Coulomb failure criterion can then be evaluated using the usual expression
formula_22
for the six planes of maximum shear stress.
Mohr–Coulomb failure surface in Haigh–Westergaard space.
The Mohr–Coulomb failure (yield) surface is often expressed in Haigh–Westergaard coordinates. For example, the function
formula_23
can be expressed as
formula_24
Alternatively, in terms of the invariants formula_25 we can write
formula_26
where formula_27
Mohr–Coulomb yield and plasticity.
The Mohr–Coulomb yield surface is often used to model the plastic flow of geomaterials (and other cohesive-frictional materials). Many such materials show dilatational behavior under triaxial states of stress which the Mohr–Coulomb model does not include. Also, since the yield surface has corners, it may be inconvenient to use the original Mohr–Coulomb model to determine the direction of plastic flow (in the flow theory of plasticity).
A common approach is to use a non-associated plastic flow potential that is smooth. An example of such a potential is the function
formula_28
where formula_29 is a parameter, formula_30 is the value of formula_3 when the plastic strain is zero (also called the initial cohesion yield stress), formula_31 is the angle made by the yield surface in the Rendulic plane at high values of formula_32 (this angle is also called the dilation angle), and formula_33 is an appropriate function that is also smooth in the deviatoric stress plane.
Typical values of cohesion and angle of internal friction.
Cohesion (alternatively called the cohesive strength) and friction angle values for rocks and some common soils are listed in the tables below.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\n \\tau = \\sigma~\\tan(\\phi) + c\n "
},
{
"math_id": 1,
"text": "\\tau"
},
{
"math_id": 2,
"text": "\\sigma"
},
{
"math_id": 3,
"text": "c"
},
{
"math_id": 4,
"text": "\\tan(\\phi)"
},
{
"math_id": 5,
"text": "\\phi"
},
{
"math_id": 6,
"text": "-\\sigma"
},
{
"math_id": 7,
"text": "\\phi = 0"
},
{
"math_id": 8,
"text": "\\phi = 90^\\circ"
},
{
"math_id": 9,
"text": " \\sigma = \\sigma_m - \\tau_m \\sin\\phi ~;~~ \\tau = \\tau_m \\cos\\phi "
},
{
"math_id": 10,
"text": " \\tau_m = \\cfrac{\\sigma_1-\\sigma_3}{2} ~;~~ \\sigma_m = \\cfrac{\\sigma_1+\\sigma_3}{2} "
},
{
"math_id": 11,
"text": "\\sigma_1"
},
{
"math_id": 12,
"text": "\\sigma_3"
},
{
"math_id": 13,
"text": " \\tau_m = \\sigma_m \\sin\\phi + c \\cos\\phi ~. "
},
{
"math_id": 14,
"text": "\\sigma_2"
},
{
"math_id": 15,
"text": "\n\\left\\{\\begin{align}\n \\pm\\cfrac{\\sigma_1 - \\sigma_2}{2} & = \\left[\\cfrac{\\sigma_1 + \\sigma_2}{2}\\right]\\sin(\\phi) + c\\cos(\\phi) \\\\\n \\pm\\cfrac{\\sigma_2 - \\sigma_3}{2} & = \\left[\\cfrac{\\sigma_2 + \\sigma_3}{2}\\right]\\sin(\\phi) + c\\cos(\\phi)\\\\\n \\pm\\cfrac{\\sigma_3 - \\sigma_1}{2} & = \\left[\\cfrac{\\sigma_3 + \\sigma_1}{2}\\right]\\sin(\\phi) + c\\cos(\\phi).\n\\end{align}\\right.\n"
},
{
"math_id": 16,
"text": "\n \\mathbf{n} = n_1~\\mathbf{e}_1 + n_2~\\mathbf{e}_2 + n_3~\\mathbf{e}_3\n "
},
{
"math_id": 17,
"text": "\\mathbf{e}_i,~~ i=1,2,3"
},
{
"math_id": 18,
"text": "\\sigma_1, \\sigma_2, \\sigma_3"
},
{
"math_id": 19,
"text": "\\mathbf{e}_1, \\mathbf{e}_2, \\mathbf{e}_3"
},
{
"math_id": 20,
"text": "\\sigma,\\tau"
},
{
"math_id": 21,
"text": "\n\\begin{align}\n \\sigma & = n_1^2 \\sigma_{1} + n_2^2 \\sigma_{2} + n_3^2 \\sigma_{3} \\\\\n \\tau & = \\sqrt{(n_1\\sigma_{1})^2 + (n_2\\sigma_{2})^2 + (n_3\\sigma_{3})^2 - \\sigma^2} \\\\\n & = \\sqrt{n_1^2 n_2^2 (\\sigma_1-\\sigma_2)^2 + n_2^2 n_3^2 (\\sigma_2-\\sigma_3)^2 +\n n_3^2 n_1^2 (\\sigma_3 - \\sigma_1)^2}.\n\\end{align}\n "
},
{
"math_id": 22,
"text": " \\tau = \\sigma~\\tan(\\phi) + c "
},
{
"math_id": 23,
"text": " \\cfrac{\\sigma_1-\\sigma_3}{2} = \\cfrac{\\sigma_1+\\sigma_3}{2}~\\sin\\phi + c\\cos\\phi "
},
{
"math_id": 24,
"text": "\n \\left[\\sqrt{3}~\\sin\\left(\\theta+\\cfrac{\\pi}{3}\\right) - \\sin\\phi\\cos\\left(\\theta+\\cfrac{\\pi}{3}\\right)\\right]\\rho - \\sqrt{2}\\sin(\\phi)\\xi = \\sqrt{6} c \\cos\\phi.\n "
},
{
"math_id": 25,
"text": "p, q, r"
},
{
"math_id": 26,
"text": "\n \\left[\\cfrac{1}{\\sqrt{3}~\\cos\\phi}~\\sin\\left(\\theta+\\cfrac{\\pi}{3}\\right) - \\cfrac{1}{3}\\tan\\phi~\\cos\\left(\\theta+\\cfrac{\\pi}{3}\\right)\\right]q - p~\\tan\\phi = c\n "
},
{
"math_id": 27,
"text": " \\theta = \\cfrac{1}{3}\\arccos\\left[\\left(\\cfrac{r}{q}\\right)^3\\right] ~. "
},
{
"math_id": 28,
"text": " g:= \\sqrt{(\\alpha c_\\mathrm{y} \\tan\\psi)^2 + G^2(\\phi, \\theta)~ q^2} - p \\tan\\phi "
},
{
"math_id": 29,
"text": "\\alpha"
},
{
"math_id": 30,
"text": "c_\\mathrm{y}"
},
{
"math_id": 31,
"text": "\\psi"
},
{
"math_id": 32,
"text": "p"
},
{
"math_id": 33,
"text": "G(\\phi,\\theta)"
}
] | https://en.wikipedia.org/wiki?curid=950971 |
9511414 | Okapi BM25 | Ranking function used by search engines
In information retrieval, Okapi BM25 ("BM" is an abbreviation of "best matching") is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Spärck Jones, and others.
The name of the actual ranking function is "BM25". The fuller name, "Okapi BM25", includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London's City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval.
The ranking function.
BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document, regardless of their proximity within the document. It is a family of scoring functions with slightly different components and parameters. One of the most prominent instantiations of the function is as follows.
Given a query Q, containing keywords formula_0, the BM25 score of a document D is:
formula_1
where formula_2 is the number of times that the keyword formula_3 occurs in the document D, formula_4 is the length of the document D in words, and avgdl is the average document length in the text collection from which documents are drawn. formula_5 and b are free parameters, usually chosen, in absence of an advanced optimization, as formula_6 and formula_7. formula_8 is the IDF (inverse document frequency) weight of the query term formula_3. It is usually computed as:
formula_9
where N is the total number of documents in the collection, and formula_10 is the number of documents containing formula_3.
There are several interpretations for IDF and slight variations on its formula. In the original BM25 derivation, the IDF component is derived from the Binary Independence Model.
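For illustration only, a minimal Python sketch of the scoring function above is given below; the toy corpus, tokenization and parameter values are assumptions, not part of any reference implementation.

import math

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Score one document (a list of terms) against a query, using the
    IDF variant ln(((N - n + 0.5) / (n + 0.5)) + 1) given above."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    dl = len(doc_terms)
    score = 0.0
    for q in query_terms:
        n_q = sum(1 for d in corpus if q in d)             # documents containing q
        idf = math.log((N - n_q + 0.5) / (n_q + 0.5) + 1)
        f = doc_terms.count(q)                             # term frequency in this document
        score += idf * (f * (k1 + 1)) / (f + k1 * (1 - b + b * dl / avgdl))
    return score

# Toy corpus of tokenized documents (illustrative only)
corpus = [["the", "okapi", "at", "the", "zoo"],
          ["bm25", "is", "a", "ranking", "function"],
          ["okapi", "bm25", "ranking"]]
print(bm25_score(["okapi", "bm25"], corpus[2], corpus))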
IDF information theoretic interpretation.
Here is an interpretation from information theory. Suppose a query term formula_11 appears in formula_12 documents. Then a randomly picked document formula_13 will contain the term with probability formula_14 (where formula_15 is again the cardinality of the set of documents in the collection). Therefore, the information content of the message "formula_13 contains formula_11" is:
formula_16
Now suppose we have two query terms formula_17 and formula_18. If the two terms occur in documents entirely independently of each other, then the probability of seeing both formula_17 and formula_18 in a randomly picked document formula_13 is:
formula_19
and the information content of such an event is:
formula_20
With a small variation, this is exactly what is expressed by the IDF component of BM25.
One notable modification, BM25+, adds a free parameter formula_23 as a lower bound on the term-frequency normalization, addressing the over-penalization of long documents and yielding the score:
formula_24
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "q_1, ..., q_n"
},
{
"math_id": 1,
"text": " \\text{score}(D,Q) = \\sum_{i=1}^{n} \\text{IDF}(q_i) \\cdot \\frac{f(q_i, D) \\cdot (k_1 + 1)}{f(q_i, D) + k_1 \\cdot \\left(1 - b + b \\cdot \\frac{|D|}{\\text{avgdl}}\\right)}"
},
{
"math_id": 2,
"text": "f(q_i, D)"
},
{
"math_id": 3,
"text": "q_i"
},
{
"math_id": 4,
"text": "|D|"
},
{
"math_id": 5,
"text": "k_1"
},
{
"math_id": 6,
"text": "k_1 \\in [1.2,2.0]"
},
{
"math_id": 7,
"text": "b = 0.75"
},
{
"math_id": 8,
"text": "\\text{IDF}(q_i)"
},
{
"math_id": 9,
"text": "\\text{IDF}(q_i) = \\ln \\left(\\frac{N - n(q_i) + 0.5}{n(q_i) + 0.5}+1\\right)"
},
{
"math_id": 10,
"text": "n(q_i)"
},
{
"math_id": 11,
"text": "q"
},
{
"math_id": 12,
"text": "n(q)"
},
{
"math_id": 13,
"text": "D"
},
{
"math_id": 14,
"text": "\\frac{n(q)}{N}"
},
{
"math_id": 15,
"text": "N"
},
{
"math_id": 16,
"text": "-\\log \\frac{n(q)}{N} = \\log \\frac{N}{n(q)}."
},
{
"math_id": 17,
"text": "q_1"
},
{
"math_id": 18,
"text": "q_2"
},
{
"math_id": 19,
"text": "\\frac{n(q_1)}{N} \\cdot \\frac{n(q_2)}{N},"
},
{
"math_id": 20,
"text": "\\sum_{i=1}^{2} \\log \\frac{N}{n(q_i)}."
},
{
"math_id": 21,
"text": "b=1"
},
{
"math_id": 22,
"text": "b=0"
},
{
"math_id": 23,
"text": "\\delta"
},
{
"math_id": 24,
"text": " \\text{score}(D,Q) = \\sum_{i=1}^{n} \\text{IDF}(q_i) \\cdot \\left[ \\frac{f(q_i, D) \\cdot (k_1 + 1)}{f(q_i, D) + k_1 \\cdot \\left(1 - b + b \\cdot \\frac{|D|}{\\text{avgdl}}\\right)} + \\delta \\right]"
}
] | https://en.wikipedia.org/wiki?curid=9511414 |
9512449 | Canonical units | A canonical unit is a unit of measurement agreed upon as default in a certain context.
In astrodynamics.
In astrodynamics, canonical units are defined in terms of some important object’s orbit that serves as a reference. In this system, a reference mass, for example the Sun’s, is designated as 1 “canonical mass unit” and the mean distance from the orbiting object to the reference object is considered the “canonical distance unit”.
Canonical units are useful when the precise distances and masses of objects in space are not available. Moreover, by designating the mass of some chosen central or primary object to be “1 canonical mass unit” and the mean distance of the reference object to another object in question to be “1 canonical distance unit”, many calculations can be simplified.
Overview.
The "Canonical Distance Unit" formula_0 is defined to be the mean radius of the reference orbit.
The "Canonical Time Unit" formula_1 is defined by the "gravitational parameter" formula_2:
formula_3
where
formula_4 is the gravitational constant
formula_5 is the mass of the central reference body
In canonical units, the gravitational parameter is given by:
formula_6
Any triplet of numbers, formula_7 formula_8 and formula_9 that satisfy the equation above is a “canonical” set.
The value of the time unit [CTU] can be expressed in another unit system (e.g. the metric system) if the mass and radius of the central body have been determined. Using the above equation and applying dimensional analysis, set the two expressions for formula_2 equal to each other:
formula_10
The time unit ([CTU]) can be converted to another unit system for a more useful quantitative result using the following equation:
formula_11
For Earth-orbiting satellites, approximate unit conversions are as follows:
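As a rough, hedged illustration of the relation above, the following sketch computes the canonical time unit that results from taking Earth's equatorial radius (about 6378.137 km) as the canonical distance unit, with an assumed standard gravitational parameter GM of roughly 3.986e14 m^3/s^2; these numerical values are approximate reference figures, not taken from this article.

import math

GM_earth = 3.986004418e14   # m^3/s^2, approximate standard gravitational parameter of Earth
CDU = 6378.137e3            # m, Earth's equatorial radius taken as 1 canonical distance unit

CTU = math.sqrt(CDU**3 / GM_earth)   # canonical time unit in seconds
speed_unit = CDU / CTU               # 1 CDU/CTU expressed in m/s

print(f"1 CTU      ~ {CTU:.1f} s")                 # roughly 806.8 s (about 13.4 minutes)
print(f"1 CDU/CTU  ~ {speed_unit / 1000:.3f} km/s")  # roughly 7.905 km/s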
Astronomical Unit.
The astronomical unit (AU) is the canonical distance unit for the orbit around the Sun of the combined Earth-Moon system (based on the formerly best-known value). The corresponding time unit is the sidereal year, and the mass is the total mass of the Sun (M☉).
Footnotes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\; \\text{[CDU]} \\;"
},
{
"math_id": 1,
"text": "\\; \\text{[CTU]} \\;"
},
{
"math_id": 2,
"text": "\\; \\mu \\;"
},
{
"math_id": 3,
"text": "\\mu \\equiv G \\, M ~"
},
{
"math_id": 4,
"text": "\\; G \\;"
},
{
"math_id": 5,
"text": "\\; M \\equiv \\text{[CMU]} \\;"
},
{
"math_id": 6,
"text": "\\mu = 1 \\, \\frac{\\;\\text{[CDU]}^3\\,}{\\;\\text{[CTU]}^2 ~}"
},
{
"math_id": 7,
"text": "\\, M \\, ,"
},
{
"math_id": 8,
"text": "\\, \\text{[CDU]}\\, ,"
},
{
"math_id": 9,
"text": "\\, \\text{[CTU]}\\, ,"
},
{
"math_id": 10,
"text": "\\mu \\equiv G \\times M = 1 \\, \\frac{\\;\\text{[CDU]}^3\\,}{\\;\\text{[CTU]}^2\\,} ~"
},
{
"math_id": 11,
"text": " \\text{[CTU]} = \\sqrt{ \\frac{ \\text{[CDU]}^3 }{ G \\, M } } ~"
}
] | https://en.wikipedia.org/wiki?curid=9512449 |
951532 | The Residents Radio Special | The Residents Radio Special is an album released by The Residents in 1977. This cassette was a promotional item issued to radio stations shortly after the release of "Fingerprince". It was soon offered through the mail-order service in limited quantities on cassette. The cassette was re-released in 1980 and 1984. A limited edition, entitled "Eat Exuding Oinks!", was released in 2001, featuring the original radio show and the digitally remastered versions of the songs. A highlight is the first official release of the cover of The Mothers of Invention's "King Kong", with Snakefinger on guitar. | [
{
"math_id": 0,
"text": "\\cong"
}
] | https://en.wikipedia.org/wiki?curid=951532 |
95154 | Associative array | Data structure that associates keys with values
In computer science, an associative array, map, symbol table, or dictionary is an abstract data type that stores a collection of (key, value) pairs, such that each possible key appears at most once in the collection. In mathematical terms, an associative array is a function with "finite" domain. It supports 'lookup', 'remove', and 'insert' operations.
The dictionary problem is the classic problem of designing efficient data structures that implement associative arrays.
The two major solutions to the dictionary problem are hash tables and search trees.
It is sometimes also possible to solve the problem using directly addressed arrays, binary search trees, or other more specialized structures.
Many programming languages include associative arrays as primitive data types, while many other languages provide software libraries that support associative arrays. Content-addressable memory is a form of direct hardware-level support for associative arrays.
Associative arrays have many applications including such fundamental programming patterns as memoization and the decorator pattern.
The name does not come from the associative property known in mathematics. Rather, it arises from the association of values with keys. It is not to be confused with associative processors.
Operations.
In an associative array, the association between a key and a value is often known as a "mapping"; the same word may also be used to refer to the process of creating a new association.
The operations that are usually defined for an associative array are:
Insert or put: add a new formula_0 pair to the collection, mapping the key to its new value. Any existing mapping is overwritten. The arguments to this operation are the key and the value.
Remove or delete: remove a formula_0 pair from the collection, unmapping a given key from its value. The argument to this operation is the key.
Lookup, find, or get: find the value (if any) that is bound to a given key. The argument to this operation is the key, and the value is returned from the operation. If no value is found, some lookup functions raise an exception, while others return a default value (such as zero, null, or a specific value passed to the constructor).
Associative arrays may also include other operations such as determining the number of mappings or constructing an iterator to loop over all the mappings. For such operations, the order in which the mappings are returned is usually implementation-defined.
A multimap generalizes an associative array by allowing multiple values to be associated with a single key. A bidirectional map is a related abstract data type in which the mappings operate in both directions: each value must be associated with a unique key, and a second lookup operation takes a value as an argument and looks up the key associated with that value.
Properties.
The operations of the associative array should satisfy various properties:
where codice_5 and codice_6 are keys, codice_7 is a value, codice_8 is an associative array, and codice_9 creates a new, empty associative array.
Example.
Suppose that the set of loans made by a library is represented in a data structure. Each book in a library may be checked out by one patron at a time. However, a single patron may be able to check out multiple books. Therefore, the information about which books are checked out to which patrons may be represented by an associative array, in which the books are the keys and the patrons are the values. Using notation from Python or JSON, the data structure would be:
"Pride and Prejudice": "Alice",
"Wuthering Heights": "Alice",
"Great Expectations": "John"
A lookup operation on the key "Great Expectations" would return "John". If John returns his book, that would cause a deletion operation, and if Pat checks out a book, that would cause an insertion operation, leading to a different state:
"Pride and Prejudice": "Alice",
"The Brothers Karamazov": "Pat",
"Wuthering Heights": "Alice"
Implementation.
For dictionaries with very few mappings, it may make sense to implement the dictionary using an association list, which is a linked list of mappings. With this implementation, the time to perform the basic dictionary operations is linear in the total number of mappings. However, it is easy to implement and the constant factors in its running time are small.
Another very simple implementation technique, usable when the keys are restricted to a narrow range, is direct addressing into an array: the value for a given key "k" is stored at the array cell "A"["k"], or if there is no mapping for "k" then the cell stores a special sentinel value that indicates the lack of a mapping. This technique is simple and fast, with each dictionary operation taking constant time. However, the space requirement for this structure is the size of the entire keyspace, making it impractical unless the keyspace is small.
The two major approaches for implementing dictionaries are a hash table or a search tree.
Hash table implementations.
The most frequently used general-purpose implementation of an associative array is with a hash table: an array combined with a hash function that separates each key into a separate "bucket" of the array. The basic idea behind a hash table is that accessing an element of an array via its index is a simple, constant-time operation. Therefore, the average overhead of an operation for a hash table is only the computation of the key's hash, combined with accessing the corresponding bucket within the array. As such, hash tables usually perform in O(1) time, and usually outperform alternative implementations.
Hash tables must be able to handle collisions: the mapping by the hash function of two different keys to the same bucket of the array. The two most widespread approaches to this problem are separate chaining and open addressing. In separate chaining, the array does not store the value itself but stores a pointer to another container, usually an association list, that stores all the values matching the hash. By contrast, in open addressing, if a hash collision is found, the table seeks an empty spot in an array to store the value in a deterministic manner, usually by looking at the next immediate position in the array.
Open addressing has a lower cache miss ratio than separate chaining when the table is mostly empty. However, as the table becomes filled with more elements, open addressing's performance degrades exponentially. Additionally, separate chaining uses less memory in most cases, unless the entries are very small (less than four times the size of a pointer).
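As a hedged illustration of separate chaining (not a production implementation), the sketch below keeps colliding key–value pairs in per-bucket lists; the fixed number of buckets and the use of Python's built-in hash function are simplifying assumptions, and no resizing is performed.

class ChainedHashMap:
    """Minimal hash table with separate chaining; no resizing."""
    def __init__(self, n_buckets=16):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # overwrite an existing mapping
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new mapping in this bucket's chain

    def lookup(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

    def remove(self, key):
        bucket = self._bucket(key)
        bucket[:] = [(k, v) for (k, v) in bucket if k != key]

m = ChainedHashMap()
m.insert("Pride and Prejudice", "Alice")
m.insert("Great Expectations", "John")
print(m.lookup("Great Expectations"))    # -> John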
Tree implementations.
Self-balancing binary search trees.
Another common approach is to implement an associative array with a self-balancing binary search tree, such as an AVL tree or a red–black tree.
Compared to hash tables, these structures have both strengths and weaknesses. The worst-case performance of self-balancing binary search trees is significantly better than that of a hash table, with a time complexity in big O notation of O(log "n"). This is in contrast to hash tables, whose worst-case performance involves all elements sharing a single bucket, resulting in O("n") time complexity. In addition, and like all binary search trees, self-balancing binary search trees keep their elements in order. Thus, traversing its elements follows a least-to-greatest pattern, whereas traversing a hash table can result in elements being in seemingly random order. Because they are in order, tree-based maps can also satisfy range queries (find all values between two bounds) whereas a hashmap can only find exact values. However, hash tables have a much better average-case time complexity than self-balancing binary search trees of O(1), and their worst-case performance is highly unlikely when a good hash function is used.
A self-balancing binary search tree can be used to implement the buckets for a hash table that uses separate chaining. This allows for average-case constant lookup, but assures a worst-case performance of O(log "n"). However, this introduces extra complexity into the implementation and may cause even worse performance for smaller hash tables, where the time spent inserting into and balancing the tree is greater than the time needed to perform a linear search on all elements of a linked list or similar data structure.
Other trees.
Associative arrays may also be stored in unbalanced binary search trees or in data structures specialized to a particular type of keys such as radix trees, tries, Judy arrays, or van Emde Boas trees, though the relative performance of these implementations varies. For instance, Judy trees have been found to perform less efficiently than hash tables, while carefully selected hash tables generally perform more efficiently than adaptive radix trees, with potentially greater restrictions on the data types they can handle. The advantages of these alternative structures come from their ability to handle additional associative array operations, such as finding the mapping whose key is the closest to a queried key when the query is absent in the set of mappings.
Ordered dictionary.
The basic definition of a dictionary does not mandate an order. To guarantee a fixed order of enumeration, ordered versions of the associative array are often used. There are two senses of an ordered dictionary: one in which the enumeration order is determined by the sort order of the keys, and one in which the enumeration order reflects the order in which the mappings were inserted.
The latter is more common. Such ordered dictionaries can be implemented using an association list, by overlaying a doubly linked list on top of a normal dictionary, or by moving the actual data out of the sparse (unordered) array and into a dense insertion-ordered one.
Language support.
Associative arrays can be implemented in any programming language as a package and many language systems provide them as part of their standard library. In some languages, they are not only built into the standard system, but have special syntax, often using array-like subscripting.
Built-in syntactic support for associative arrays was introduced in 1969 by SNOBOL4, under the name "table". TMG offered tables with string keys and integer values. MUMPS made multi-dimensional associative arrays, optionally persistent, its key data structure. SETL supported them as one possible implementation of sets and maps. Most modern scripting languages, starting with AWK and including Rexx, Perl, PHP, Tcl, JavaScript, Maple, Python, Ruby, Wolfram Language, Go, and Lua, support associative arrays as a primary container type. In many more languages, they are available as library functions without special syntax.
In Smalltalk, Objective-C, .NET, Python, REALbasic, Swift, VBA and Delphi they are called "dictionaries"; in Perl, Ruby and Seed7 they are called "hashes"; in C++, C#, Java, Go, Clojure, Scala, OCaml, Haskell they are called "maps" (see map (C++), unordered_map (C++), and ); in Common Lisp and Windows PowerShell, they are called "hash tables" (since both typically use this implementation); in Maple and Lua, they are called "tables". In PHP, all arrays can be associative, except that the keys are limited to integers and strings. In JavaScript (see also JSON), all objects behave as associative arrays with string-valued keys, while the Map and WeakMap types take arbitrary objects as keys. In Lua, they are used as the primitive building block for all data structures. In Visual FoxPro, they are called "Collections". The D language also supports associative arrays.
Permanent storage.
Many programs using associative arrays will need to store that data in a more permanent form, such as a computer file. A common solution to this problem is a generalized concept known as "archiving" or "serialization", which produces a text or binary representation of the original objects that can be written directly to a file. This is most commonly implemented in the underlying object model, like .Net or Cocoa, which includes standard functions that convert the internal data into text. The program can create a complete text representation of any group of objects by calling these methods, which are almost always already implemented in the base associative array class.
For programs that use very large data sets, this sort of individual file storage is not appropriate, and a database management system (DB) is required. Some DB systems natively store associative arrays by serializing the data and then storing that serialized data and the key. Individual arrays can then be loaded or saved from the database using the key to refer to them. These key–value stores have been used for many years and have a history as long as that of the more common relational database (RDBs), but a lack of standardization, among other reasons, limited their use to certain niche roles. RDBs were used for these roles in most cases, although saving objects to a RDB can be complicated, a problem known as object-relational impedance mismatch.
After approximately 2010, the need for high-performance databases suitable for cloud computing and more closely matching the internal structure of the programs using them led to a renaissance in the key–value store market. These systems can store and retrieve associative arrays in a native fashion, which can greatly improve performance in common web-related workflows.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(key, value)"
}
] | https://en.wikipedia.org/wiki?curid=95154 |
951614 | Mathematical modelling of infectious diseases | Using mathematical models to understand infectious disease transmission
Mathematical models can project how infectious diseases progress to show the likely outcome of an epidemic (including in plants) and help inform public health and plant health interventions. Models use basic assumptions or collected statistics along with mathematics to find parameters for various infectious diseases and use those parameters to calculate the effects of different interventions, like mass vaccination programs. The modelling can help decide which intervention(s) to avoid and which to trial, or can predict future growth patterns, etc.
History.
The modelling of infectious diseases is a tool that has been used to study the mechanisms by which diseases spread, to predict the future course of an outbreak and to evaluate strategies to control an epidemic.
The first scientist who systematically tried to quantify causes of death was John Graunt in his book "Natural and Political Observations made upon the Bills of Mortality", in 1662. The bills he studied were listings of numbers and causes of deaths published weekly. Graunt's analysis of causes of death is considered the beginning of the "theory of competing risks" which according to Daley and Gani is "a theory that is now well established among modern epidemiologists".
The earliest account of mathematical modelling of spread of disease was carried out in 1760 by Daniel Bernoulli. Trained as a physician, Bernoulli created a mathematical model to defend the practice of inoculating against smallpox. The calculations from this model showed that universal inoculation against smallpox would increase the life expectancy from 26 years 7 months to 29 years 9 months. Daniel Bernoulli's work preceded the modern understanding of germ theory.
In the early 20th century, William Hamer and Ronald Ross applied the law of mass action to explain epidemic behaviour.
The 1920s saw the emergence of compartmental models. The Kermack–McKendrick epidemic model (1927) and the Reed–Frost epidemic model (1928) both describe the relationship between susceptible, infected and immune individuals in a population. The Kermack–McKendrick epidemic model was successful in predicting the behavior of outbreaks very similar to that observed in many recorded epidemics.
Recently, agent-based models (ABMs) have been used in exchange for simpler compartmental models. For example, epidemiological ABMs have been used to inform public health (nonpharmaceutical) interventions against the spread of SARS-CoV-2. Epidemiological ABMs, in spite of their complexity and requiring high computational power, have been criticized for simplifying and unrealistic assumptions. Still, they can be useful in informing decisions regarding mitigation and suppression measures in cases when ABMs are accurately calibrated.
Assumptions.
Models are only as good as the assumptions on which they are based. If a model makes predictions that are out of line with observed results and the mathematics is correct, the initial assumptions must change to make the model useful.
Types of epidemic models.
Stochastic.
"Stochastic" means being or having a random variable. A stochastic model is a tool for estimating probability distributions of potential outcomes by allowing for random variation in one or more inputs over time. Stochastic models depend on the chance variations in risk of exposure, disease and other illness dynamics. Statistical agent-level disease dissemination in small or large populations can be determined by stochastic methods.
Deterministic.
When dealing with large populations, as in the case of tuberculosis, deterministic or compartmental mathematical models are often used. In a deterministic model, individuals in the population are assigned to different subgroups or compartments, each representing a specific stage of the epidemic.
The transition rates from one class to another are mathematically expressed as derivatives, hence the model is formulated using differential equations. While building such models, it must be assumed that the population size in a compartment is differentiable with respect to time and that the epidemic process is deterministic. In other words, the changes in population of a compartment can be calculated using only the history that was used to develop the model.
Sub-exponential growth.
A common explanation for the growth of epidemics holds that 1 person infects 2, those 2 infect 4 and so on and so on with the number of infected doubling every generation.
It is analogous to a game of tag where 1 person tags 2, those 2 tag 4 others who've never been tagged and so on. As this game progresses it becomes increasingly frenetic as the tagged run past the previously tagged to hunt down those who have never been tagged.
Thus this model of an epidemic leads to a curve that grows exponentially until it crashes to zero once the entire population has been infected; that is, it produces neither herd immunity nor the peak and gradual decline seen in reality.
Epidemic models on networks.
Epidemics can be modeled as diseases spreading over networks of contact between people. Such a network can be represented mathematically with a graph and is called the contact network. Every node in a contact network is a representation of an individual and each link (edge) between a pair of nodes represents the contact between them. Links in the contact networks may be used to transmit the disease between the individuals and each disease has its own dynamics on top of its contact network. The combination of disease dynamics under the influence of interventions, if any, on a contact network may be modeled with another network, known as a transmission network. In a transmission network, all the links are responsible for transmitting the disease. If such a network is a locally tree-like network, meaning that any local neighborhood in such a network takes the form of a tree, then the basic reproduction can be written in terms of the average excess degree of the transmission network such that:
formula_0
where formula_1 is the mean-degree (average degree) of the network and formula_2 is the second moment of the transmission network degree distribution. It is, however, not always straightforward to find the transmission network out of the contact network and the disease dynamics. For example, if a contact network can be approximated with an Erdős–Rényi graph with a Poissonian degree distribution, and the disease spreading parameters are as defined in the example above, such that formula_3 is the transmission rate per person and the disease has a mean infectious period of formula_4, then the basic reproduction number is formula_5 since formula_6 for a Poisson distribution.
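As a brief numerical illustration (not from the original text), the mean excess degree formula above can be evaluated directly from a degree sequence of the transmission network; the degree sequence below is purely illustrative.

def r0_from_degrees(degrees):
    """R0 = <k^2>/<k> - 1 for a locally tree-like transmission network."""
    n = len(degrees)
    mean_k = sum(degrees) / n
    mean_k2 = sum(k * k for k in degrees) / n
    return mean_k2 / mean_k - 1

# Illustrative degree sequence of a small transmission network
print(r0_from_degrees([1, 2, 2, 3, 3, 4, 5]))   # ~2.4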
Reproduction number.
The "basic reproduction number" (denoted by "R"0) is a measure of how transferable a disease is. It is the average number of people that a single infectious person will infect over the course of their infection. This quantity determines whether the infection will increase sub-exponentially, die out, or remain constant: if "R"0 > 1, then each person on average infects more than one other person so the disease will spread; if "R"0 < 1, then each person infects fewer than one person on average so the disease will die out; and if "R"0 = 1, then each person will infect on average exactly one other person, so the disease will become "endemic:" it will move throughout the population but not increase or decrease.
Endemic steady state.
An infectious disease is said to be endemic when it can be sustained in a population without the need for external inputs. This means that, on average, each infected person is infecting "exactly" one other person (any more and the number of people infected will grow sub-exponentially and there will be an epidemic, any less and the disease will die out). In mathematical terms, that is:
formula_7
The basic reproduction number ("R"0) of the disease, assuming everyone is susceptible, multiplied by the proportion of the population that is actually susceptible ("S") must be one (since those who are not susceptible do not feature in our calculations as they cannot contract the disease). Notice that this relation means that for a disease to be in the endemic steady state, the higher the basic reproduction number, the lower the proportion of the population susceptible must be, and vice versa. This expression has limitations concerning the susceptible proportion: for example, an "R"0 of 0.5 implies that "S" would have to be 2, a proportion that exceeds the whole population.
Assume the rectangular stationary age distribution and let also the ages of infection have the same distribution for each birth year. Let the average age of infection be "A", for instance when individuals younger than "A" are susceptible and those older than "A" are immune (or infectious). Then it can be shown by an easy argument that the proportion of the population that is susceptible is given by:
formula_8
We reiterate that "L" is the age at which in this model every individual is assumed to die. But the mathematical definition of the endemic steady state can be rearranged to give:
formula_9
Therefore, due to the transitive property:
formula_10
This provides a simple way to estimate the parameter "R"0 using easily available data.
For a population with an exponential age distribution,
formula_11
This allows for the basic reproduction number of a disease given "A" and "L" in either type of population distribution.
Compartmental models in epidemiology.
Compartmental models are formulated as Markov chains. A classic compartmental model in epidemiology is the SIR model, which may be used as a simple model for modelling epidemics. Multiple other types of compartmental models are also employed.
The SIR model.
In 1927, W. O. Kermack and A. G. McKendrick created a model in which they considered a fixed population with only three compartments: susceptible, formula_12; infected, formula_13; and recovered, formula_14. The three classes represent, respectively, the individuals not yet infected, the individuals who are currently infected and capable of spreading the disease, and the individuals who have been infected and then removed from the infectious group, whether through recovery with immunity or through death.
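A minimal sketch of the resulting dynamics is given below; it uses the standard SIR rate equations with transmission rate beta and recovery rate gamma (which are not reproduced in this excerpt) and a simple forward-Euler integration, with purely illustrative parameter values.

def simulate_sir(beta=0.3, gamma=0.1, N=1000, I0=1, days=160, dt=0.1):
    """Forward-Euler integration of the standard SIR equations:
    dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I."""
    S, I, R = N - I0, float(I0), 0.0
    history = []
    steps = int(days / dt)
    for step in range(steps):
        new_inf = beta * S * I / N * dt
        new_rec = gamma * I * dt
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        if step % int(10 / dt) == 0:          # record every 10 simulated days
            history.append((round(step * dt), round(S), round(I), round(R)))
    return history

for day, S, I, R in simulate_sir():
    print(day, S, I, R)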
Other compartmental models.
There are many modifications of the SIR model, including those that include births and deaths, where upon recovery there is no immunity (SIS model), where immunity lasts only for a short period of time (SIRS), where there is a latent period of the disease where the person is not infectious (SEIS and SEIR), and where infants can be born with immunity (MSIR).
Infectious disease dynamics.
Mathematical models need to integrate the increasing volume of data being generated on host-pathogen interactions. Many theoretical studies of the population dynamics, structure and evolution of infectious diseases of plants and animals, including humans, are concerned with this problem.
Research topics include:
Mathematics of mass vaccination.
If the proportion of the population that is immune exceeds the herd immunity level for the disease, then the disease can no longer persist in the population and its transmission dies out. Thus, a disease can be eliminated from a population if enough individuals are immune due to either vaccination or recovery from prior exposure to disease. Examples include the eradication of smallpox, with the last wild case in 1977, and the certification of the eradication of indigenous transmission of 2 of the 3 types of wild poliovirus (type 2 in 2015, after the last reported case in 1999, and type 3 in 2019, after the last reported case in 2012).
The herd immunity level will be denoted "q". Recall that, for a stable state:
formula_15
In turn,
formula_16
which is approximately:
formula_17
"S" will be (1 − "q"), since "q" is the proportion of the population that is immune and "q" + "S" must equal one (since in this simplified model, everyone is either susceptible or immune). Then:
formula_18
Remember that this is the threshold level. Transmission will only die out if the proportion of immune individuals "exceeds" this level due to a mass vaccination programme.
We have just calculated the critical immunization threshold (denoted "qc"). It is the minimum proportion of the population that must be immunized at birth (or close to birth) in order for the infection to die out in the population.
formula_19
Because the fraction of the final size of the population "p" that is never infected can be defined as:
formula_20
Hence,
formula_21
Solving for formula_22, we obtain:
formula_23
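As a hedged numerical example, if a fraction p of the population is observed to have been infected by the end of an outbreak, the relation above yields "R"0 directly, and the critical immunization threshold then follows from formula_19; the attack rate below is purely illustrative.

import math

def r0_from_final_size(p):
    """R0 = -ln(1 - p) / p, from the final-size relation above."""
    return -math.log(1.0 - p) / p

def critical_immunization_threshold(r0):
    """q_c = 1 - 1/R0."""
    return 1.0 - 1.0 / r0

p = 0.8                        # illustrative attack rate: 80% eventually infected
r0 = r0_from_final_size(p)     # ~2.01
qc = critical_immunization_threshold(r0)
print(f"R0 ~ {r0:.2f}, critical immunization threshold ~ {qc:.0%}")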
When mass vaccination cannot exceed the herd immunity.
If the vaccine used is insufficiently effective or the required coverage cannot be reached, the program may fail to exceed "qc". Such a program will protect vaccinated individuals from disease, but may change the dynamics of transmission.
Suppose that a proportion of the population "q" (where "q" < "qc") is immunised at birth against an infection with "R"0 > 1. The vaccination programme changes "R"0 to "Rq" where
formula_24
This change occurs simply because there are now fewer susceptibles in the population who can be infected. "Rq" is simply "R"0 minus those that would normally be infected but that cannot be now since they are immune.
As a consequence of this lower basic reproduction number, the average age of infection "A" will also change to some new value "Aq" in those who have been left unvaccinated.
Recall the relation that linked "R"0, "A" and "L". Assuming that life expectancy has not changed, now:
formula_25
formula_26
But "R"0 = "L"/"A" so:
formula_27
Thus, the vaccination program may raise the average age of infection, and unvaccinated individuals will experience a reduced force of infection due to the presence of the vaccinated group. For a disease that leads to greater clinical severity in older populations, the unvaccinated proportion of the population may experience the disease relatively later in life than would occur in the absence of vaccine.
When mass vaccination exceeds the herd immunity.
If a vaccination program causes the proportion of immune individuals in a population to exceed the critical threshold for a significant length of time, transmission of the infectious disease in that population will stop. If elimination occurs everywhere at the same time, then this can lead to eradication.
Elimination: interruption of endemic transmission of an infectious disease, which occurs if each infected individual infects less than one other, is achieved by maintaining vaccination coverage to keep the proportion of immune individuals above the critical immunization threshold.
Eradication: elimination everywhere at the same time, such that the infectious agent dies out (for example, smallpox and rinderpest).
Reliability.
Models have the advantage of examining multiple outcomes simultaneously, rather than making a single forecast. Models have shown varying degrees of reliability in past pandemics, such as SARS, SARS-CoV-2, swine flu, MERS and Ebola.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "R_0 = \\frac{{\\langle k^2 \\rangle}}{{\\langle k \\rangle}} - 1,"
},
{
"math_id": 1,
"text": "\n{\\langle k \\rangle}\n"
},
{
"math_id": 2,
"text": "\n{\\langle k^2 \\rangle}\n"
},
{
"math_id": 3,
"text": "\\beta"
},
{
"math_id": 4,
"text": "\\dfrac{1}{\\gamma}"
},
{
"math_id": 5,
"text": "R_0 = \\dfrac{\\beta}{\\gamma}{\\langle k \\rangle}"
},
{
"math_id": 6,
"text": "\n{\\langle k^2 \\rangle} -{\\langle k \\rangle}^2 = {\\langle k \\rangle}\n"
},
{
"math_id": 7,
"text": "\n\\ R_0 S \\ = 1.\n"
},
{
"math_id": 8,
"text": "\nS = \\frac{A}{L}.\n"
},
{
"math_id": 9,
"text": "\nS = \\frac {1} {R_0}.\n"
},
{
"math_id": 10,
"text": "\n\\frac {1} {R_0} = \\frac {A} {L} \\Rightarrow R_0 = \\frac {L} {A}.\n"
},
{
"math_id": 11,
"text": "\nR_0 = 1 + \\frac {L} {A}.\n"
},
{
"math_id": 12,
"text": "S(t)"
},
{
"math_id": 13,
"text": "I(t)"
},
{
"math_id": 14,
"text": "R(t)"
},
{
"math_id": 15,
"text": "R_0 \\cdot S = 1."
},
{
"math_id": 16,
"text": "R_0=\\frac{N}{S} = \\frac{\\mu N \\operatorname E(T_L)}{\\mu N \\operatorname E[\\min(T_L,T_S)]} = \\frac{\\operatorname E(T_L)}{\\operatorname E[\\min(T_L, T_S)]},"
},
{
"math_id": 17,
"text": "\\frac{\\operatorname \\operatorname E(T_L)}{\\operatorname \\operatorname E(T_S)} = 1+\\frac{\\lambda}{\\mu} = \\frac{\\beta N }{v}."
},
{
"math_id": 18,
"text": "\n\\begin{align}\n& R_0 \\cdot (1-q) = 1, \\\\[6pt]\n& 1-q = \\frac {1} {R_0}, \\\\[6pt]\n& q = 1 - \\frac {1} {R_0}.\n\\end{align}\n"
},
{
"math_id": 19,
"text": " q_c = 1 - \\frac {1} {R_0}. "
},
{
"math_id": 20,
"text": " \\lim_{t\\to\\infty} S(t) = e^{-\\int_0^\\infty \\lambda(t) \\, dt} = 1-p."
},
{
"math_id": 21,
"text": " p = 1- e^{-\\int_0^\\infty \\beta I(t) \\, dt} = 1-e^{-R_0 p}."
},
{
"math_id": 22,
"text": "R_0"
},
{
"math_id": 23,
"text": " R_0 = \\frac{-\\ln(1-p)}{p}."
},
{
"math_id": 24,
"text": "R_q = R_0(1-q)"
},
{
"math_id": 25,
"text": "R_q = \\frac{L}{A_q},"
},
{
"math_id": 26,
"text": "A_q = \\frac{L}{R_q} = \\frac{L}{R_0(1-q)}."
},
{
"math_id": 27,
"text": "A_q = \\frac{L}{(L/A)(1-q)} = \\frac{AL}{L(1-q)} = \\frac {A} {1-q}."
}
] | https://en.wikipedia.org/wiki?curid=951614 |
95178 | Y′UV | Mathematical color model
Y′UV, also written YUV, is the color model found in the PAL analogue color TV standard. A color is described as a Y′ component (luma) and two chroma components U and V. The prime symbol (') denotes that the luma is calculated from gamma-corrected RGB input and that it is different from true luminance. Today, the term YUV is commonly used in the computer industry to describe colorspaces that are encoded using YCbCr.
In TV formats, color information (U and V) was added separately via a subcarrier so that a black-and-white receiver would still be able to receive and display a color picture transmission in the receiver's native black-and-white format, with no need for extra transmission bandwidth.
As for etymology, Y, Y′, U, and V are not abbreviations. The use of the letter Y for luminance can be traced back to the choice of XYZ primaries. This lends itself naturally to the usage of the same letter in luma (Y′), which approximates a perceptually uniform correlate of luminance. Likewise, U and V were chosen to differentiate the U and V axes from those in other spaces, such as the x and y chromaticity space. See the equations below or compare the historical development of the math.
Related color models.
The scope of the terms Y′UV, YUV, YCbCr, YPbPr, etc., is sometimes ambiguous and overlapping.
All these formats are based on a luma component and two chroma components describing the color difference from gray. In all formats other than Y′IQ, each chroma component is a scaled version of the difference between red/blue and Y; the main difference lies in the scaling factors used, which are determined by the color primaries and the intended numeric range (compare the use of "Umax" and "Vmax" in the SDTV conversion below with the fixed factor of 1/2 used in YCbCr). In Y′IQ, the UV plane is rotated by 33°.
History.
Y′UV was invented when engineers wanted color television in a black-and-white infrastructure. They needed a signal transmission method that was compatible with black-and-white (B&W) TV while being able to add color. The luma component already existed as the black and white signal; they added the UV signal to this as a solution.
The UV representation of chrominance was chosen over straight R and B signals because U and V are color difference signals. In other words, the U and V signals tell the television to shift the color of a certain spot without altering its brightness. Or the U and V signals tell the monitor to make one color brighter at the cost of the other and by how much it should be shifted. The higher (or the lower when negative) the U and V values are, the more saturated (colorful) the spot gets. The closer the U and V values get to zero, the lesser it shifts the color meaning that the red, green and blue lights will be more equally bright, producing a grayer spot. This is the benefit of using color difference signals, i.e. instead of telling how much red there is to a color, it tells by how much it is more red than green or blue.
In turn this meant that when the U and V signals are zero or absent, the set displays just a grayscale image. If R and B had been used instead, these would have non-zero values even in a B&W scene, requiring all three data-carrying signals. This was important in the early days of color television, because old black and white TV signals had no U and V signals present, meaning the color TV would display them as B&W TV out of the box. In addition, black and white receivers could take the Y′ signal and ignore the U and V color signals, making Y′UV backward-compatible with all existing black-and-white equipment, input and output. Had the color-TV standard not used color difference signals, a color TV would either make odd colors out of a B&W broadcast or need additional circuitry to translate the B&W signal to color.
It was necessary to assign a narrower bandwidth to the chrominance channel because there was no additional bandwidth available. If some of the luminance information arrived via the chrominance channel (as it would have if RB signals were used instead of differential UV signals), B&W resolution would have been compromised.
Conversion to/from RGB.
SDTV with BT.470.
Y′UV signals are typically created from RGB (red, green and blue) source. Weighted values of R, G, and B are summed to produce Y′, a measure of overall brightness or luminance. U and V are computed as scaled differences between Y′ and the B and R values.
PAL (NTSC used YIQ, which is further rotated) standard defines the following constants, derived from BT.470 System M primaries and white point using SMPTE RP 177 (same constants called matrix coefficients were used later in BT.601, although it uses 1/2 instead of 0.436 and 0.615):
formula_0
PAL signals in Y′UV are computed from R'G'B' (only SECAM IV used linear RGB) as follows:
formula_1
The resulting ranges of Y′, U, and V respectively are [0, 1], [−"U"max, "U"max], and [−"V"max, "V"max].
Inverting the above transformation converts Y′UV to RGB:
formula_2
Equivalently, substituting values for the constants and expressing them as matrices gives these formulas for BT.470 System M (PAL):
formula_3
For small values of Y' it is possible to get R, G, or B values that are negative so in practice we clamp the RGB results to the interval [0,1] or more correctly clamp inside the Y'CbCr.
In BT.470 a mistake was made because 0.115 was used instead of 0.114 for blue and 0.493 was the result instead of 0.492. In practice that did not affect the decoders because the approximation 1/2.03 was used.
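As an illustration (not part of the original text), the BT.470 matrices above translate directly into code; the inputs are assumed to be gamma-corrected R′G′B′ values in [0, 1], and the inverse transform clamps its results as discussed above.

def rgb_to_yuv(r, g, b):
    """Gamma-corrected R'G'B' in [0,1] -> (Y', U, V) using the BT.470 (PAL) matrix."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Inverse transform, with the results clamped to the valid [0,1] range."""
    r = y + 1.13983 * v
    g = y - 0.39465 * u - 0.58060 * v
    b = y + 2.03211 * u
    clamp = lambda x: min(max(x, 0.0), 1.0)
    return clamp(r), clamp(g), clamp(b)

print(rgb_to_yuv(1.0, 0.0, 0.0))                  # pure red -> Y'~0.299, U~-0.147, V~0.615
print(yuv_to_rgb(*rgb_to_yuv(0.25, 0.5, 0.75)))   # round-trips back (approximately)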
HDTV with BT.709.
For HDTV the ATSC decided to change the basic values for WR and WB compared to the previously selected values in the SDTV system. For HDTV these values are provided by Rec. 709. This decision further impacted on the matrix for the Y′UV↔RGB conversion so that its member values are also slightly different. As a result, with SDTV and HDTV there are generally two distinct Y′UV representations possible for any RGB triple: a SDTV-Y′UV and an HDTV-Y′UV one. This means in detail that when directly converting between SDTV and HDTV, the luma (Y′) information is roughly the same but the representation of the chroma (U & V) channel information needs conversion. Still in coverage of the CIE 1931 color space the Rec. 709 color space is almost identical to Rec. 601 and covers 35.9%. In contrast to this UHDTV with Rec. 2020 covers a much larger area and thus its very own matrix was derived for YCbCr (no YUV/Y′UV, since decommissioning of analog TV).
BT.709 defines these weight values:
formula_4
The "U"max and "V"max values are from above.
The conversion matrices for the analog form of BT.709 are given below, but there is no evidence they were ever used in practice (only the form actually described in BT.709, the YCbCr form, is used):
formula_5
Luminance/chrominance systems in general.
The primary advantage of luma/chroma systems such as Y′UV, and its relatives Y′IQ and YDbDr, is that they remain compatible with black and white analog television (largely due to the work of Georges Valensi). The Y′ channel saves all the data recorded by black and white cameras, so it produces a signal suitable for reception on old monochrome displays. In this case, the U and V are simply discarded. If displaying color, all three channels are used, and the original RGB information can be decoded.
Another advantage of Y′UV is that some of the information can be discarded in order to reduce bandwidth. The human eye has fairly little spatial sensitivity to color: the accuracy of the brightness information of the luminance channel has far more impact on the image detail discerned than that of the other two. Understanding this human shortcoming, standards such as NTSC and PAL reduce the bandwidth of the chrominance channels considerably. (Bandwidth is in the temporal domain, but this translates into the spatial domain as the image is scanned out.)
Therefore, the resulting U and V signals can be substantially "compressed". In the NTSC (Y′IQ) and PAL systems, the chrominance signals had significantly narrower bandwidth than that for the luminance. Early versions of NTSC rapidly alternated between particular colors in identical image areas to make them appear adding up to each other to the human eye, while all modern analogue and even most digital video standards use chroma subsampling by recording a picture's color information at reduced resolution. Only half the horizontal resolution compared to the brightness information is kept (termed 4:2:2 chroma subsampling), and often the vertical resolution is also halved (giving 4:2:0). The 4:x:x standard was adopted due to the very earliest color NTSC standard which used a chroma subsampling of 4:1:1 (where the horizontal color resolution is quartered while the vertical is full resolution) so that the picture carried only a quarter as much color resolution compared to brightness resolution. Today, only high-end equipment processing uncompressed signals uses a chroma subsampling of 4:4:4 with identical resolution for both brightness and color information.
The I and Q axes were chosen according to bandwidth needed by human vision, one axis being that requiring the most bandwidth, and the other (fortuitously at 90 degrees) the minimum. However, true I and Q demodulation was relatively more complex, requiring two analog delay lines, and NTSC receivers rarely used it.
However, this color modulation strategy is lossy, particularly because of crosstalk from the luma to the chroma-carrying wire, and vice versa, in analogue equipment (including RCA connectors to transfer a digital signal, as all they carry is analogue composite video, which is either YUV, YIQ, or even CVBS). Furthermore, NTSC and PAL encoded color signals in a manner that causes high bandwidth chroma and luma signals to mix with each other in a bid to maintain backward compatibility with black and white television equipment, which results in dot crawl and cross color artifacts. When the NTSC standard was created in the 1950s, this was not a real concern since the quality of the image was limited by the monitor equipment, not the limited-bandwidth signal being received. However today's modern television is capable of displaying more information than is contained in these lossy signals. To keep pace with the abilities of new display technologies, attempts were made since the late 1970s to preserve more of the Y′UV signal while transferring images, such as SCART (1977) and S-Video (1987) connectors.
Instead of Y′UV, Y′CbCr was used as the standard format for (digital) common video compression algorithms such as MPEG-2. Digital television and DVDs preserve their compressed video streams in the MPEG-2 format, which uses a fully defined Y′CbCr color space, although retaining the established process of chroma subsampling. Cinepak, a video codec from 1991, used a modified YUV 4:2:0 colorspace. The professional CCIR 601 digital video format also uses Y′CbCr at the common chroma subsampling rate of 4:2:2, primarily for compatibility with previous analog video standards. This stream can be easily mixed into any output format needed.
Y′UV is not an absolute color space. It is a way of encoding RGB information, and the actual color displayed depends on the actual RGB colorants used to display the signal. Therefore, a value expressed as Y′UV is only predictable if standard RGB colorants are used (i.e. a fixed set of primary chromaticities, or particular set of red, green, and blue).
Furthermore, the range of colors and brightnesses (known as the color gamut and color volume) of RGB (whether it be BT.601 or Rec. 709) is far smaller than the range of colors and brightnesses allowed by Y′UV. This can be very important when converting from Y′UV (or Y′CbCr) to RGB, since the formulas above can produce "invalid" RGB values – i.e., values below 0% or very far above 100% of the range (e.g., outside the standard 16–235 luma range (and 16–240 chroma range) for TVs and HD content, or outside 0–255 for standard definition on PCs). Unless these values are dealt with they will usually be "clipped" (i.e., limited) to the valid range of the channel affected. This changes the hue of the color, which is very undesirable, so it is therefore often considered better to desaturate the offending colors such that they fall within the RGB gamut.
Likewise, when RGB at a given bit depth is converted to YUV at the same bit depth, several RGB colors can become the same Y′UV color, resulting in information loss.
Relation with Y′CbCr.
Y′UV is often used as a term for YCbCr. However, while related, they are different formats with different scale factors; additionally, unlike YCbCr, Y′UV has historically used two different scale factors for the U component vs. the V component. An unscaled matrix is used in Photo CD's PhotoYCC. U and V are bipolar signals which can be positive or negative, and are zero for grays, whereas YCbCr usually scales all channels to either the 16–235 range or the 0–255 range, which makes Cb and Cr unsigned quantities which are 128 for grays.
Nevertheless, the relationship between them in the standard case is simple. In particular, the Y' channels of both are linearly related to each other, both Cb and U are related linearly to (B-Y), and both Cr and V are related linearly to (R-Y).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n W_R &= 0.299, \\\\\n W_G &= 1 - W_R - W_B = 0.587, \\\\\n W_B &= 0.114, \\\\\n U_\\text{max} &= 0.436, \\\\\n V_\\text{max} &= 0.615.\n\\end{align}"
},
{
"math_id": 1,
"text": "\\begin{align}\n Y' &= W_R R' + W_G G' + W_B B' = 0.299 R' + 0.587 G' + 0.114 B', \\\\\n U &= U_\\text{max} \\frac{B' - Y'}{1 - W_B} \\approx 0.492(B' - Y'), \\\\\n V &= V_\\text{max} \\frac{R' - Y'}{1 - W_R} \\approx 0.877(R' - Y').\n\\end{align}"
},
{
"math_id": 2,
"text": "\\begin{align}\n R' &= Y' + V \\frac{1 - W_R}{V_\\text{max}} = Y' + \\frac{V}{0.877} = Y' + 1.14 V,\\\\\n G' &= Y' - U \\frac{W_B (1 - W_B)}{U_\\text{max} W_G} - V \\frac{W_R (1 - W_R)}{V_\\text{max} W_G} \\\\\n &= Y' - \\frac{0.232 U}{0.587} - \\frac{0.341 V}{0.587} = Y' - 0.395 U - 0.581 V, \\\\\n B' &= Y' + U \\frac{1 - W_B}{U_\\text{max}} = Y' + \\frac{U}{0.492} = Y' + 2.033 U.\n\\end{align}"
},
{
"math_id": 3,
"text": "\\begin{align}\n \\begin{bmatrix} Y' \\\\ U \\\\ V \\end{bmatrix}\n &=\n \\begin{bmatrix}\n 0.299 & 0.587 & 0.114 \\\\\n -0.14713 & -0.28886 & 0.436 \\\\\n 0.615 & -0.51499 & -0.10001\n \\end{bmatrix}\n \\begin{bmatrix} R' \\\\ G' \\\\ B' \\end{bmatrix}, \\\\\n\n \\begin{bmatrix} R' \\\\ G' \\\\ B' \\end{bmatrix}\n &=\n \\begin{bmatrix}\n 1 & 0 & 1.13983 \\\\\n 1 & -0.39465 & -0.58060 \\\\\n 1 & 2.03211 & 0\n \\end{bmatrix}\n \\begin{bmatrix} Y' \\\\ U \\\\ V \\end{bmatrix}.\n\\end{align}"
},
{
"math_id": 4,
"text": "\\begin{align}\n W_R &= 0.2126, \\\\\n W_G &= 1 - W_R - W_B = 0.7152, \\\\\n W_B &= 0.0722 \\\\\n\\end{align}"
},
{
"math_id": 5,
"text": "\\begin{align}\n \\begin{bmatrix} Y' \\\\ U \\\\ V \\end{bmatrix}\n &=\n \\begin{bmatrix}\n 0.2126 & 0.7152 & 0.0722 \\\\\n -0.09991 & -0.33609 & 0.436 \\\\\n 0.615 & -0.55861 & -0.05639\n \\end{bmatrix}\n \\begin{bmatrix} R' \\\\ G' \\\\ B' \\end{bmatrix} \\\\\n\n \\begin{bmatrix} R' \\\\ G' \\\\ B' \\end{bmatrix}\n &=\n \\begin{bmatrix}\n 1 & 0 & 1.28033 \\\\\n 1 & -0.21482 & -0.38059 \\\\\n 1 & 2.12798 & 0\n \\end{bmatrix}\n \\begin{bmatrix} Y' \\\\ U \\\\ V \\end{bmatrix}\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=95178 |
9518854 | Microscopic traffic flow model | Microscopic traffic flow models are a class of scientific models of vehicular traffic dynamics.
In contrast to macroscopic models, microscopic traffic flow models simulate single vehicle-driver units, so the dynamic variables of the models represent microscopic properties like the position and velocity of single vehicles.
Car-following models.
Also known as "time-continuous models", all car-following models have in common that they are defined by ordinary differential equations describing the complete dynamics of the vehicles' positions formula_0 and velocities formula_1. It is assumed that the input stimuli of the drivers are restricted to their own velocity formula_1, the net distance (bumper-to-bumper distance) formula_2 to the leading vehicle formula_3 (where formula_4 denotes the vehicle length), and the velocity formula_5 of the leading vehicle. The equation of motion of each vehicle is characterized by an acceleration function that depends on those input stimuli:
formula_6
In general, the driving behavior of a single driver-vehicle unit formula_7 might not merely depend on the immediate leader formula_3 but on the formula_8 vehicles in front. The equation of motion in this more generalized form reads:
formula_9
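As an illustration of how such an equation of motion can be integrated numerically, the following Python sketch uses a simple Euler scheme on a circular road. The Intelligent Driver Model is used here purely as one concrete, widely used choice of acceleration function; it is not prescribed by the general form above, and all parameter values are illustrative assumptions.

```python
import numpy as np

def idm_acceleration(v, s, v_lead, v0=30.0, T=1.5, a_max=1.0, b=1.5, s0=2.0, delta=4):
    """Intelligent Driver Model acceleration a = F(v, s, v_lead) (illustrative parameters)."""
    dv = v - v_lead                                              # approach rate to the leader
    s_star = s0 + np.maximum(0.0, v * T + v * dv / (2 * np.sqrt(a_max * b)))
    return a_max * (1 - (v / v0) ** delta - (s_star / s) ** 2)

def simulate_ring(n_veh=20, road_length=400.0, veh_length=5.0, dt=0.1, steps=2000):
    """Euler integration of positions x and velocities v of n_veh vehicles on a ring road."""
    x = np.linspace(0.0, road_length, n_veh, endpoint=False)     # initial positions (m)
    v = np.full(n_veh, 10.0)                                     # initial velocities (m/s)
    for _ in range(steps):
        lead = np.roll(x, -1)                                    # position of each vehicle's leader
        s = (lead - x) % road_length - veh_length                # net (bumper-to-bumper) gap
        a = idm_acceleration(v, s, np.roll(v, -1))
        v = np.maximum(v + a * dt, 0.0)
        x = (x + v * dt) % road_length
    return x, v

if __name__ == "__main__":
    x, v = simulate_ring()
    print("mean speed after 200 s: %.2f m/s" % v.mean())
```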
Cellular automaton models.
Cellular automaton (CA) models use integer variables to describe the dynamical properties of the system. The road is divided into sections of a certain length formula_10 and the time is discretized to steps of formula_11. Each road section can either be occupied by a vehicle or empty, and the dynamics are given by update rules of the form:
formula_12
formula_13
(the simulation time formula_14 is measured in units of formula_11 and the vehicle positions formula_0 in units of formula_10).
The time scale is typically given by the reaction time of a human driver, formula_15. With formula_11 fixed, the length of the road sections determines the granularity of the model. At a complete standstill, the average road length occupied by one vehicle is approximately 7.5 meters. Setting formula_10 to this value leads to a model where one vehicle always occupies exactly one section of the road and a velocity of 5 corresponds to formula_16, which is then set to be the maximum velocity a driver wants to drive at. However, in such a model, the smallest possible acceleration would be formula_17 which is unrealistic. Therefore, many modern CA models use a finer spatial discretization, for example formula_18, leading to a smallest possible acceleration of formula_19.
Although cellular automaton models lack the accuracy of the time-continuous car-following models, they still have the ability to reproduce a wide range of traffic phenomena. Due to the simplicity of the models, they are numerically very efficient and can be used to simulate large road networks in real-time or even faster.
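As one concrete and widely used instance of such update rules, the sketch below implements the Nagel–Schreckenberg model in Python (this specific model is not described above; the maximum velocity, slowdown probability and road size are illustrative assumptions). Its four parallel update steps — acceleration, braking to the available gap, random slowdown and movement — fit the generic form formula_12, formula_13.

```python
import numpy as np

def nasch_step(pos, vel, road_len, rng, v_max=5, p_slow=0.3):
    """One parallel update of the Nagel-Schreckenberg cellular automaton on a ring."""
    gap = (np.roll(pos, -1) - pos) % road_len - 1      # empty cells ahead of each car
    vel = np.minimum(vel + 1, v_max)                   # 1. acceleration
    vel = np.minimum(vel, gap)                         # 2. braking (move at most `gap` cells)
    slow = rng.random(len(vel)) < p_slow
    vel = np.where(slow, np.maximum(vel - 1, 0), vel)  # 3. random slowdown
    pos = (pos + vel) % road_len                       # 4. movement
    return pos, vel

rng = np.random.default_rng(0)
road_len, n_cars = 100, 30
pos = np.sort(rng.choice(road_len, size=n_cars, replace=False))   # one car per cell
vel = np.zeros(n_cars, dtype=int)
mean_speed = []
for _ in range(500):
    pos, vel = nasch_step(pos, vel, road_len, rng)
    mean_speed.append(vel.mean())
print("average speed (cells per time step):", np.mean(mean_speed[100:]))  # after warm-up
```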
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x_\\alpha"
},
{
"math_id": 1,
"text": "v_\\alpha"
},
{
"math_id": 2,
"text": "s_\\alpha = x_{\\alpha-1} - x_\\alpha - \\ell_{\\alpha-1}"
},
{
"math_id": 3,
"text": "\\alpha-1"
},
{
"math_id": 4,
"text": "\\ell_{\\alpha-1}"
},
{
"math_id": 5,
"text": "v_{\\alpha-1}"
},
{
"math_id": 6,
"text": "\\ddot{x}_\\alpha(t) = \\dot{v}_\\alpha(t) = F(v_\\alpha(t), s_\\alpha(t), v_{\\alpha-1}(t), s_{\\alpha-1}(t))"
},
{
"math_id": 7,
"text": "\\alpha"
},
{
"math_id": 8,
"text": "n_a"
},
{
"math_id": 9,
"text": "\\dot{v}_\\alpha(t) = f(x_\\alpha(t), v_\\alpha(t), x_{\\alpha-1}(t), v_{\\alpha-1}(t), \\ldots, x_{\\alpha-n_a}(t), v_{\\alpha-n_a}(t))"
},
{
"math_id": 10,
"text": "\\Delta x"
},
{
"math_id": 11,
"text": "\\Delta t"
},
{
"math_id": 12,
"text": "v_\\alpha^{t+1} = f(s_\\alpha^t, v_\\alpha^t, v_{\\alpha-1}^t, \\ldots)"
},
{
"math_id": 13,
"text": "x_\\alpha^{t+1} = x_\\alpha^t + v_\\alpha^{t+1}\\Delta t"
},
{
"math_id": 14,
"text": "t"
},
{
"math_id": 15,
"text": "\\Delta t = 1 \\text{s}"
},
{
"math_id": 16,
"text": "5 \\Delta x/\\Delta t = 135 \\text{km/h}"
},
{
"math_id": 17,
"text": "\\Delta x/(\\Delta t)^2 = 7.5 \\text{m}/\\text{s}^2"
},
{
"math_id": 18,
"text": "\\Delta x = 1.5 \\text{m}"
},
{
"math_id": 19,
"text": "1.5 \\text{m}/\\text{s}^2"
}
] | https://en.wikipedia.org/wiki?curid=9518854 |
9519121 | Quadratically constrained quadratic program | In mathematical optimization, a quadratically constrained quadratic program (QCQP) is an optimization problem in which both the objective function and the constraints are quadratic functions. It has the form
formula_0
where "P"0, ..., "P""m" are "n"-by-"n" matrices and "x" ∈ R"n" is the optimization variable.
If "P"0, ..., "P""m" are all positive semidefinite, then the problem is convex. If these matrices are neither positive nor negative semidefinite, the problem is non-convex. If "P"1, ... ,"P""m" are all zero, then the constraints are in fact linear and the problem is a quadratic program.
Hardness.
Solving the general case is an NP-hard problem. To see this, note that the two constraints "x"1("x"1 − 1) ≤ 0 and "x"1("x"1 − 1) ≥ 0 are equivalent to the constraint "x"1("x"1 − 1) = 0, which is in turn equivalent to the constraint "x"1 ∈ {0, 1}. Hence, any 0–1 integer program (in which all variables have to be either 0 or 1) can be formulated as a quadratically constrained quadratic program. Since 0–1 integer programming is NP-hard in general, QCQP is also NP-hard.
Relaxation.
There are two main relaxations of QCQP: using semidefinite programming (SDP), and using the reformulation-linearization technique (RLT). For some classes of QCQP problems (precisely, QCQPs with zero diagonal elements in the data matrices), second-order cone programming (SOCP) and linear programming (LP) relaxations providing the same objective value as the SDP relaxation are available.
Nonconvex QCQPs with non-positive off-diagonal elements can be exactly solved by the SDP or SOCP relaxations, and there are polynomial-time-checkable sufficient conditions for SDP relaxations of general QCQPs to be exact. Moreover, it was shown that a class of random general QCQPs has exact semidefinite relaxations with high probability as long as the number of constraints grows no faster than a fixed polynomial in the number of variables.
Semidefinite programming.
When "P"0, ..., "P""m" are all positive-definite matrices, the problem is convex and can be readily solved using interior point methods, as done with semidefinite programming.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\begin{align}\n& \\text{minimize} && \\tfrac12 x^\\mathrm{T} P_0 x + q_0^\\mathrm{T} x \\\\\n& \\text{subject to} && \\tfrac12 x^\\mathrm{T} P_i x + q_i^\\mathrm{T} x + r_i \\leq 0 \\quad \\text{for } i = 1,\\dots,m , \\\\\n&&& Ax = b, \n\\end{align} "
}
] | https://en.wikipedia.org/wiki?curid=9519121 |
9519371 | Ratio distribution | Probability distribution
A ratio distribution (also known as a quotient distribution) is a probability distribution constructed as the distribution of the ratio of random variables having two other known distributions.
Given two (usually independent) random variables "X" and "Y", the distribution of the random variable "Z" that is formed as the ratio "Z" = "X"/"Y" is a "ratio distribution".
An example is the Cauchy distribution (also called the "normal ratio distribution"), which comes about as the ratio of two normally distributed variables with zero mean.
Two other distributions often used in test-statistics are also ratio distributions:
the "t"-distribution arises from a Gaussian random variable divided by an independent chi-distributed random variable,
while the "F"-distribution originates from the ratio of two independent chi-squared distributed random variables.
More general ratio distributions have been considered in the literature.
Often the ratio distributions are heavy-tailed, and it may be difficult to work with such distributions and develop an associated statistical test.
A method based on the median has been suggested as a "work-around".
Algebra of random variables.
The ratio is one type of algebra for random variables:
Related to the ratio distribution are the product distribution, sum distribution and difference distribution. More generally, one may talk of combinations of sums, differences, products and ratios.
Many of these distributions are described in Melvin D. Springer's book from 1979 "The Algebra of Random Variables".
The algebraic rules known with ordinary numbers do not apply for the algebra of random variables.
For example, if a product is "C = AB" and a ratio is "D=C/A" it does not necessarily mean that the distributions of "D" and "B" are the same.
Indeed, a peculiar effect is seen for the Cauchy distribution: The product and the ratio of two independent Cauchy distributions (with the same scale parameter and the location parameter set to zero) will give the same distribution.
This becomes evident when regarding the Cauchy distribution as itself a ratio distribution of two Gaussian distributions of zero means: Consider two Cauchy random variables, formula_0 and formula_1 each constructed from two Gaussian distributions formula_2 and formula_3 then
formula_4
where formula_5. The first term is the ratio of two Cauchy distributions while the last term is the product of two such distributions.
Derivation.
A way of deriving the ratio distribution of formula_6 from the joint distribution of the two other random variables "X , Y" , with joint pdf formula_7, is by integration of the following form
formula_8
If the two variables are independent then formula_9 and this becomes
formula_10
This may not be straightforward. By way of example take the classical problem of the ratio of two standard Gaussian samples. The joint pdf is
formula_11
Defining formula_6 we have
formula_12
Using the known definite integral formula_13 we get
formula_14
which is the Cauchy distribution, or Student's "t" distribution with "n" = 1
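This result is easily checked by simulation; the sketch below (assuming NumPy and SciPy are available) compares the empirical distribution of the ratio of two independent standard normal samples with the standard Cauchy distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000
z = rng.standard_normal(n) / rng.standard_normal(n)    # ratio of two independent N(0,1) samples

grid = np.linspace(-5, 5, 21)
emp_cdf = np.array([(z <= g).mean() for g in grid])
print(np.max(np.abs(emp_cdf - stats.cauchy.cdf(grid))))  # of order 1e-3 for 10^6 samples
```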
The Mellin transform has also been suggested for derivation of ratio distributions.
In the case of positive independent variables, proceed as follows. The diagram shows a separable bivariate distribution formula_15 which has support in the positive quadrant formula_16 and we wish to find the pdf of the ratio formula_17. The hatched volume above the line formula_18 represents the cumulative distribution of the function formula_19 multiplied with the logical function formula_20. The density is first integrated in horizontal strips; the horizontal strip at height "y" extends from "x" = 0 to "x = Ry" and has incremental probability formula_21.<br>
Secondly, integrating the horizontal strips upward over all "y" yields the volume of probability above the line
formula_22
Finally, differentiate formula_23 with respect to formula_24 to get the pdf formula_25.
formula_26
Move the differentiation inside the integral:
formula_27
and since
formula_28
then
formula_29
As an example, find the pdf of the ratio "R" when
formula_30
We have
formula_31
thus
formula_32
Differentiation wrt. "R" yields the pdf of "R"
formula_33
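A quick Monte Carlo check of this example, assuming NumPy and using illustrative rate parameters, compares the empirical distribution of "R" with the cumulative distribution derived above.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 2.0, 3.0                          # illustrative rate parameters
x = rng.exponential(1 / alpha, 500_000)         # f_x(x) = alpha * exp(-alpha * x)
y = rng.exponential(1 / beta, 500_000)
r = x / y

grid = np.linspace(0.1, 10.0, 20)
emp_cdf = np.array([(r <= g).mean() for g in grid])
theory_cdf = grid / (beta / alpha + grid)       # F_R(R) = R / (beta/alpha + R)
print(np.max(np.abs(emp_cdf - theory_cdf)))     # of order 1e-3
```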
Moments of random ratios.
From Mellin transform theory, for distributions existing only on the positive half-line formula_34, we have the product identity formula_35 provided formula_36 are independent. For the case of a ratio of samples like formula_37, in order to make use of this identity it is necessary to use moments of the inverse distribution. Set formula_38 such that formula_39.
Thus, if the moments of formula_40 and formula_41 can be determined separately, then the moments of formula_42 can be found. The moments of formula_43 are determined from the inverse pdf of formula_44 , often a tractable exercise. At simplest, formula_45.
To illustrate, let formula_46 be sampled from a standard Gamma distribution
formula_47 whose formula_48-th moment is formula_49.
formula_50 is sampled from an inverse Gamma distribution with parameter formula_51 and has pdf formula_52. The moments of this pdf are
formula_53
Multiplying the corresponding moments gives
formula_54
Independently, it is known that the ratio of the two Gamma samples formula_55 follows the Beta Prime distribution:
formula_56 whose moments are formula_57
Substituting formula_58 we have
formula_59
which is consistent with the product of moments above.
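The product-of-moments identity can be verified numerically; the sketch below (assuming NumPy and SciPy, with illustrative shape parameters and "p") compares a Monte Carlo estimate of the "p"-th moment of the ratio of two standard Gamma samples with the closed form above.

```python
import numpy as np
from scipy.special import gamma as G

rng = np.random.default_rng(2)
alpha, beta, p = 3.0, 5.0, 1.5                  # illustrative values; requires p < beta
x = rng.gamma(alpha, 1.0, 2_000_000)            # standard Gamma(alpha) samples
y = rng.gamma(beta, 1.0, 2_000_000)             # standard Gamma(beta) samples

mc = np.mean((x / y) ** p)
exact = G(alpha + p) * G(beta - p) / (G(alpha) * G(beta))
print(mc, exact)                                # the two estimates agree closely (MC error ~1e-3)
```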
Means and variances of random ratios.
In the Product distribution section, and derived from Mellin transform theory (see section above), it is found that the mean of a product of independent variables is equal to the product of their means. In the case of ratios, we have
formula_60
which, in terms of probability distributions, is equivalent to
formula_61
Note that formula_62 i.e., formula_63
The variance of a ratio of independent variables is
formula_64
Normal ratio distributions.
Uncorrelated central normal ratio.
When "X" and "Y" are independent and have a Gaussian distribution with zero mean, the form of their ratio distribution is a Cauchy distribution.
This can be derived by setting formula_65 then showing that formula_66 has circular symmetry. For a bivariate uncorrelated Gaussian distribution we have
formula_67
If formula_68 is a function only of "r" then formula_66 is uniformly distributed on formula_69 with density formula_70 so the problem reduces to finding the probability distribution of "Z" under the mapping
formula_65
We have, by conservation of probability
formula_71
and since formula_72
formula_73
and setting formula_74 we get
formula_75
A factor of 2 is missing here: two values of formula_66 spaced by formula_76 map onto the same value of "z", so the density is doubled, and the final result is
formula_77
When either of the two Normal distributions is non-central then the result for the distribution of the ratio is much more complicated and is given below in the succinct form presented by David Hinkley. The trigonometric method for a ratio does however extend to radial distributions like bivariate normals or a bivariate Student "t" in which the density depends only on radius formula_78. It does not extend to the ratio of two independent Student "t" distributions which give the Cauchy ratio shown in a section below for one degree of freedom.
Uncorrelated noncentral normal ratio.
In the absence of correlation formula_79, the probability density function of the ratio "Z" = "X"/"Y" of the two normal variables "X" = "N"("μX", "σX"2) and "Y" = "N"("μY", "σY"2) is given exactly by the following expression, derived in several sources:
formula_80
where
formula_81
formula_82
formula_83
formula_84
and formula_85 is the normal cumulative distribution function:
formula_86
formula_91
formula_92
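The exact density above is straightforward to evaluate numerically; the following sketch (assuming NumPy and SciPy) codes it directly in terms of "a"("z"), "b"("z"), "c" and "d"("z") and compares it with a histogram of simulated ratios for illustrative parameter values.

```python
import numpy as np
from scipy.stats import norm

def ratio_pdf(z, mux, muy, sx, sy):
    """Exact pdf of Z = X/Y for independent X ~ N(mux, sx^2), Y ~ N(muy, sy^2),
    written in the a(z), b(z), c, d(z) notation used above."""
    a = np.sqrt(z**2 / sx**2 + 1.0 / sy**2)
    b = mux * z / sx**2 + muy / sy**2
    c = mux**2 / sx**2 + muy**2 / sy**2
    d = np.exp((b**2 - c * a**2) / (2 * a**2))
    t1 = b * d / a**3 / (np.sqrt(2 * np.pi) * sx * sy) * (norm.cdf(b / a) - norm.cdf(-b / a))
    t2 = np.exp(-c / 2) / (a**2 * np.pi * sx * sy)
    return t1 + t2

# Monte Carlo sanity check with illustrative parameter values.
rng = np.random.default_rng(3)
mux, muy, sx, sy = 1.0, 4.0, 1.0, 0.5
z = rng.normal(mux, sx, 1_000_000) / rng.normal(muy, sy, 1_000_000)
hist, edges = np.histogram(z, bins=200, range=(-0.5, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - ratio_pdf(centers, mux, muy, sx, sy))))  # small vs. the peak (~1.6)
```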
Correlated central normal ratio.
The above expression becomes more complicated when the variables "X" and "Y" are correlated. If formula_93 but formula_94 and formula_95 the more general Cauchy distribution is obtained
formula_96
where "ρ" is the correlation coefficient between "X" and "Y" and
formula_97
formula_98
The complex distribution has also been expressed with Kummer's confluent hypergeometric function or the Hermite function.
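This Cauchy form can be checked by simulation; in the sketch below (assuming NumPy and SciPy, with illustrative values of "ρ", "σx" and "σy") correlated zero-mean normal pairs are generated and the empirical distribution of their ratio is compared with a Cauchy distribution with location formula_97 and scale formula_98.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
rho, sx, sy, n = 0.6, 2.0, 1.0, 1_000_000         # illustrative values
n1, n2 = rng.standard_normal(n), rng.standard_normal(n)
x = sx * n1                                        # X ~ N(0, sx^2)
y = sy * (rho * n1 + np.sqrt(1 - rho**2) * n2)     # Y ~ N(0, sy^2), corr(X, Y) = rho
z = x / y

loc = rho * sx / sy                                # alpha
scale = (sx / sy) * np.sqrt(1 - rho**2)            # beta
grid = np.linspace(-10, 10, 41)
emp_cdf = np.array([(z <= g).mean() for g in grid])
print(np.max(np.abs(emp_cdf - stats.cauchy.cdf(grid, loc=loc, scale=scale))))  # ~1e-3
```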
Correlated noncentral normal ratio.
This was shown in Springer 1979 problem 4.28.
A transformation to the log domain was suggested by Katz(1978) (see binomial section below). Let the ratio be
formula_99.
Take logs to get
formula_100
Since formula_101 then asymptotically
formula_102
Alternatively, Geary (1930) suggested that
formula_103
has approximately a standard Gaussian distribution.
This transformation has been called the "Geary–Hinkley transformation"; the approximation is good if "Y" is unlikely to assume negative values, basically formula_104.
Exact correlated noncentral normal ratio.
This is developed by Dale (Springer 1979 problem 4.28) and Hinkley 1969. Geary showed how the correlated ratio formula_105 could be transformed into a near-Gaussian form and developed an approximation for formula_106 dependent on the probability of negative denominator values formula_107 being vanishingly small. Fieller's later correlated ratio analysis is exact but care is needed when combining modern math packages with verbal conditions in the older literature. Pham-Ghia has exhaustively discussed these methods. Hinkley's correlated results are exact but it is shown below that the correlated ratio condition can also be transformed into an uncorrelated one so only the simplified Hinkley equations above are required, not the full correlated ratio version.
Let the ratio be:
formula_108
in which formula_109 are zero-mean correlated normal variables with variances formula_110 and formula_111 have means formula_112
Write formula_113 such that formula_114 become uncorrelated and formula_115 has standard deviation
formula_116
The ratio:
formula_117
is invariant under this transformation and retains the same pdf.
The formula_118 term in the numerator appears to be made separable by expanding:
formula_119
to get
formula_120
in which formula_121 and "z" has now become a ratio of uncorrelated non-central normal samples with an invariant "z"-offset (this is not formally proven, though it appears to have been used by Geary).
Finally, to be explicit, the pdf of the ratio formula_105 for correlated variables is found by inputting the modified parameters formula_122 and formula_123 into the Hinkley equation above which returns the pdf for the correlated ratio with a constant offset formula_124 on formula_105.
The figures above show an example of a positively correlated ratio with formula_125 in which the shaded wedges represent the increment of area selected by given ratio formula_126 which accumulates probability where they overlap the distribution. The theoretical distribution, derived from the equations under discussion combined with Hinkley's equations, is highly consistent with a simulation result using 5,000 samples. In the top figure it is clear that for a ratio formula_127 the wedge has almost bypassed the main distribution mass altogether and this explains the local minimum in the theoretical pdf formula_128. Conversely as formula_129 moves either toward or away from one the wedge spans more of the central mass, accumulating a higher probability.
Complex normal ratio.
The ratio of correlated zero-mean circularly symmetric complex normal distributed variables was determined by Baxley et al. and has since been extended to the nonzero-mean and nonsymmetric case. In the correlated zero-mean case, the joint distribution of "x", "y" is
formula_130
where
formula_131
formula_132 is an Hermitian transpose and
formula_133
The PDF of formula_6 is found to be
formula_134
In the usual event that formula_135 we get
formula_136
Further closed-form results for the CDF are also given.
The graph shows the pdf of the ratio of two complex normal variables with a correlation coefficient of formula_137. The pdf peak occurs at roughly the complex conjugate of a scaled down formula_138.
Ratio of log-normal.
The ratio of independent or correlated log-normals is log-normal. This follows, because if formula_139 and formula_140 are log-normally distributed, then formula_141 and formula_142 are normally distributed. If they are independent or their logarithms follow a bivariate normal distribution, then the logarithm of their ratio is the difference of independent or correlated normally distributed random variables, which is normally distributed.
This is important for many applications requiring the ratio of random variables that must be positive, where joint distribution of formula_139 and formula_140 is adequately approximated by a log-normal. This is a common result of the multiplicative central limit theorem, also known as Gibrat's law, when formula_143 is the result of an accumulation of many small percentage changes and must be positive and approximately log-normally distributed.
Uniform ratio distribution.
With two independent random variables following a uniform distribution, e.g.,
formula_144
the ratio distribution becomes
formula_145
Cauchy ratio distribution.
If two independent random variables, "X" and "Y" each follow a Cauchy distribution with median equal to zero and shape factor formula_146
formula_147
then the ratio distribution for the random variable formula_148 is
formula_149
This distribution does not depend on formula_146 and the result stated by Springer (p158 Question 4.6) is not correct.
The ratio distribution is similar to but not the same as the product distribution of the random variable formula_150:
formula_151
More generally, if two independent random variables "X" and "Y" each follow a Cauchy distribution with median equal to zero and shape factor formula_146 and formula_152 respectively, then:
The result for the ratio distribution can be obtained from the product distribution by replacing formula_152 with formula_156
Ratio of standard normal to standard uniform.
If "X" has a standard normal distribution and "Y" has a standard uniform distribution, then "Z" = "X" / "Y" has a distribution known as the "slash distribution", with probability density function
formula_157
where φ("z") is the probability density function of the standard normal distribution.
Chi-squared, Gamma, Beta distributions.
Let "G" be a normal(0,1) distribution, "Y" and "Z" be chi-squared distributions with "m" and "n" degrees of freedom respectively, all independent, with formula_158. Then
formula_159 the Student's "t" distribution
formula_160 i.e. Fisher's F-test distribution
formula_161 the beta distribution
formula_162 the "standard" beta prime distribution
If formula_163, a noncentral chi-squared distribution, and formula_164 and formula_165 is independent of formula_166 then
formula_167, a noncentral F-distribution.
formula_168
defines formula_169, Fisher's F density distribution, the PDF of the ratio of two Chi-squares with "m, n" degrees of freedom.
The CDF of the Fisher density, found in F-tables is defined in the beta prime distribution article.
If we enter an "F"-test table with "m" = 3, "n" = 4 and 5% probability in the right tail, the critical value is found to be 6.59. This coincides with the integral
formula_170
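The same numbers can be reproduced with standard distribution functions; the sketch below (assuming SciPy) recovers the critical value from the "F" distribution and the 5% tail probability from the scaled beta prime form given above.

```python
from scipy import stats

m, n = 3, 4
crit = stats.f.ppf(0.95, m, n)                          # ~6.59, the tabulated critical value
# The same tail probability from the scaled beta prime beta'(m/2, n/2, 1, n/m):
tail = stats.betaprime.sf(crit, m / 2, n / 2, scale=n / m)
print(round(crit, 2), round(tail, 3))                   # 6.59  0.05
```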
For gamma distributions "U" and "V" with arbitrary shape parameters "α"1 and "α"2 and their scale parameters both set to unity, that is, formula_171, where formula_172, then
formula_173
formula_174
formula_175
If formula_176, then formula_177. Note that here "θ" is a scale parameter, rather than a rate parameter.
If formula_178, then by rescaling the formula_66 parameter to unity we have
formula_179
formula_180
Thus
formula_181
in which formula_182 represents the "generalised" beta prime distribution.
In the foregoing it is apparent that if formula_183 then formula_184. More explicitly, since
formula_185
if formula_186
then
formula_187
where
formula_188
Rayleigh Distributions.
If "X", "Y" are independent samples from the Rayleigh distribution formula_189, the ratio "Z" = "X"/"Y" follows the distribution
formula_190
and has cdf
formula_191
The Rayleigh distribution has scaling as its only parameter. The distribution of formula_192 follows
formula_193
and has cdf
formula_194
Fractional gamma distributions (including chi, chi-squared, exponential, Rayleigh and Weibull).
The generalized gamma distribution is
formula_195
which includes the regular gamma, chi, chi-squared, exponential, Rayleigh, Nakagami and Weibull distributions involving fractional powers. Note that here "a" is a scale parameter, rather than a rate parameter; "d" is a shape parameter.
If formula_196
then formula_197
where formula_198
Modelling a mixture of different scaling factors.
In the ratios above, Gamma samples, U, V may have differing sample sizes formula_199 but must be drawn from the same distribution formula_200 with equal scaling formula_201.
In situations where U and V are differently scaled, a variables transformation allows the modified random ratio pdf to be determined. Let formula_202 where formula_203 arbitrary and, from above, formula_204.
Rescale V arbitrarily, defining formula_205
We have formula_206 and substitution into Y gives formula_207
Transforming X to Y gives formula_208
Noting formula_209 we finally have
formula_210
Thus, if formula_211 and formula_212
<br>then formula_213 is distributed as formula_214 with formula_215
The distribution of Y is limited here to the interval [0,1]. It can be generalized by scaling such that if formula_216 then
formula_217
where formula_218
formula_219 is then a sample from formula_220
Reciprocals of samples from beta distributions.
Though not ratio distributions of two variables, the following identities for one variable are useful:
If formula_221 then formula_222
If formula_223 then formula_224
combining the latter two equations yields
If formula_221 then formula_225.
If formula_223 then formula_226
Corollary
formula_227
formula_228, the distribution of the reciprocals of formula_229 samples.
If formula_230 then formula_231 and
formula_232
Further results can be found in the Inverse distribution article.
Binomial distribution.
This result was derived by Katz et al.
Suppose formula_234 and
formula_235 and formula_46, formula_44 are independent. Let formula_236.
Then formula_237 is approximately normally distributed with mean formula_238 and variance formula_239.
The binomial ratio distribution is of significance in clinical trials: if the distribution of "T" is known as above, the probability of a given ratio arising purely by chance can be estimated, i.e. a false positive trial. A number of papers compare the robustness of different approximations for the binomial ratio.
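The quality of the normal approximation for the log ratio is easy to probe by simulation; the sketch below (assuming NumPy, with illustrative values of "n", "m", "p"1 and "p"2) compares the sample mean and variance of formula_237 with the approximate mean and variance quoted above.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, p1, p2 = 200, 150, 0.30, 0.20             # illustrative trial sizes and probabilities
reps = 200_000
x = rng.binomial(n, p1, reps)
y = rng.binomial(m, p2, reps)
ok = (x > 0) & (y > 0)                          # log T is undefined when a count is zero
logT = np.log((x[ok] / n) / (y[ok] / m))

print(logT.mean(), np.log(p1 / p2))                         # sample mean vs. log(p1/p2)
print(logT.var(), (1 / p1 - 1) / n + (1 / p2 - 1) / m)      # sample variance vs. approximation
```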
Poisson and truncated Poisson distributions.
In the ratio of Poisson variables "R" = "X"/"Y" there is a problem that "Y" is zero with finite probability, so "R" is undefined. To counter this, consider the truncated, or censored, ratio "R"' = "X"/"Y"' where zero samples of "Y" are discounted. Moreover, in many medical-type surveys, there are systematic problems with the reliability of the zero samples of both "X" and "Y" and it may be good practice to ignore the zero samples anyway.
The probability of a null Poisson sample is formula_240, so the generic pdf of a left-truncated Poisson distribution is
formula_241
which sums to unity. Following Cohen, for "n" independent trials, the multidimensional truncated pdf is
formula_242
and the log likelihood becomes
formula_243
On differentiation we get
formula_244
and setting to zero gives the maximum likelihood estimate formula_245
formula_246
Note that as formula_247, formula_248, so the truncated maximum likelihood estimate of formula_249, though correct for both truncated and untruncated distributions, gives a truncated mean value formula_250 which is highly biased relative to the untruncated one. Nevertheless it appears that formula_250 is a sufficient statistic for formula_249 since formula_245 depends on the data only through the sample mean formula_251 in the previous equation, which is consistent with the methodology of the conventional Poisson distribution.
Absent any closed form solutions, the following approximate reversion for truncated formula_249 is valid over the whole range formula_252.
formula_253
which compares with the non-truncated version which is simply formula_254. Taking the ratio formula_255 is a valid operation even though formula_256 may use a non-truncated model while formula_257 has a left-truncated one.
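The implicit maximum-likelihood equation is also easy to solve numerically, which allows the quoted reversion to be checked; the sketch below (assuming NumPy and SciPy) solves formula_246 for formula_249 by root bracketing and prints both estimates for a few illustrative values of formula_250.

```python
import numpy as np
from scipy.optimize import brentq

def lambda_mle(xbar):
    """Numerical solution of lambda / (1 - exp(-lambda)) = xbar (truncated-Poisson MLE)."""
    if xbar <= 1.0:
        return 0.0
    return brentq(lambda lam: lam / (1.0 - np.exp(-lam)) - xbar, 1e-9, 100.0)

def lambda_approx(xbar):
    """Approximate reversion quoted in the text above."""
    d = xbar - 1.0
    return xbar - np.exp(-d) - 0.07 * d * np.exp(-0.666 * d)

for xbar in (1.2, 1.5, 2.0, 5.0, 10.0):
    print(xbar, round(lambda_mle(xbar), 4), round(lambda_approx(xbar), 4))
```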
The asymptotic large-formula_258 (and Cramér–Rao bound) is
formula_259
in which substituting "L" gives
formula_260
Then substituting formula_250 from the equation above, we get Cohen's variance estimate
formula_261
The variance of the point estimate of the mean formula_249, on the basis of "n" trials, decreases asymptotically to zero as "n" increases to infinity. For small formula_249 it diverges from the truncated pdf variance in Springael for example, who quotes a variance of
formula_262
for "n" samples in the left-truncated pdf shown at the top of this section. Cohen showed that the variance of the estimate relative to the variance of the pdf, formula_263, ranges from 1 for large formula_264 (100% efficient) up to 2 as formula_264 approaches zero (50% efficient).
These mean and variance parameter estimates, together with parallel estimates for "X", can be applied to Normal or Binomial approximations for the Poisson ratio. Samples from trials may not be a good fit for the Poisson process; a further discussion of Poisson truncation is by Dietz and Bohning and there is a Zero-truncated Poisson distribution Wikipedia entry.
Double Lomax distribution.
This distribution is the ratio of two Laplace distributions. Let "X" and "Y" be standard Laplace identically distributed random variables and let "z" = "X" / "Y". Then the probability distribution of "z" is
formula_265
Let the mean of the "X" and "Y" be "a". Then the standard double Lomax distribution is symmetric around "a".
This distribution has an infinite mean and variance.
If "Z" has a standard double Lomax distribution, then 1/"Z" also has a standard double Lomax distribution.
The standard Lomax distribution is unimodal and has heavier tails than the Laplace distribution.
For 0 < "a" < 1, the "a"-th moment exists.
formula_266
where Γ is the gamma function.
Ratio distributions in multivariate analysis.
Ratio distributions also appear in multivariate analysis. If the random matrices X and Y follow a Wishart distribution then the ratio of the determinants
formula_267
is proportional to the product of independent F random variables. In the case where X and Y are from independent standardized Wishart distributions then the ratio
formula_268
has a Wilks' lambda distribution.
Ratios of Quadratic Forms involving Wishart Matrices.
In relation to Wishart matrix distributions if formula_269 is a sample Wishart matrix and vector formula_270 is arbitrary, but statistically independent, corollary 3.2.9 of Muirhead states
formula_271
The discrepancy of one in the sample numbers arises from estimation of the sample mean when forming the sample covariance, a consequence of Cochran's theorem. Similarly
formula_272
which is Theorem 3.2.12 of Muirhead
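This result can be checked by simulation; the sketch below (assuming NumPy and SciPy, with an illustrative covariance matrix and vector) forms the sample scatter matrix from ν + 1 multivariate normal draws (so that one degree of freedom is lost to the sample mean, as noted above) and compares the quadratic-form ratio with a chi-squared distribution having ν degrees of freedom.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
p, nu, reps = 3, 10, 20_000
Sigma = np.array([[2.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.5]])               # illustrative covariance matrix
V = np.array([1.0, -2.0, 0.5])                    # arbitrary fixed vector, independent of the data

ratios = np.empty(reps)
for i in range(reps):
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=nu + 1)
    Xc = X - X.mean(axis=0)                       # centering costs one degree of freedom
    S = Xc.T @ Xc                                 # sample (scatter) Wishart matrix from nu+1 draws
    ratios[i] = (V @ S @ V) / (V @ Sigma @ V)

print(ratios.mean())                                       # close to nu = 10
print(stats.kstest(ratios, "chi2", args=(nu,)).pvalue)     # typically not small
```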
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C_1"
},
{
"math_id": 1,
"text": "C_2"
},
{
"math_id": 2,
"text": "C_1=G_1/G_2"
},
{
"math_id": 3,
"text": "C_2 = G_3/G_4"
},
{
"math_id": 4,
"text": "\\frac{C_1}{C_2} = \\frac{{G_1}/{G_2}}{{G_3}/{G_4}} = \\frac{G_1 G_4}{G_2 G_3} = \\frac{G_1}{G_2} \\times \\frac{G_4}{G_3} = C_1 \\times C_3,"
},
{
"math_id": 5,
"text": "C_3 = G_4/G_3"
},
{
"math_id": 6,
"text": " Z = X/Y "
},
{
"math_id": 7,
"text": " p_{X,Y}(x,y) "
},
{
"math_id": 8,
"text": "p_Z(z) = \\int^{+\\infty}_{-\\infty} |y|\\, p_{X,Y}(zy, y) \\, dy. "
},
{
"math_id": 9,
"text": "p_{XY}(x,y) = p_X(x) p_Y(y) "
},
{
"math_id": 10,
"text": "p_Z(z) = \\int^{+\\infty}_{-\\infty} |y|\\, p_X(zy) p_Y(y) \\, dy. "
},
{
"math_id": 11,
"text": " p_{X,Y}(x,y) = \\frac {1}{2 \\pi }\\exp\\left(-\\frac{x^2}{2} \\right) \\exp \\left(-\\frac{y^2}{2} \\right) "
},
{
"math_id": 12,
"text": " \\begin{align}\np_Z(z) &= \\frac {1}{2 \\pi }\\int_{-\\infty}^{\\infty} \\, |y| \\, \\exp\\left(-\\frac{\\left(zy\\right)^2}{2} \\right) \\, \\exp\\left(-\\frac{ y^2}{2} \\right) \\, dy \\\\\n&= \\frac {1}{2 \\pi } \\int_{-\\infty}^{\\infty} \\,|y| \\, \\exp\\left(-\\frac{y^2 \\left(z^2 + 1\\right)}{2} \\right) \\, dy\n\\end{align} "
},
{
"math_id": 13,
"text": " \\int_0^{\\infty} \\, x \\, \\exp\\left(-cx^2 \\right) \\, dx = \\frac {1}{2c} "
},
{
"math_id": 14,
"text": " p_Z(z) = \\frac {1}{ \\pi (z^2 + 1)} "
},
{
"math_id": 15,
"text": " f_{x,y}(x,y)=f_x(x)f_y(y) "
},
{
"math_id": 16,
"text": " x,y > 0 "
},
{
"math_id": 17,
"text": " R= X/Y"
},
{
"math_id": 18,
"text": " y = x/ R"
},
{
"math_id": 19,
"text": " f_{x,y}(x,y) "
},
{
"math_id": 20,
"text": " X/Y \\le R"
},
{
"math_id": 21,
"text": " f_y(y)dy \\int_0^{Ry} f_x(x) \\,dx "
},
{
"math_id": 22,
"text": " F_R(R) = \\int_0^\\infty f_y(y) \\left(\\int_0^{Ry} f_x(x)dx \\right) dy "
},
{
"math_id": 23,
"text": " F_R(R)"
},
{
"math_id": 24,
"text": "R "
},
{
"math_id": 25,
"text": " f_R(R) "
},
{
"math_id": 26,
"text": " f_R(R) = \\frac{d}{dR} \\left[ \\int_0^\\infty f_y(y) \\left(\\int_0^{Ry} f_x(x)dx \\right) dy \\right] "
},
{
"math_id": 27,
"text": " f_R(R) = \\int_0^\\infty f_y(y) \\left(\\frac{d}{dR} \\int_0^{Ry} f_x(x)dx \\right) dy "
},
{
"math_id": 28,
"text": " \\frac{d}{dR} \\int_0^{Ry} f_x(x)dx = yf_x(Ry)"
},
{
"math_id": 29,
"text": " f_R(R) = \\int_0^\\infty f_y(y) \\; f_x(Ry) \\; y \\; dy "
},
{
"math_id": 30,
"text": " f_x(x) = \\alpha e^{-\\alpha x}, \\;\\;\\;\\; f_y(y) = \\beta e^{-\\beta y}, \\;\\;\\; x,y \\ge 0 "
},
{
"math_id": 31,
"text": " \\int_0^{Ry} f_x(x)dx = - e^{-\\alpha x} \\vert_0^{Ry} = 1- e^{-\\alpha Ry}"
},
{
"math_id": 32,
"text": " \\begin{align} F_R(R) &= \\int_0^\\infty f_y(y) \\left( 1- e^{-\\alpha Ry} \\right) dy \n =\\int_0^\\infty \\beta e^{-\\beta y} \\left( 1- e^{-\\alpha Ry} \\right) dy \\\\\n& = 1 - \\frac{\\alpha R}{\\beta + \\alpha R} \\\\\n& = \\frac{ R}{\\tfrac{\\beta}{\\alpha} + R} \n\\end{align}\n"
},
{
"math_id": 33,
"text": " f_R(R) =\\frac{d}{dR} \\left( \\frac{ R}{\\tfrac{\\beta}{\\alpha} + R} \\right) = \\frac{\\tfrac{\\beta}{\\alpha}} {\\left( \\tfrac{\\beta}{\\alpha} + R \\right)^2 } "
},
{
"math_id": 34,
"text": "x \\ge 0 "
},
{
"math_id": 35,
"text": " \\operatorname{E}[(UV)^p ] = \\operatorname{E}[U^p ] \\;\\; \\operatorname{E}[V^p ] "
},
{
"math_id": 36,
"text": " U, \\; V "
},
{
"math_id": 37,
"text": " \\operatorname{E}[(X/Y)^p] "
},
{
"math_id": 38,
"text": " 1/Y = Z "
},
{
"math_id": 39,
"text": " \\operatorname{E}[(XZ)^p ] = \\operatorname{E}[X^p ] \\; \\operatorname{E}[Y^{-p} ]"
},
{
"math_id": 40,
"text": " X^p "
},
{
"math_id": 41,
"text": " Y^{-p}"
},
{
"math_id": 42,
"text": " X/Y "
},
{
"math_id": 43,
"text": " Y^{-p} "
},
{
"math_id": 44,
"text": "Y"
},
{
"math_id": 45,
"text": "\\operatorname{E}[ Y^{-p} ] =\\int_0^\\infty y^{-p} f_y(y)\\,dy "
},
{
"math_id": 46,
"text": "X"
},
{
"math_id": 47,
"text": "x^{\\alpha - 1}e^{-x}/\\Gamma(\\alpha) "
},
{
"math_id": 48,
"text": " p"
},
{
"math_id": 49,
"text": " \\Gamma(\\alpha + p) / \\Gamma(\\alpha)"
},
{
"math_id": 50,
"text": "Z = Y^{-1} "
},
{
"math_id": 51,
"text": "\\beta"
},
{
"math_id": 52,
"text": " \\; \\Gamma^{-1}(\\beta) z^{1+\\beta} e^{-1/z}"
},
{
"math_id": 53,
"text": " \\operatorname{E}[Z^p]= \\operatorname{E}[Y^{-p}] = \\frac { \\Gamma(\\beta - p)}{ \\Gamma(\\beta) }, \\; p<\\beta. "
},
{
"math_id": 54,
"text": " \\operatorname{E}[(X/Y)^p]=\\operatorname{E}[X^p] \\; \\operatorname{E}[Y^{-p}] = \\frac { \\Gamma(\\alpha + p)}{ \\Gamma(\\alpha) } \\frac { \\Gamma(\\beta - p)}{ \\Gamma(\\beta) }, \\; p<\\beta. "
},
{
"math_id": 55,
"text": "R = X/Y "
},
{
"math_id": 56,
"text": " f_{\\beta'}(r, \\alpha, \\beta) = B(\\alpha, \\beta)^{-1} r^{\\alpha-1} (1+r)^{-(\\alpha + \\beta)} "
},
{
"math_id": 57,
"text": " \\operatorname{E}[R^p]= \\frac { \\Beta(\\alpha + p,\\beta-p)}{ \\Beta(\\alpha, \\beta) } "
},
{
"math_id": 58,
"text": " \\Beta(\\alpha, \\beta) =\\frac { \\Gamma(\\alpha)\\Gamma(\\beta)}{ \\Gamma(\\alpha +\\beta) } "
},
{
"math_id": 59,
"text": " \\operatorname{E}[R^p] = \\frac { \\Gamma(\\alpha + p)\\Gamma(\\beta - p)} { \\Gamma(\\alpha +\\beta) } \\Bigg/ \n\\frac { \\Gamma(\\alpha)\\Gamma(\\beta)} { \\Gamma(\\alpha +\\beta) } =\n\\frac { \\Gamma(\\alpha +p)\\Gamma(\\beta - p)} { \\Gamma(\\alpha) \\Gamma(\\beta) } "
},
{
"math_id": 60,
"text": " \\operatorname{E}(X/Y) = \\operatorname{E}(X)\\operatorname{E}(1/Y) "
},
{
"math_id": 61,
"text": " \\operatorname{E}(X/Y) = \\int_{-\\infty}^\\infty x f_x(x) \\, dx \\times \\int_{-\\infty}^\\infty y^{-1} f_y(y) \\, dy"
},
{
"math_id": 62,
"text": " \\operatorname{E}(1/Y) \\neq \\frac{1}{\\operatorname{E}(Y)} "
},
{
"math_id": 63,
"text": "\n\\int_{-\\infty}^\\infty y^{-1} f_y(y) \\, dy \\ne \\frac{1}{\\int_{-\\infty}^\\infty y f_y(y) \\, dy}\n"
},
{
"math_id": 64,
"text": " \\begin{align} \\operatorname{Var}(X/Y) & = \\operatorname{E}( [X/Y]^2) - \\operatorname{E^2}(X/Y)\n\\\\ & = \\operatorname{E}(X^2) \\operatorname{E}(1/Y^2) - \n\\operatorname{E}^2(X) \\operatorname{E}^2(1/Y)\n\\end{align}"
},
{
"math_id": 65,
"text": " Z = X/Y = \\tan \\theta "
},
{
"math_id": 66,
"text": " \\theta "
},
{
"math_id": 67,
"text": " \\begin{align} p(x,y) &= \\tfrac{1}{\\sqrt {2 \\pi} } e^{-\\frac{1}{2} x^2 } \\times \n \\tfrac{1}{\\sqrt {2\\pi}} e^{-\\frac{1}{2} y^2 }\n\\\\ &= \\tfrac{1}{ 2\\pi} e^{-\\frac{1}{2} (x^2 + y^2 )} \n\\\\ & = \\tfrac{1}{ 2\\pi} e^{-\\frac{1}{2} r^2 } \\text{ with } r^2 = x^2 + y^2 \n\\end{align} "
},
{
"math_id": 68,
"text": " p(x,y) "
},
{
"math_id": 69,
"text": " [0, 2\\pi ] "
},
{
"math_id": 70,
"text": "1/2\\pi"
},
{
"math_id": 71,
"text": " p_z(z) |dz| = p_{\\theta}(\\theta)|d\\theta| "
},
{
"math_id": 72,
"text": " dz/d\\theta = 1/ \\cos^2 \\theta "
},
{
"math_id": 73,
"text": " p_z(z) = \\frac{p_{\\theta}(\\theta)}{ |dz/d\\theta| } = \\tfrac{1}{2\\pi}{\\cos^2 \\theta } "
},
{
"math_id": 74,
"text": " \\cos^2 \\theta = \\frac{1}{1+ (\\tan \\theta)^2}= \\frac{1}{1+z^2} "
},
{
"math_id": 75,
"text": " p_z(z) = \\frac{1/2\\pi}{1+z^2 } "
},
{
"math_id": 76,
"text": "\\pi"
},
{
"math_id": 77,
"text": " p_z(z) = \\frac{1/\\pi}{1+z^2 } , \\;\\; -\\infty < z < \\infty "
},
{
"math_id": 78,
"text": " r = \\sqrt{ x^2 + y^2 }"
},
{
"math_id": 79,
"text": "(\\operatorname{cor}(X,Y)=0)"
},
{
"math_id": 80,
"text": " p_Z(z)= \\frac{b(z) \\cdot d(z)}{a^3(z)} \\frac{1}{\\sqrt{2 \\pi} \\sigma_x \\sigma_y} \\left[\\Phi \\left( \\frac{b(z)}{a(z)}\\right) - \\Phi \\left(-\\frac{b(z)}{a(z)}\\right) \\right] + \\frac{1}{a^2(z) \\cdot \\pi \\sigma_x \\sigma_y } e^{- \\frac{c}{2}} "
},
{
"math_id": 81,
"text": " a(z)= \\sqrt{\\frac{1}{\\sigma_x^2} z^2 + \\frac{1}{\\sigma_y^2}} "
},
{
"math_id": 82,
"text": " b(z)= \\frac{\\mu_x }{\\sigma_x^2} z + \\frac{\\mu_y}{\\sigma_y^2} "
},
{
"math_id": 83,
"text": " c = \\frac{\\mu_x^2}{\\sigma_x^2} + \\frac{\\mu_y^2}{\\sigma_y^2} "
},
{
"math_id": 84,
"text": " d(z) = e^{\\frac{b^2(z) - ca^2(z)}{2a^2(z)}} "
},
{
"math_id": 85,
"text": "\\Phi"
},
{
"math_id": 86,
"text": " \\Phi(t)= \\int_{-\\infty}^{t}\\, \\frac{1}{\\sqrt{2 \\pi}} e^{- \\frac{1}{2} u^2}\\ du \\, . "
},
{
"math_id": 87,
"text": " p=\\frac{\\mu_x}{\\sqrt{2}\\sigma_x} "
},
{
"math_id": 88,
"text": " q=\\frac{\\mu_y}{\\sqrt{2}\\sigma_y} "
},
{
"math_id": 89,
"text": " r=\\frac{\\mu_x}{\\mu_y} "
},
{
"math_id": 90,
"text": " p_Z^\\dagger(z) "
},
{
"math_id": 91,
"text": " p_Z^\\dagger(z)=\\frac{1}{\\sqrt{\\pi}} \\frac{p}{\\mathrm{erf}[q]} \\frac{1}{r} \\frac{1+\\frac{p^2}{q^2}\\frac{z}{r}}{\\left(1+\\frac{p^2}{q^2}\\left[\\frac{z}{r}\\right]^2\\right)^\\frac{3}{2}} e^{-\\frac{p^2\\left(\\frac{z}{r}-1 \\right)^2}{1+\\frac{p^2}{q^2}\\left[\\frac{z}{r}\\right]^2}} "
},
{
"math_id": 92,
"text": "\\sigma_z^2=\\frac{\\mu_x^2}{\\mu_y^2} \\left(\\frac{\\sigma_x^2}{\\mu_x^2} + \\frac{\\sigma_y^2}{\\mu_y^2}\\right)"
},
{
"math_id": 93,
"text": " \\mu_x = \\mu_y = 0 "
},
{
"math_id": 94,
"text": "\\sigma_X \\neq \\sigma_Y"
},
{
"math_id": 95,
"text": "\\rho \\neq 0"
},
{
"math_id": 96,
"text": "p_Z(z) = \\frac{1}{\\pi} \\frac{\\beta}{(z-\\alpha)^2 + \\beta^2},"
},
{
"math_id": 97,
"text": "\\alpha = \\rho \\frac{\\sigma_x}{\\sigma_y},"
},
{
"math_id": 98,
"text": "\\beta = \\frac{\\sigma_x}{\\sigma_y} \\sqrt{1-\\rho^2}."
},
{
"math_id": 99,
"text": " T \\sim \\frac{\\mu_x + \\mathbb{N}(0, \\sigma_x^2 )}{\\mu_y + \\mathbb{N}(0, \\sigma_y^2 )}\n= \\frac{\\mu_x + X}{\\mu_y + Y} \n= \\frac{\\mu_x}{\\mu_y}\\frac{1+ \\frac{X}{\\mu_x}}{1+ \\frac{Y}{\\mu_y}} "
},
{
"math_id": 100,
"text": " \\log_e(T) = \\log_e \\left(\\frac{\\mu_x}{\\mu_y} \\right)\n + \\log_e \\left( 1+ \\frac{X}{\\mu_x} \\right)\n - \\log_e \\left( 1+ \\frac{Y}{\\mu_y} \\right) \n. "
},
{
"math_id": 101,
"text": " \\log_e(1+\\delta) = \\delta - \\frac{\\delta^2}{2} + \\frac{\\delta^3}{3} + \\cdots "
},
{
"math_id": 102,
"text": " \\log_e(T) \\approx \\log_e \\left(\\frac{\\mu_x}{\\mu_y} \\right)+ \\frac{X}{\\mu_x} - \n \\frac{Y}{\\mu_y}\n \\sim \\log_e \\left(\\frac{\\mu_x}{\\mu_y} \\right) + \\mathbb{N} \\left( 0, \\frac{\\sigma_x^2}{\\mu_x^2} + \\frac{\\sigma_y^2}{\\mu_y^2} \\right) \n. "
},
{
"math_id": 103,
"text": "t \\approx \\frac{\\mu_y T - \\mu_x}{\\sqrt{\\sigma_y^2 T^2 - 2\\rho \\sigma_x \\sigma_y T + \\sigma_x^2}}"
},
{
"math_id": 104,
"text": " \\mu_y > 3\\sigma_y "
},
{
"math_id": 105,
"text": "z"
},
{
"math_id": 106,
"text": "t"
},
{
"math_id": 107,
"text": "x+\\mu_x<0"
},
{
"math_id": 108,
"text": "z=\\frac {x+\\mu_x}{y+\\mu_y}"
},
{
"math_id": 109,
"text": "x, y "
},
{
"math_id": 110,
"text": "\\sigma_x^2, \\sigma_y^2"
},
{
"math_id": 111,
"text": "X, Y "
},
{
"math_id": 112,
"text": "\\mu_x, \\mu_y."
},
{
"math_id": 113,
"text": "x'=x-\\rho y\\sigma_x /\\sigma_y"
},
{
"math_id": 114,
"text": "x', y "
},
{
"math_id": 115,
"text": "x'"
},
{
"math_id": 116,
"text": " \\sigma_x' = \\sigma_x \\sqrt {1- \\rho^2}. "
},
{
"math_id": 117,
"text": "z=\\frac{x' + \\rho y\\sigma_x/\\sigma_y+\\mu_x}{y+\\mu_y}"
},
{
"math_id": 118,
"text": "y"
},
{
"math_id": 119,
"text": "{x' + \\rho y\\sigma_x/\\sigma_y+\\mu_x} =x'+\\mu_x -\\rho \\mu_y \\frac{\\sigma_x}{\\sigma_y} + \\rho (y+\\mu_y)\\frac{\\sigma_x}{\\sigma_y}"
},
{
"math_id": 120,
"text": "z=\\frac {x'+\\mu_x'}{y+\\mu_y} + \\rho \\frac{ \\sigma_x}{\\sigma_y}"
},
{
"math_id": 121,
"text": "\\mu'_x=\\mu_x - \\rho \\mu_y \\frac { \\sigma_x }{\\sigma_y} "
},
{
"math_id": 122,
"text": " \\sigma_x', \\mu_x', \\sigma_y, \\mu_y "
},
{
"math_id": 123,
"text": " \\rho'=0 "
},
{
"math_id": 124,
"text": " - \\rho \\frac{\\sigma_x}{\\sigma_y} "
},
{
"math_id": 125,
"text": "\\sigma_x= \\sigma_y=1, \\mu_x=0, \\mu_y=0.5, \\rho = 0.975"
},
{
"math_id": 126,
"text": " x/y \\in [r, r + \\delta] "
},
{
"math_id": 127,
"text": " z = x/y \\approx 1"
},
{
"math_id": 128,
"text": " p_Z(x/y) "
},
{
"math_id": 129,
"text": "x/y"
},
{
"math_id": 130,
"text": " f_{x,y}(x,y) = \\frac{1}{\\pi^2 |\\Sigma|} \\exp \\left ( - \\begin{bmatrix}x \\\\ y \\end{bmatrix}^H \\Sigma ^{-1}\\begin{bmatrix}x \\\\ y \\end{bmatrix} \\right ) "
},
{
"math_id": 131,
"text": " \\Sigma = \\begin{bmatrix}\n \\sigma_x^2 & \\rho \\sigma_x \\sigma_y \\\\\n \\rho^* \\sigma_x \\sigma_y & \\sigma_y^2 \\end{bmatrix}, \\;\\; x=x_r+ix_i, \\;\\; y=y_r+iy_i "
},
{
"math_id": 132,
"text": " (\\cdot)^H "
},
{
"math_id": 133,
"text": " \\rho = \\rho_r +i \\rho_i \n= \\operatorname{E} \\bigg(\\frac{xy^*}{\\sigma_x \\sigma_y} \\bigg )\\;\\; \\in \\;\\left |\\mathbb{C} \\right| \\le 1"
},
{
"math_id": 134,
"text": " \\begin{align} f_{z}(z_r,z_i) & = \\frac{1-|\\rho|^2}{\\pi \\sigma_x^2 \\sigma_y^2 }\n \\Biggr ( \\frac{|z|^2}{\\sigma_x^2} + \\frac{1}{\\sigma_y^2} -2\\frac{\\rho_r z_r - \\rho_i z_i}{\\sigma_x \\sigma_y} \\Biggr)^{-2} \\\\\n& = \\frac{1-|\\rho|^2}{\\pi \\sigma_x^2 \\sigma_y^2 }\n\\Biggr ( \\;\\; \\Biggr | \\frac{z}{\\sigma_x} - \\frac{\\rho^* }{\\sigma_y} \\Biggr |^2 +\\frac{1-|\\rho|^2}{\\sigma_y^2} \\Biggr)^{-2} \n\\end{align} "
},
{
"math_id": 135,
"text": " \\sigma_x = \\sigma_y "
},
{
"math_id": 136,
"text": " f_{z}(z_r,z_i) = \\frac{1-|\\rho|^2}{\\pi \\left ( \\;\\; | z - \\rho^* |^2 + 1-|\\rho|^2 \\right)^{2} }"
},
{
"math_id": 137,
"text": " \\rho = 0.7 \\exp (i \\pi /4) "
},
{
"math_id": 138,
"text": " \\rho "
},
{
"math_id": 139,
"text": "X_1"
},
{
"math_id": 140,
"text": "X_2"
},
{
"math_id": 141,
"text": "\\ln(X_1)"
},
{
"math_id": 142,
"text": "\\ln(X_2)"
},
{
"math_id": 143,
"text": "X_i"
},
{
"math_id": 144,
"text": "p_X(x) = \\begin{cases}\n1 & 0 < x < 1 \\\\\n0 & \\text{otherwise}\n\\end{cases}"
},
{
"math_id": 145,
"text": "p_Z(z) = \\begin{cases}\n1/2 \\qquad & 0 < z < 1 \\\\ \n\\frac{1}{2z^2} \\qquad & z \\geq 1 \\\\\n0 \\qquad & \\text{otherwise}\n\\end{cases}"
},
{
"math_id": 146,
"text": "a"
},
{
"math_id": 147,
"text": "p_X(x|a) = \\frac{a}{\\pi (a^2 + x^2)}"
},
{
"math_id": 148,
"text": "Z = X/Y"
},
{
"math_id": 149,
"text": "p_Z(z|a) = \\frac{1}{\\pi^2(z^2-1)} \\ln(z^2)."
},
{
"math_id": 150,
"text": "W=XY"
},
{
"math_id": 151,
"text": "p_W(w|a) = \\frac{a^2}{\\pi^2(w^2-a^4)} \\ln \\left(\\frac{w^2}{a^4}\\right)."
},
{
"math_id": 152,
"text": "b"
},
{
"math_id": 153,
"text": "p_Z(z|a,b) = \\frac{ab}{\\pi^2(b^2z^2-a^2)} \\ln \\left(\\frac{b^2 z^2}{a^2}\\right)."
},
{
"math_id": 154,
"text": "W = XY"
},
{
"math_id": 155,
"text": "p_W(w|a,b) = \\frac{ab}{\\pi^2(w^2-a^2b^2)} \\ln \\left(\\frac{w^2}{a^2b^2}\\right)."
},
{
"math_id": 156,
"text": "\\frac{1}{b}."
},
{
"math_id": 157,
"text": "p_Z(z) = \\begin{cases}\n\\left[ \\varphi(0) - \\varphi(z) \\right] / z^2 \\quad & z \\ne 0 \\\\\n\\varphi(0) / 2 \\quad & z = 0, \\\\\n\\end{cases}"
},
{
"math_id": 158,
"text": " f_\\chi (x,k)\n = \\frac {x^ {\\frac{k}{2}-1} e^{-x/2} } { 2^{k/2} \\Gamma(k/2) }"
},
{
"math_id": 159,
"text": " \\frac{ G }{ \\sqrt{ Y / m } } \\sim t_m "
},
{
"math_id": 160,
"text": " \\frac{ Y / m }{ Z / n } = F_{ m, n } "
},
{
"math_id": 161,
"text": " \\frac{ Y }{ Y + Z } \\sim \\beta( \\tfrac{m}{2},\\tfrac{n}{2} )"
},
{
"math_id": 162,
"text": " \\;\\;\\frac{ Y }{ Z } \\sim \\beta'( \\tfrac{m}{2},\\tfrac{n}{2} )"
},
{
"math_id": 163,
"text": " V_1 \\sim {\\chi'}_{k_1}^2(\\lambda)"
},
{
"math_id": 164,
"text": "V_2 \\sim {\\chi'}_{k_2}^2(0)"
},
{
"math_id": 165,
"text": "V_1"
},
{
"math_id": 166,
"text": "V_2"
},
{
"math_id": 167,
"text": "\\frac{V_1/k_1}{V_2/k_2} \\sim F'_{k_1,k_2}(\\lambda)"
},
{
"math_id": 168,
"text": " \\frac{m}{n} F'_{m,n} = \\beta'( \\tfrac{m}{2},\\tfrac{n}{2}) \\text{ or } \n F'_{m,n} = \\beta'( \\tfrac{m}{2},\\tfrac{n}{2} ,1,\\tfrac{n }{m}) "
},
{
"math_id": 169,
"text": " F'_{m,n} "
},
{
"math_id": 170,
"text": " F_{3,4}(6.59) = \\int_{6.59}^\\infty \\beta'(x; \\tfrac{m}{2},\\tfrac{n}{2},1,\\tfrac{n}{m} ) dx = 0.05 "
},
{
"math_id": 171,
"text": " U \\sim \\Gamma ( \\alpha_1 , 1), V \\sim \\Gamma(\\alpha_2, 1) "
},
{
"math_id": 172,
"text": "\n \\Gamma (x;\\alpha,1) = \\frac { x^{\\alpha-1} e^{-x}}{\\Gamma(\\alpha)} "
},
{
"math_id": 173,
"text": " \\frac{ U }{ U + V } \\sim \\beta( \\alpha_1, \\alpha_2 ), \\qquad \\text{ expectation } = \\frac{\\alpha_1}{\\alpha_1 + \\alpha_2 }"
},
{
"math_id": 174,
"text": "\\frac{U}{V} \\sim \\beta'(\\alpha_1,\\alpha_2), \\qquad \\qquad \\text{ expectation } = \\frac{\\alpha_1}{ \\alpha_2 -1}, \\; \\alpha_2 > 1"
},
{
"math_id": 175,
"text": "\\frac{V}{U} \\sim \\beta'(\\alpha_2, \\alpha_1), \\qquad \\qquad \\text{ expectation } = \\frac{\\alpha_2}{ \\alpha_1 -1}, \\; \\alpha_1 > 1"
},
{
"math_id": 176,
"text": " U \\sim \\Gamma (x;\\alpha,1) "
},
{
"math_id": 177,
"text": "\\theta U \\sim \\Gamma (x;\\alpha,\\theta) = \\frac { x^{\\alpha-1} e^{- \\frac{x}{\\theta}}}{ \\theta^k \\Gamma(\\alpha)} "
},
{
"math_id": 178,
"text": "U \\sim \\Gamma(\\alpha_1, \\theta_1 ),\\; V \\sim \\Gamma(\\alpha_2, \\theta_2 ) "
},
{
"math_id": 179,
"text": " \\frac {\\frac {U}{\\theta_1}} { \\frac {U}{\\theta_1} + \\frac {V}{\\theta_2}}\n= \\frac{ \\theta_2 U }{ \\theta_2 U + \\theta_1 V } \\sim \\beta( \\alpha_1, \\alpha_2 )"
},
{
"math_id": 180,
"text": " \\frac {\\frac {U}{\\theta_1}} { \\frac {V}{\\theta_2}}\n= \\frac{ \\theta_2 }{ \\theta_1 } \\frac{U }{ V }\\sim \\beta'( \\alpha_1, \\alpha_2 )"
},
{
"math_id": 181,
"text": " \\frac {U}{V} \\sim \\beta'( \\alpha_1, \\alpha_2, 1, \\frac{\\theta_1 }{ \\theta_2 } ) \\quad \\text{ and } \\operatorname{E} \\left[ \\frac {U}{V} \\right] = \\frac{\\theta_1 }{ \\theta_2 } \\frac{\\alpha_1}{\\alpha_2 - 1 }"
},
{
"math_id": 182,
"text": "\\beta'(\\alpha,\\beta,p,q)"
},
{
"math_id": 183,
"text": " X \\sim \\beta'( \\alpha_1, \\alpha_2, 1, 1 ) \\equiv \\beta'( \\alpha_1, \\alpha_2 ) "
},
{
"math_id": 184,
"text": " \\theta X \\sim \\beta'( \\alpha_1, \\alpha_2, 1, \\theta ) "
},
{
"math_id": 185,
"text": " \\beta'(x; \\alpha_1, \\alpha_2, 1, R ) = \\frac{1}{R} \\beta' (\\frac{x}{R} ; \\alpha_1, \\alpha_2) "
},
{
"math_id": 186,
"text": " U \\sim \\Gamma(\\alpha_1, \\theta_1 ), V \\sim \\Gamma(\\alpha_2, \\theta_2 ) "
},
{
"math_id": 187,
"text": " \\frac {U}{V} \\sim \\frac{1}{R} \\beta' ( \\frac{x}{R} ; \\alpha_1, \\alpha_2 ) \n= \\frac { \\left(\\frac{x}{R} \\right)^{\\alpha_1-1} } {\\left(1+\\frac{x}{R} \\right)^{\\alpha_1+\\alpha_2}} \\cdot\n \\frac {1} { \\;R\\;B( \\alpha_1, \\alpha_2 )}, \\;\\; x \\ge 0 "
},
{
"math_id": 188,
"text": " R = \\frac {\\theta_1}{\\theta_2}, \\; \\;\\;\nB( \\alpha_1, \\alpha_2 ) = \\frac {\\Gamma(\\alpha_1) \\Gamma(\\alpha_2)} {\\Gamma(\\alpha_1 + \\alpha_2)} "
},
{
"math_id": 189,
"text": " f_r(r) = (r/\\sigma^2) e^ {-r^2/2\\sigma^2}, \\;\\; r \\ge 0 "
},
{
"math_id": 190,
"text": " f_z(z) = \\frac{2 z}{ (1 + z^2 )^2 }, \\;\\; z \\ge 0 "
},
{
"math_id": 191,
"text": " F_z(z) = 1 - \\frac{1}{ 1 + z^2 } = \\frac{z^2}{ 1 + z^2 }, \\;\\;\\; z \\ge 0 "
},
{
"math_id": 192,
"text": " Z = \\alpha X/Y "
},
{
"math_id": 193,
"text": " f_z(z,\\alpha) = \\frac{2 \\alpha z}{ (\\alpha + z^2 )^2 }, \\;\\; z > 0 "
},
{
"math_id": 194,
"text": " F_z(z, \\alpha) = \\frac{ z^2 }{ \\alpha + z^2 }, \\;\\;\\; z \\ge 0 "
},
{
"math_id": 195,
"text": " f(x;a,d,r)=\\frac{r}{\\Gamma(d/r) a^d } x^{d-1} e^{-(x/a)^r} \\; x \\ge 0; \\;\\; a, \\; d, \\;r > 0"
},
{
"math_id": 196,
"text": " U \\sim f(x;a_1,d_1,r), \\; \\; V \\sim f(x;a_2,d_2,r) \\text{ are independent, and } W = U/V "
},
{
"math_id": 197,
"text": " g(w) = \\frac{r \\left ( \\frac {a_1}{a_2} \\right )^{d_2} }{B \\left ( \\frac{d_1}{r}, \\frac{d_2}{r} \\right ) } \\frac{w^{-d_2 -1}}{ \\left( 1 + \\left( \\frac{a_2}{a_1} \\right)^{-r} w^{-r} \\right) ^ \\frac{d_1+d_2}{r} } , \\; \\; w>0 "
},
{
"math_id": 198,
"text": " B(u,v) = \\frac{\\Gamma(u) \\Gamma(v)}{\\Gamma(u+v)} "
},
{
"math_id": 199,
"text": " \\alpha_1, \\alpha_2 "
},
{
"math_id": 200,
"text": " \\frac { x^{\\alpha-1} e^{- \\frac{x}{\\theta}}}{ \\theta^k \\Gamma(\\alpha)} "
},
{
"math_id": 201,
"text": "\\theta"
},
{
"math_id": 202,
"text": " X = \\frac {U} { U + V} = \\frac {1} { 1 + B} "
},
{
"math_id": 203,
"text": " U \\sim \\Gamma(\\alpha_1,\\theta), V \\sim \\Gamma(\\alpha_2,\\theta), \\theta "
},
{
"math_id": 204,
"text": " X \\sim Beta(\\alpha_1, \\alpha_2), B = V/U \\sim Beta'(\\alpha_2, \\alpha_1) "
},
{
"math_id": 205,
"text": " Y \\sim \\frac {U} { U + \\varphi V} = \\frac {1} { 1 + \\varphi B}, \\;\\; 0 \\le \\varphi \\le \\infty "
},
{
"math_id": 206,
"text": " B = \\frac{1-X}{X} "
},
{
"math_id": 207,
"text": " Y = \\frac {X}{\\varphi + (1-\\varphi)X}, dY/dX = \\frac {\\varphi}{(\\varphi + (1-\\varphi)X)^2} "
},
{
"math_id": 208,
"text": " f_Y(Y) = \\frac{f_X (X) } {| dY/dX|} = \\frac {\\beta(X,\\alpha_1,\\alpha_2)}{ \\varphi / [\\varphi + (1-\\varphi) X]^2} "
},
{
"math_id": 209,
"text": " X = \\frac {\\varphi Y}{ 1-(1 - \\varphi)Y} "
},
{
"math_id": 210,
"text": " f_Y(Y, \\varphi) = \\frac{\\varphi } { [1 - (1 - \\varphi)Y]^2} \\beta \\left (\\frac {\\varphi Y}{ 1 - (1-\\varphi) Y}, \\alpha_1, \\alpha_2 \\right), \\;\\;\\; 0 \\le Y \\le 1 "
},
{
"math_id": 211,
"text": " U \\sim \\Gamma(\\alpha_1,\\theta_1) "
},
{
"math_id": 212,
"text": " V \\sim \\Gamma(\\alpha_2,\\theta_2) "
},
{
"math_id": 213,
"text": " Y = \\frac {U} { U + V} "
},
{
"math_id": 214,
"text": " f_Y(Y, \\varphi) "
},
{
"math_id": 215,
"text": " \\varphi = \\frac {\\theta_2}{\\theta_1} "
},
{
"math_id": 216,
"text": " Y \\sim f_Y(Y,\\varphi) "
},
{
"math_id": 217,
"text": " \\Theta Y \\sim f_Y( Y,\\varphi, \\Theta) "
},
{
"math_id": 218,
"text": " f_Y( Y,\\varphi, \\Theta) = \\frac{\\varphi / \\Theta } { [1 - (1 - \\varphi)Y / \\Theta]^2} \\beta \\left (\\frac {\\varphi Y / \\Theta}{ 1 - (1-\\varphi) Y / \\Theta}, \\alpha_1, \\alpha_2 \\right), \\;\\;\\; 0 \\le Y \\le \\Theta "
},
{
"math_id": 219,
"text": " \\Theta Y "
},
{
"math_id": 220,
"text": " \\frac {\\Theta U} { U + \\varphi V} "
},
{
"math_id": 221,
"text": "X \\sim \\beta (\\alpha,\\beta)"
},
{
"math_id": 222,
"text": " \\mathbf x = \\frac{X}{1-X} \\sim \\beta'(\\alpha,\\beta) "
},
{
"math_id": 223,
"text": " \\mathbf Y \\sim \\beta' (\\alpha,\\beta)"
},
{
"math_id": 224,
"text": " y = \\frac{1}{\\mathbf Y} \\sim \\beta'(\\beta,\\alpha) "
},
{
"math_id": 225,
"text": " \\mathbf x = \\frac{1}{X} -1 \\sim \\beta'(\\beta,\\alpha) "
},
{
"math_id": 226,
"text": " y = \\frac{\\mathbf Y}{1 + \\mathbf Y} \\sim \\beta(\\alpha,\\beta) "
},
{
"math_id": 227,
"text": " \\frac{1}{1 + \\mathbf Y}\n = \\frac{ \\mathbf Y ^ {-1}}{\\mathbf Y^{-1} + 1} \\sim \\beta(\\beta,\\alpha) "
},
{
"math_id": 228,
"text": " 1+\\mathbf Y \\sim \\{ \\; \\beta(\\beta,\\alpha) \\; \\} ^{-1} "
},
{
"math_id": 229,
"text": " \\beta(\\beta,\\alpha) "
},
{
"math_id": 230,
"text": " U \\sim \\Gamma ( \\alpha , 1), V \\sim \\Gamma(\\beta, 1) "
},
{
"math_id": 231,
"text": " \\frac{U}{V} \\sim \\beta' ( \\alpha, \\beta )"
},
{
"math_id": 232,
"text": "\\frac{U / V}{1+U / V} = \\frac{U}{V + U } \\sim \\beta(\\alpha,\\beta) "
},
{
"math_id": 233,
"text": " X, \\; Y "
},
{
"math_id": 234,
"text": "X \\sim \\text{Binomial}(n,p_1)"
},
{
"math_id": 235,
"text": "Y \\sim \\text{Binomial}(m,p_2)"
},
{
"math_id": 236,
"text": "T=\\frac{X/n}{Y/m}"
},
{
"math_id": 237,
"text": "\\log(T)"
},
{
"math_id": 238,
"text": "\\log(p_1/p_2)"
},
{
"math_id": 239,
"text": "\\frac{(1/p_1)-1}{n}+\\frac{(1/p_2)-1}{m}"
},
{
"math_id": 240,
"text": " e^{-\\lambda} "
},
{
"math_id": 241,
"text": " \\tilde p_x(x;\\lambda)= \\frac {1}{1-e^{-\\lambda} } \n{ \\frac{e^{-\\lambda} \\lambda^{x}}{x!} }, \\;\\;\\; x \\in 1,2,3, \\cdots\n "
},
{
"math_id": 242,
"text": " \\tilde p(x_1, x_2, \\dots ,x_n;\\lambda)= \\frac{1}{(1-e^{-\\lambda})^{n} } \n\\prod_{i=1}^n{ \\frac{e^{-\\lambda} \\lambda^{x_i}}{x_i!} }, \\;\\;\\; x_i \\in 1,2,3, \\cdots\n "
},
{
"math_id": 243,
"text": " L = \\ln (\\tilde p) =-n\\ln (1-e^{-\\lambda}) \n-n \\lambda + \\ln(\\lambda) \\sum_1^n x_i - \\ln \\prod_1^n (x_i!), \\;\\;\\; x_i \\in 1,2,3, \\cdots\n "
},
{
"math_id": 244,
"text": " dL/d\\lambda = \\frac{-n}{ 1-e^{-\\lambda}} + \\frac{1}{\\lambda}\\sum_{i=1}^n x_i "
},
{
"math_id": 245,
"text": " \\hat \\lambda_{ML} "
},
{
"math_id": 246,
"text": " \\frac{\\hat \\lambda_{ML}}{ 1-e^{-\\hat \\lambda_{ML} }} = \\frac{1}{n} \\sum_{i=1}^n x_i = \\bar x "
},
{
"math_id": 247,
"text": " \\hat \\lambda \\to 0 "
},
{
"math_id": 248,
"text": " \\bar x \\to 1 "
},
{
"math_id": 249,
"text": " \\lambda "
},
{
"math_id": 250,
"text": " \\bar x "
},
{
"math_id": 251,
"text": " \\bar x = \\frac{1}{n} \\sum_{i=1}^n x_i "
},
{
"math_id": 252,
"text": " 0 \\le \\lambda \\le \\infty; \\; 1 \\le \\bar x \\le \\infty "
},
{
"math_id": 253,
"text": " \\hat \\lambda = \\bar x - e^{-( \\bar x -1) } - 0.07(\\bar x -1)e^{-0.666(\\bar x-1)} + \\epsilon, \\;\\;\\;|\\epsilon | < 0.006 "
},
{
"math_id": 254,
"text": " \\hat \\lambda = \\bar x "
},
{
"math_id": 255,
"text": " R = \\hat \\lambda_X / \\hat \\lambda_Y "
},
{
"math_id": 256,
"text": " \\hat \\lambda_X "
},
{
"math_id": 257,
"text": " \\hat \\lambda_Y "
},
{
"math_id": 258,
"text": " n\\lambda \\text{ variance of }\\hat \\lambda "
},
{
"math_id": 259,
"text": " \\mathbb{Var} ( \\hat \\lambda ) \\ge - \\left( \\mathbb{E}\\left[ \\frac{\\delta ^2 L }{ \\delta \\lambda^2 } \\right]_{\\lambda=\\hat \\lambda} \\right) ^{-1} "
},
{
"math_id": 260,
"text": " \\frac{\\delta ^2 L }{ \\delta \\lambda^2 } = \n-n \\left[ \\frac{ \\bar x}{\\lambda ^2 } - \\frac{e^{-\\lambda}}{(1-e^{-\\lambda})^2 } \\right] "
},
{
"math_id": 261,
"text": " \\mathbb{Var} ( \\hat \\lambda ) \\ge \\frac{ \\hat\\lambda}{n} \\frac { (1-e^{-\\hat\\lambda})^2 }{ 1 - (\\hat\\lambda + 1) e^{-\\hat\\lambda} } "
},
{
"math_id": 262,
"text": " \\mathbb {Var} ( \\lambda) = \\frac {\\lambda / n}{1-e^{-\\lambda}} \\left [ 1 - \\frac{\\lambda e^{-\\lambda} }{1-e^{-\\lambda}}\\right] "
},
{
"math_id": 263,
"text": " \\mathbb {Var} ( \\hat \\lambda) / \\mathbb {Var} ( \\lambda) "
},
{
"math_id": 264,
"text": " \\lambda "
},
{
"math_id": 265,
"text": " f( x ) = \\frac{ 1 }{ 2 ( 1 + |z| )^2 } "
},
{
"math_id": 266,
"text": " E( Z^a ) = \\frac{ \\Gamma( 1 + a ) }{ \\Gamma( 1 - a ) } "
},
{
"math_id": 267,
"text": "\\varphi = |\\mathbf{X}|/|\\mathbf{Y}|"
},
{
"math_id": 268,
"text": "\\Lambda = {|\\mathbf{X}|/|\\mathbf{X}+\\mathbf{Y}|} "
},
{
"math_id": 269,
"text": " S \\sim W_p(\\Sigma, \\nu + 1)"
},
{
"math_id": 270,
"text": " V "
},
{
"math_id": 271,
"text": " \\frac{ V^T S V}{ V^T \\Sigma V } \\sim \\chi^2_{\\nu } "
},
{
"math_id": 272,
"text": " \\frac{ V^T \\Sigma^{-1} V}{ V^T S^{-1} V } \\sim \\chi^2_{\\nu-p+1} "
}
] | https://en.wikipedia.org/wiki?curid=9519371 |
9519906 | Valve RF amplifier | Device for electrically amplifying the power of an electrical radio frequency signal
A valve RF amplifier (UK and Aus.) or tube amplifier (U.S.) is a device for electrically amplifying the power of an electrical radio frequency signal.
Low to medium power valve amplifiers for frequencies below the microwave range were largely replaced by solid state amplifiers during the 1960s and 1970s, initially in receivers and the low power stages of transmitters, with transmitter output stages switching to transistors somewhat later. Specially constructed valves are still in use for very high power transmitters, although rarely in new designs.
Valve characteristics.
Valves are high voltage, low current devices in comparison with transistors. Tetrode and pentode valves have very flat anode current vs. anode voltage characteristics, indicating high anode output impedances. Triodes show a stronger relationship between anode voltage and anode current.
The high working voltage makes them well suited for radio transmitters and valves remain in use today for very high power short wave radio transmitters, where solid state techniques would require many devices in parallel, and very high DC supply currents. High power solid state transmitters also require a complex combination of transformers and tuning networks, whereas a valve-based transmitter would use a single, relatively simple tuned network.
Thus while solid state high power short wave transmitters are technically possible, economic considerations still favor valves above 3 MHz and 10,000 watts.
Radio amateurs also use valve amplifiers in the 500–1500 watt range mainly for economic reasons.
Audio vs. RF amplifiers.
Valve audio amplifiers typically amplify the entire audio range between 20 Hz and 20 kHz or higher. They use an iron core transformer to provide a suitable high impedance load to the valve(s) while driving a speaker, which is typically 8 Ohms. Audio amplifiers normally use a single valve in class A, or a pair in class B or class AB.
An RF power amplifier is tuned to a single frequency as low as 18 kHz and as high as the UHF range of frequencies, for the purpose of radio transmission or industrial heating. They use a narrow tuned circuit to provide the valve with a suitably high load impedance and feed a load that is typically 50 or 75 Ohms. RF amplifiers normally operate class C or class AB.
Although the frequency ranges for audio amplifiers and RF amplifiers overlap, the class of operation, method of output coupling and percent operational bandwidth will differ. Power valves are capable of high frequency response, up to at least 30 MHz. Indeed, many of the Directly Heated Single Ended Triode (DH-SET) audio amplifiers use radio transmitting valves originally designed to operate as RF amplifiers in the high frequency range.
Distortion.
The most efficient valve-based RF amplifiers operate class C. If used with no tuned circuit in the output, this would distort the input signal, producing harmonics. However, class C amplifiers normally use a high Q output network which removes the harmonics, leaving an undistorted sine wave identical to the input waveform. Class C is suitable only for amplifying signals with a constant amplitude, such as FM, FSK, and some CW (Morse code) signals. Where the amplitude of the input signal to the amplifier varies as with single-sideband modulation, amplitude modulation, video and complex digital signals, the amplifier must operate class A or AB, to preserve the envelope of the driving signal in an undistorted form. Such amplifiers are referred to as linear amplifiers.
It is also common to modify the gain of an amplifier operating class C so as to produce amplitude modulation. If done in a linear manner, this modulated amplifier is capable of low distortion. The output signal can be viewed as a product of the input RF signal and the modulating signal.
The development of FM broadcasting improved fidelity by using a greater bandwidth which was available in the VHF range, and where atmospheric noise was absent. FM also has an inherent ability to reject noise, which is mostly amplitude modulated. Valve technology suffers high-frequency limitations due to cathode-anode transit time. However, tetrodes are successfully used into the VHF range and triodes into the low GHz range. Modern FM broadcast transmitters use both valve and solid state devices, with valves tending to be more used at the highest power levels. FM transmitters operate class C with very low distortion.
Today's digital radio that carries coded data over various phase modulations (such as GMSK, QPSK, etc.) and also the increasing demand for spectrum have forced a dramatic change in the way radio is used, e.g. the cellular radio concept. Today's cellular radio and digital broadcast standards are extremely demanding in terms of the spectral envelope and out of band emissions that are acceptable (in the case of GSM for example, −70 dB or better just a few hundred kilohertz from center frequency). Digital transmitters must therefore operate in the linear modes, with much attention given to achieving low distortion.
Applications.
Historic transmitters and receivers.
Valve stages were used to amplify the received radio frequency signals, the intermediate frequencies, the video signal and the audio signals at the various points in the receiver. Historically (pre-WWII) "transmitting tubes" were among the most powerful tubes available; they were usually directly heated by thoriated filaments that glowed like light bulbs. Some tubes were built to be very rugged, capable of being driven so hard that the anode would itself glow cherry red, the anodes being machined from solid material (rather than fabricated from thin sheet) to be able to withstand this without distorting when heated. Notable tubes of this type are the 845 and 211. Later beam power tubes such as the 807 and (directly heated) 813 were also used in large numbers in (especially military) radio transmitters.
Bandwidth of valve vs solid state amplifiers.
Today, radio transmitters are overwhelmingly solid state, even at microwave frequencies (cellular radio base stations). Depending on the application, however, a fair number of radio frequency amplifiers continue to have valve construction because of their simplicity, whereas it takes several output transistors with complex splitting and combining circuits to equal the output power of a single valve.
Valve amplifier circuits are significantly different from broadband solid state circuits. Solid state devices have a very low output impedance which allows matching via a broadband transformer covering a large range of frequencies, for example 1.8 to 30 MHz. With either class C or AB operation, these must include low pass filters to remove harmonics. While the proper low pass filter must be switch selected for the frequency range of interest, the result is considered to be a "no tune" design. Valve amplifiers have a tuned network that serves as both the low pass harmonic filter and impedance matching to the output load. In either case, both solid state and valve devices need such filtering networks before the RF signal is output to the load.
Radio circuits.
Unlike audio amplifiers, in which the analog output signal is of the same form and frequency as the input signal, RF circuits may modulate low frequency information (audio, video, or data) onto a carrier (at a much higher frequency), and the circuitry comprises several distinct stages. For example, a radio transmitter may contain:
Transmitter anode circuits.
The most common anode circuit is a tuned LC circuit where the anodes are connected at a voltage node. This circuit is often known as the anode tank circuit.
Active (or tuned grid) amplifier.
An example of this used at VHF/UHF include the 4CX250B, an example of a twin tetrode is the QQV06/40A.
Neutralization is a term used in TGTP (tuned grid tuned plate) amplifiers for the methods and circuits used for stabilization against unwanted oscillations at the operating frequency caused by the inadvertent introduction of some of the output signal back into the input circuits. This mainly occurs via the grid to plate capacity, but can also come via other paths, making circuit layout important. To cancel the unwanted feedback signal, a portion of the output signal is deliberately introduced into the input circuit with the same amplitude but opposite phase.
When using a tuned circuit in the input, the network must match the driving source to the input impedance of the grid. This impedance will be determined by the grid current in Class C or AB2 operation. In AB1 operation, the grid circuit should be designed to avoid excessive step-up voltage, which, although it might provide more stage gain as in audio designs, will increase instability and make neutralization more critical.
In common with all three basic designs shown here, the anode of the valve is connected to a resonant LC circuit which has another inductive link which allows the RF signal to be passed to the output.
The circuit shown has been largely replaced by a Pi network which allows simpler adjustment and adds low pass filtering.
Operation.
The anode current is controlled by the electrical potential (voltage) of the first grid. A DC bias is applied to the valve to ensure that the part of the transfer equation which is most suitable to the required application is used. The input signal is able to perturb (change) the potential of the grid, this in turn will change the anode current (also known as the plate current).
In the RF designs shown on this page, a tuned circuit is between the anode and the high voltage supply. This tuned circuit is brought to resonance presenting an inductive load that is well matched to the valve and thus results in an efficient power transfer.
As the current flowing through the anode connection is controlled by the grid, then the current flowing through the load is also controlled by the grid.
One of the disadvantages of a tuned grid compared to other RF designs is that neutralization is required.
Passive grid amplifier.
A passive grid circuit used at VHF/UHF frequencies might use the 4CX250B tetrode. An example of a twin tetrode would be the QQV06/40A. The tetrode has a screen grid which is between the anode and the first grid, which, being grounded for RF, acts as a shield to reduce the effective capacitance between the first grid and the anode. The combination of the effects of the screen grid and the grid damping resistor often allow the use of this design without neutralization. The screen found in tetrodes and pentodes greatly increases the valve's gain by reducing the effect of anode voltage on anode current.
The input signal is applied to the valve's first grid via a capacitor. The value of the grid resistor determines the gain of the amplifier stage. The higher the resistor the greater the gain, the lower the damping effect and the greater the risk of instability. With this type of stage good layout is less vital.
Grounded grid amplifier.
This design normally uses a triode so valves such as the 4CX250B are not suitable for this circuit, unless the screen and control grids are joined, effectively converting the tetrode into a triode. This circuit design has been used at 1296 MHz using disk seal triode valves such as the 2C39A.
The grid is grounded and the drive is applied to the cathode through a capacitor. The heater supply must be isolated from the cathode as unlike the other designs the cathode is not connected to RF ground. Some valves, such as the 811A, are designed for "zero bias" operation and the cathode can be at ground potential for DC. Valves that require a negative grid bias can be used by putting a positive DC voltage on the cathode. This can be achieved by putting a zener diode between the cathode and ground or using a separate bias supply.
Neutralization.
The valve interelectrode capacitance which exists between the input and output of the amplifier and other stray coupling may allow enough energy to feed back into the input so as to cause self-oscillation in an amplifier stage. For the higher gain designs this effect must be counteracted. Various methods exist for introducing an out-of-phase signal from the output back to the input so that the effect is cancelled. Even when the feedback is not sufficient to cause oscillation it can produce other effects, such as difficult tuning. Therefore, neutralization can be helpful, even for an amplifier that does not oscillate. Many grounded grid amplifiers use no neutralization, but at 30 MHz adding it can smooth out the tuning.
An important part of the neutralization of a tetrode or pentode is the design of the screen grid circuit. To provide the greatest shielding effect, the screen must be well-grounded at the frequency of operation. Many valves will have a "self-neutralizing" frequency somewhere in the VHF range. This results from a series resonance consisting of the screen capacity and the inductance of the screen lead, thus providing a very low impedance path to ground.
UHF.
Transit time effects are important at these frequencies, so feedback is not normally usable; for performance-critical applications, alternative linearisation techniques such as degeneration and feedforward have to be used.
Tube noise and noise figure.
Noise figure is not usually an issue for power amplifier valves; however, in receivers using valves it can be important. While such uses are obsolete, this information is included for historical interest.
Like any amplifying device, valves add noise to the signal to be amplified. Even with a hypothetical perfect amplifier, however, noise is unavoidably present due to thermal fluctuations in the signal source (usually assumed to be at room temperature, "T" = 295 K). Such fluctuations cause an electrical noise power of formula_0, where "k"B is the Boltzmann constant and "B" the bandwidth. Correspondingly, the voltage noise of a resistance "R" into an open circuit is formula_1 and the current noise into a short circuit is formula_2.
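As a rough numerical illustration of these formulas (not part of the original article; the 295 K temperature and the 25 Hz to 10 kHz bandwidth are assumptions chosen to match the EF86 example below), the thermal noise power and open-circuit noise voltage can be computed directly in Python:

import math

k_B = 1.380649e-23                 # Boltzmann constant, J/K

def noise_power(T, B):
    # available thermal noise power k_B * T * B, in watts
    return k_B * T * B

def noise_voltage(R, T, B):
    # open-circuit thermal noise voltage (4 k_B T B R)^(1/2), in volts
    return math.sqrt(4 * k_B * T * B * R)

T = 295.0                          # assumed source temperature, K
B = 10e3 - 25                      # assumed bandwidth, 25 Hz to 10 kHz
print(noise_power(T, B))           # ~4.1e-17 W
print(noise_voltage(25e3, T, B))   # ~2.0e-6 V, i.e. about 2 microvolts for a 25 kOhm resistance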
The noise figure is defined as the ratio of the noise power at the output of the amplifier relative to the noise power that would be present at the output if the amplifier were noiseless (due to amplification of thermal noise of the signal source). An equivalent definition is: noise figure is the factor by which insertion of the amplifier degrades the signal to noise ratio. It is often expressed in decibels (dB). An amplifier with a 0 dB noise figure would be perfect.
The noise properties of tubes at audio frequencies can be modeled well by a perfect noiseless tube having a source of voltage noise in series with the grid. For the EF86 tube, for example, this voltage noise is specified (see e.g., the Valvo, Telefunken or Philips data sheets) as 2 microvolts integrated over a frequency range of approximately 25 Hz to 10 kHz. (This refers to the integrated noise, see below for the frequency dependence of the noise spectral density.) This equals the voltage noise of a 25 kΩ resistor. Thus, if the signal source has an impedance of 25 kΩ or more, the noise of the tube is actually smaller than the noise of the source. For a source of 25 kΩ, the noise generated by tube and source are the same, so the total noise power at the output of the amplifier is twice the noise power at the output of the perfect amplifier. The noise figure is then two, or 3 dB. For higher impedances, such as 250 kΩ, the EF86's voltage noise is formula_3 lower than the source's own noise. It therefore adds 1/10 of the noise power caused by the source, and the noise figure is 0.4 dB. For a low-impedance source of 250 Ω, on the other hand, the noise voltage contribution of the tube is 10 times larger than the signal source, so that the noise power is one hundred times larger than that caused by the source. The noise figure in this case is 20 dB.
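A minimal sketch (not from the original article) of the noise-figure arithmetic in the preceding paragraph, treating the tube as a noiseless amplifier preceded by a 2 µV equivalent input noise source over the same assumed band:

import math

k_B, T, B = 1.380649e-23, 295.0, 10e3 - 25   # assumed values, as above
v_tube = 2e-6                                # EF86 equivalent input noise, volts

def noise_figure_dB(R_source):
    v_source = math.sqrt(4 * k_B * T * B * R_source)   # source thermal noise voltage
    F = 1 + (v_tube / v_source) ** 2                   # total noise power / source-only noise power
    return 10 * math.log10(F)

for R in (25e3, 250e3, 250.0):
    print(R, round(noise_figure_dB(R), 2))   # approximately 3.0 dB, 0.4 dB and 20 dB respectively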
To obtain low noise figure the impedance of the source can be increased by a transformer. This is eventually limited by the input capacity of the tube, which sets a limit on how high the signal impedance can be made if a certain bandwidth is desired.
The noise voltage density of a given tube is a function of frequency. At frequencies above 10 kHz or so, it is basically constant ("white noise"). White noise is often expressed by an equivalent noise resistance, which is defined as the resistance which produces the same voltage noise as present at the tube input. For triodes, it is approximately (2-4)/"g"m, where "g"m is the transconductance. For pentodes, it is higher, about (5-7)/"g"m. Tubes with high "g"m thus tend to have lower noise at high frequencies. For example, it is 300 Ω for one half of the ECC88, 250 Ω for an E188CC (both have "g"m = 12.5 mA/V) and as low as 65 Ω for a triode-connected D3a ("g"m = 40 mA/V).
In the audio frequency range (below 1–100 kHz), "1/"f"" noise becomes dominant, which rises like 1/"f". (This is the reason for the relatively high noise resistance of the EF86 in the above example.) Thus, tubes with low noise at high frequency do not necessarily have low noise in the audio frequency range. For special low-noise audio tubes, the frequency at which 1/"f" noise takes over is reduced as far as possible, maybe to approximately a kilohertz. It can be reduced by choosing very pure materials for the cathode nickel, and running the tube at an optimized (generally low) anode current.
At radio frequencies, things are more complicated: (i) The input impedance of a tube has a real component that goes down like 1/"f"² (due to cathode lead inductance and transit time effects). This means the input impedance can no longer be increased arbitrarily in order to reduce the noise figure. (ii) This input resistance has its own thermal noise, just like any resistor. (The "temperature" of this resistor for noise purposes is closer to the cathode temperature than to room temperature.) Thus, the noise figure of tube amplifiers increases with frequency. At 200 MHz, a noise figure of 2.5 (or 4 dB) can be reached with the ECC2000 tube in an optimized "cascode" circuit with an optimized source impedance. At 800 MHz, tubes like the EC8010 have noise figures of about 10 dB or more. Planar triodes are better, but very early on, transistors reached noise figures substantially lower than tubes at UHF. Thus, the tuners of television sets were among the first parts of consumer electronics where transistors were used.
Decline.
Semiconductor amplifiers have overwhelmingly displaced valve amplifiers for low- and medium-power applications at all frequencies.
Valves continue to be used in some high-power, high-frequency amplifiers used for short wave broadcasting, VHF and UHF TV and (VHF) FM radio, also in existing "radar, countermeasures equipment, or communications equipment" using specially designed valves, such as the klystron, gyrotron, traveling-wave tube, and crossed-field amplifier; however, new designs for such products are now invariably semiconductor-based.
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k_B T B"
},
{
"math_id": 1,
"text": "4*k_B*T*B*R)^{1/2}"
},
{
"math_id": 2,
"text": "4*k_B*T*B/R)^{1/2}"
},
{
"math_id": 3,
"text": "1/10^{1/2}"
}
] | https://en.wikipedia.org/wiki?curid=9519906 |
9520954 | Algebraic cycle | In mathematics, an algebraic cycle on an algebraic variety "V" is a formal linear combination of subvarieties of "V". These are the part of the algebraic topology of "V" that is directly accessible by algebraic methods. Understanding the algebraic cycles on a variety can give profound insights into the structure of the variety.
The most trivial case is codimension zero cycles, which are linear combinations of the irreducible components of the variety. The first non-trivial case is of codimension one subvarieties, called divisors. The earliest work on algebraic cycles focused on the case of divisors, particularly divisors on algebraic curves. Divisors on algebraic curves are formal linear combinations of points on the curve. Classical work on algebraic curves related these to intrinsic data, such as the regular differentials on a compact Riemann surface, and to extrinsic properties, such as embeddings of the curve into projective space.
While divisors on higher-dimensional varieties continue to play an important role in determining the structure of the variety, on varieties of dimension two or more there are also higher codimension cycles to consider. The behavior of these cycles is strikingly different from that of divisors. For example, every curve has a constant "N" such that every divisor of degree zero is linearly equivalent to a difference of two effective divisors of degree at most "N". David Mumford proved that, on a smooth complete complex algebraic surface "S" with positive geometric genus, the analogous statement for the group formula_0 of rational equivalence classes of codimension two cycles in "S" is false. The hypothesis that the geometric genus is positive essentially means (by the Lefschetz theorem on (1,1)-classes) that the cohomology group formula_1 contains transcendental information, and in effect Mumford's theorem implies that, despite formula_0 having a purely algebraic definition, it shares transcendental information with formula_1. Mumford's theorem has since been greatly generalized.
The behavior of algebraic cycles ranks among the most important open questions in modern mathematics. The Hodge conjecture, one of the Clay Mathematics Institute's Millennium Prize Problems, predicts that the topology of a complex algebraic variety forces the existence of certain algebraic cycles. The Tate conjecture makes a similar prediction for étale cohomology. Alexander Grothendieck's standard conjectures on algebraic cycles yield enough cycles to construct his category of motives and would imply that algebraic cycles play a vital role in any cohomology theory of algebraic varieties. Conversely, Alexander Beilinson proved that the existence of a category of motives implies the standard conjectures. Additionally, cycles are connected to algebraic "K"-theory by Bloch's formula, which expresses groups of cycles modulo rational equivalence as the cohomology of "K"-theory sheaves.
Definition.
Let "X" be a scheme which is finite type over a field "k". An algebraic "r"-cycle on "X" is a formal linear combination
formula_2
of "r"-dimensional closed integral "k"-subschemes of "X". The coefficient "n""i" is the "multiplicity" of "V""i". The set of all "r"-cycles is the free abelian group
formula_3
where the sum is over closed integral subschemes "V" of "X". The groups of cycles for varying "r" together form a group
formula_4
This is called the group of algebraic cycles, and any element is called an algebraic cycle. A cycle is effective or positive if all its coefficients are non-negative.
Closed integral subschemes of "X" are in one-to-one correspondence with the scheme-theoretic points of "X" under the map that, in one direction, takes each subscheme to its generic point, and in the other direction, takes each point to the unique reduced subscheme supported on the closure of the point. Consequently formula_5 can also be described as the free abelian group on the points of "X".
A cycle formula_6 is rationally equivalent to zero, written formula_7, if there are a finite number of formula_8-dimensional subvarieties formula_9 of formula_10 and non-zero rational functions formula_11 such that formula_12, where formula_13 denotes the divisor of a rational function on "W""i". The cycles rationally equivalent to zero are a subgroup formula_14, and the group of "r"-cycles modulo rational equivalence is the quotient
formula_15
This group is also denoted formula_16. Elements of the group
formula_17
are called cycle classes on "X". Cycle classes are said to be effective or positive if they can be represented by an effective cycle.
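A standard worked example (not drawn from this article) may help fix the definition of rational equivalence; in LaTeX notation, on the projective line over a field "k", with affine coordinate t:
\[
X = \mathbb{P}^1_k, \qquad t \in k(X)^\times, \qquad \operatorname{div}_X(t) = [0] - [\infty] \in Z_0(X)_{\text{rat}},
\]
\[
\text{so } [0] \sim [\infty], \quad\text{and the degree map induces an isomorphism}\quad \deg : A_0(\mathbb{P}^1_k) \xrightarrow{\ \sim\ } \mathbf{Z}.
\]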
If "X" is smooth, projective, and of pure dimension "N", the above groups are sometimes reindexed cohomologically as
formula_18
and
formula_19
In this case, formula_20 is called the Chow ring of "X" because it has a multiplication operation given by the intersection product.
There are several variants of the above definition. We may substitute another ring for integers as our coefficient ring. The case of rational coefficients is widely used. Working with families of cycles over a base, or using cycles in arithmetic situations, requires a relative setup. Let formula_21, where "S" is a regular Noetherian scheme. An "r"-cycle is a formal sum of closed integral subschemes of "X" whose relative dimension is "r"; here the relative dimension of formula_22 is the transcendence degree of formula_23 over formula_24 minus the codimension of formula_25 in "S".
Rational equivalence can also be replaced by several other coarser equivalence relations on algebraic cycles. Other equivalence relations of interest include "algebraic equivalence", "homological equivalence" for a fixed cohomology theory (such as singular cohomology or étale cohomology), "numerical equivalence", as well as all of the above modulo torsion. These equivalence relations have (partially conjectural) applications to the theory of motives.
Flat pullback and proper pushforward.
There is a covariant and a contravariant functoriality of the group of algebraic cycles. Let "f" : "X" → "X'" be a map of varieties.
If "f" is flat of some constant relative dimension (i.e. all fibers have the same dimension), we can define for any subvariety "Y'" ⊂ "X'":
formula_26
which by assumption has the same codimension as "Y′".
Conversely, if "f" is proper, for "Y" a subvariety of "X" the pushforward is defined to be
formula_27
where "n" is the degree of the extension of function fields ["k"("Y") : "k"("f"("Y"))] if the restriction of "f" to "Y" is finite and 0 otherwise.
By linearity, these definitions extend to homomorphisms of abelian groups
formula_28
(the latter by virtue of the convention above that the pushforward is zero when the restriction of "f" to "Y" is not finite). See Chow ring for a discussion of the functoriality related to the ring structure.
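As an illustration of both operations (a standard example, not taken from this article; the base field is assumed algebraically closed of characteristic different from 2), consider the squaring map on the projective line:
\[
f : \mathbb{P}^1 \to \mathbb{P}^1, \qquad z \mapsto z^2 ,
\]
which is proper and flat of relative dimension 0. For the pushforward, the function field extension has degree $[k(z) : k(z^2)] = 2$, so
\[
f_*[\mathbb{P}^1] = 2\,[\mathbb{P}^1], \qquad f_*[p] = [f(p)] \ \text{for a closed point } p,
\]
while for the flat pullback of points,
\[
f^*[\{1\}] = [\{1\}] + [\{-1\}], \qquad f^*[\{0\}] = 2\,[\{0\}],
\]
the last equality because the scheme-theoretic fibre over 0 is the length-2 subscheme defined by $z^2 = 0$.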
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\operatorname{CH}^2(S)"
},
{
"math_id": 1,
"text": "H^2(S)"
},
{
"math_id": 2,
"text": "\\sum n_i [V_i]"
},
{
"math_id": 3,
"text": "Z_r X = \\bigoplus_{V \\subseteq X} \\mathbf{Z} \\cdot [V],"
},
{
"math_id": 4,
"text": "Z_* X = \\bigoplus_r Z_r X."
},
{
"math_id": 5,
"text": "Z_* X"
},
{
"math_id": 6,
"text": "\\alpha"
},
{
"math_id": 7,
"text": "\\alpha \\sim 0"
},
{
"math_id": 8,
"text": "(r + 1)"
},
{
"math_id": 9,
"text": "W_i"
},
{
"math_id": 10,
"text": "X"
},
{
"math_id": 11,
"text": "r_i \\in k(W_i)^\\times"
},
{
"math_id": 12,
"text": "\\alpha = \\sum [\\operatorname{div}_{W_i}(r_i)]"
},
{
"math_id": 13,
"text": "\\operatorname{div}_{W_i}"
},
{
"math_id": 14,
"text": "Z_r(X)_{\\text{rat}} \\subseteq Z_r(X)"
},
{
"math_id": 15,
"text": "A_r(X) = Z_r(X) / Z_r(X)_{\\text{rat}}."
},
{
"math_id": 16,
"text": "\\operatorname{CH}_r(X)"
},
{
"math_id": 17,
"text": "A_*(X) = \\bigoplus_r A_r(X)"
},
{
"math_id": 18,
"text": "Z^{N - r} X = Z_r X"
},
{
"math_id": 19,
"text": "A^{N - r} X = A_r X."
},
{
"math_id": 20,
"text": "A^* X"
},
{
"math_id": 21,
"text": "\\phi \\colon X \\to S"
},
{
"math_id": 22,
"text": "Y \\subseteq X"
},
{
"math_id": 23,
"text": "k(Y)"
},
{
"math_id": 24,
"text": "k(\\overline{\\phi(Y)})"
},
{
"math_id": 25,
"text": "\\overline{\\phi(Y)}"
},
{
"math_id": 26,
"text": "f^*([Y']) = [f^{-1}(Y')]\\,\\!"
},
{
"math_id": 27,
"text": "f_*([Y]) = n [f(Y)]\\,\\!"
},
{
"math_id": 28,
"text": "f^* \\colon Z^k(X') \\to Z^k(X) \\quad\\text{and}\\quad f_* \\colon Z_k(X) \\to Z_k(X') \\,\\!"
}
] | https://en.wikipedia.org/wiki?curid=9520954 |
9522381 | Fundamental diagram of traffic flow | Type of diagram
The fundamental diagram of traffic flow is a diagram that gives a relation between road traffic flux (vehicles/hour) and the traffic density (vehicles/km). A macroscopic traffic model involving traffic flux, traffic density and velocity forms the basis of the fundamental diagram. It can be used to predict the capability of a road system, or its behaviour when applying inflow regulation or speed limits.
Basic statements.
The primary tool for graphically displaying information in the study of traffic flow is the fundamental diagram. Fundamental diagrams consist of three different two-dimensional graphs: flow-density, speed-flow, and speed-density. All the graphs are related by the equation “flow = speed * density”, the essential equation of traffic flow. The fundamental diagrams were derived by plotting field data points and fitting best-fit curves to them. With the fundamental diagrams, researchers can explore the relationship between speed, flow, and density of traffic.
Speed-density.
The speed-density relationship is linear with a negative slope; therefore, as the density increases the speed of the roadway decreases. The line crosses the speed (y) axis at the free flow speed and the density (x) axis at the jam density. The speed approaches the free flow speed as the density approaches zero, and it reaches approximately zero when the density equals the jam density.
Flow-density.
In the study of traffic flow theory, the flow-density diagram is used to determine the traffic state of a roadway. Currently, there are two types of flow-density graphs: parabolic and triangular. Academia views the triangular flow-density curve as the more accurate representation of real-world conditions. The triangular curve consists of two vectors. The first vector is the freeflow side of the curve, created by placing the freeflow velocity vector of a roadway at the origin of the flow-density graph. The second vector is the congested branch, created by placing the vector of the shock wave speed at zero flow and jam density. The congested branch has a negative slope, which implies that the higher the density on the congested branch, the lower the flow; therefore, even though there are more cars on the road, the number of cars passing a single point is less than if there were fewer cars on the road. The intersection of the freeflow and congested vectors is the apex of the curve and is considered the capacity of the roadway, which is the traffic condition at which the maximum number of vehicles can pass by a point in a given time period. The flow and density at which this point occurs are the optimum flow and optimum density, respectively. The flow-density diagram is used to give the traffic condition of a roadway. With the traffic conditions, time-space diagrams can be created to give travel time, delay, and queue lengths of a road segment.
Speed-flow.
Speed – flow diagrams are used to determine the speed at which the optimum flow occurs. There are currently two shapes of the speed-flow curve. The speed-flow curve also consists of two branches, the free flow and congested branches. The diagram is not a function, allowing the flow variable to exist at two different speeds. The flow variable existing at two different speeds occurs when the speed is higher and the density is lower or when the speed is lower and the density is higher, which allows for the same flow rate. In the first speed-flow diagram, the free flow branch is a horizontal line, which shows that the roadway is at free flow speed until the optimum flow is reached. Once the optimum flow is reached, the diagram switches to the congested branch, which is a parabolic shape. The second speed flow diagram is a parabola. The parabola suggests that the only time there is free flow speed is when the density approaches zero; it also suggests that as the flow increases the speed decreases. This parabolic graph also contains an optimum flow. The optimum flow also divides the free flow and congested branches on the parabolic graph.
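A minimal numerical sketch of these relationships (not part of the original article), using Greenshields' linear speed-density model, which produces the parabolic flow-density and speed-flow shapes described above; the free-flow speed and jam density values are arbitrary assumptions:

v_f = 100.0    # assumed free-flow speed, km/h
k_j = 120.0    # assumed jam density, veh/km

def speed(k):
    # linear speed-density relation: v falls from v_f at k = 0 to 0 at k = k_j
    return v_f * (1 - k / k_j)

def flow(k):
    # fundamental relation: flow = speed * density (a parabola in k)
    return speed(k) * k

k_opt = k_j / 2                      # optimum density, at the apex of the parabola
q_max = flow(k_opt)                  # capacity = v_f * k_j / 4 = 3000 veh/h here
print(k_opt, q_max, speed(k_opt))    # 60.0 3000.0 50.0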
Macroscopic fundamental diagram.
A macroscopic fundamental diagram (MFD) is a type of traffic flow fundamental diagram that relates space-mean flow, density and speed of an entire network with n links, as shown in Figure 1. The MFD thus represents the capacity, formula_0, of the network in terms of vehicle density, with formula_1 being the maximum capacity of the network and formula_2 being the jam density of the network. The maximum capacity or “sweet spot” of the network is the region at the peak of the MFD function.
Flow.
The space-mean flow, formula_3, across all the links of a given network can be expressed by:
formula_4, where B is the area in the time-space diagram shown in Figure 2.
Density.
The space-mean density, formula_5, across all the links of a given network can be expressed by:
formula_6, where A is the area in the time-space diagram shown in Figure 2.
Speed.
The space-mean speed, formula_7, across all the links of a given network can be expressed by:
formula_8, where B is the area in the space-time diagram shown in Figure 2.
Average travel time.
The MFD function can be expressed in terms of the number of vehicles in the network such that:
formula_9 where formula_10 represents the total lane miles of the network.
Let formula_11 be the average distance driven by a user in the network. The average travel time (formula_12) is:
formula_13
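A short sketch (not from the original article) of this travel-time calculation; the parabolic MFD used here is a hypothetical placeholder, since real MFD functions are estimated from network data:

L = 50.0                   # assumed total lane-km of the network
d = 3.0                    # assumed average trip distance, km
v_f, k_j = 40.0, 100.0     # assumed free-flow speed (km/h) and jam density (veh per lane-km)

def MFD(n):
    # hypothetical parabolic MFD: space-mean flow as a function of the accumulation n
    k_bar = n / L
    return v_f * k_bar * (1 - k_bar / k_j)

def travel_time(n):
    # tau = n * d / (MFD(n) * L), i.e. trip distance divided by the space-mean speed
    return n * d / (MFD(n) * L)

print(travel_time(500))    # ~0.083 h (about 5 minutes) with 500 vehicles in the network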
Application of the Macroscopic Fundamental Diagram (MFD).
In 2008, the traffic flow data of the city street network of Yokohama, Japan was collected using 500 fixed sensors and 140 mobile sensors. The study revealed that city sectors with an approximate area of 10 km² can be expected to have well-defined MFD functions. However, the observed MFD does not produce the full MFD function in the congested region of higher densities. Most beneficially, though, the MFD function of a city network was shown to be independent of the traffic demand. Thus, through the continuous collection of traffic flow data, the MFD for urban neighborhoods and cities can be obtained and used for analysis and traffic engineering purposes.
These MFD functions can aid agencies in improving network accessibility and help to reduce congestion by monitoring the number of vehicles in the network. In turn, using congestion pricing, perimeter control, and other various traffic control methods, agencies can maintain optimum network performance at the "sweet spot" peak capacity. Agencies can also use the MFD to estimate average trip times for public information and engineering purposes.
Keyvan-Ekbatani et al. have exploited the notion of MFD to improve mobility in saturated traffic conditions via application of gating measures, based on an appropriate simple feedback control structure. They developed a simple (nonlinear and linearized) control design model, incorporating the operational MFD, which allows for the gating problem to be cast in a proper feedback control design setting. This allows for application and comparison of a variety of linear or nonlinear, feedback or predictive (e.g. Smith predictor, internal model control and other) control design methods from the control engineering arsenal; among them, a simple but efficient PI controller was developed and successfully tested in a fairly realistic microscopic simulation environment.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu(n)"
},
{
"math_id": 1,
"text": "\\mu_1"
},
{
"math_id": 2,
"text": "\\eta"
},
{
"math_id": 3,
"text": "\\bar q"
},
{
"math_id": 4,
"text": " \\bar q = \\frac{\\sum_{k=1}^n d_i(B)}{nTL} "
},
{
"math_id": 5,
"text": "\\bar k"
},
{
"math_id": 6,
"text": " \\bar k = \\frac{\\sum_{k=1}^n t_i(A)}{nTL} "
},
{
"math_id": 7,
"text": "\\bar v"
},
{
"math_id": 8,
"text": "\\bar v = \\frac{\\bar q}{\\bar k}"
},
{
"math_id": 9,
"text": "<Math> n=\\bar k \\sum_{k=1}^n l_i = \\bar k L </Math>"
},
{
"math_id": 10,
"text": "L"
},
{
"math_id": 11,
"text": "d"
},
{
"math_id": 12,
"text": "\\tau"
},
{
"math_id": 13,
"text": "\\tau = \\frac{d}{\\bar v} = \\frac{nd}{MFD(n)L}"
}
] | https://en.wikipedia.org/wiki?curid=9522381 |
9523634 | Kaplansky density theorem | In the theory of von Neumann algebras, the Kaplansky density theorem, due to Irving Kaplansky, is a fundamental approximation theorem. The importance and ubiquity of this technical tool led Gert Pedersen to comment in one of his books that,
"The density theorem is Kaplansky's great gift to mankind. It can be used every day, and twice on Sundays."
Formal statement.
Let "K"− denote the strong-operator closure of a set "K" in "B(H)", the set of bounded operators on the Hilbert space "H", and let ("K")1 denote the intersection of "K" with the unit ball of "B(H)".
Kaplansky density theorem. If formula_0 is a self-adjoint algebra of operators in formula_1, then each element formula_2 in the unit ball of the strong-operator closure of formula_0 is in the strong-operator closure of the unit ball of formula_0. In other words, formula_3. If formula_4 is a self-adjoint operator in formula_5, then formula_4 is in the strong-operator closure of the set of self-adjoint operators in formula_6.
The Kaplansky density theorem can be used to formulate some approximations with respect to the strong operator topology.
1) If "h" is a positive operator in ("A"−)1, then "h" is in the strong-operator closure of the set of self-adjoint operators in ("A"+)1, where "A"+ denotes the set of positive operators in "A".
2) If "A" is a C*-algebra acting on the Hilbert space "H" and "u" is a unitary operator in A−, then "u" is in the strong-operator closure of the set of unitary operators in "A".
In the density theorem and 1) above, the results also hold if one considers a ball of radius "r" > "0", instead of the unit ball.
Proof.
The standard proof uses the fact that a bounded continuous real-valued function "f" is strong-operator continuous. In other words, for a net {"aα"} of self-adjoint operators in "A", the continuous functional calculus "a" → "f"("a") satisfies,
formula_7
in the strong operator topology. This shows that self-adjoint part of the unit ball in "A"− can be approximated strongly by self-adjoint elements in "A". A matrix computation in "M"2("A") considering the self-adjoint operator with entries "0" on the diagonal and "a" and "a"* at the other positions, then removes the self-adjointness restriction and proves the theorem.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B(H)"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "(A)_1^{-} = (A^{-})_1"
},
{
"math_id": 4,
"text": "h"
},
{
"math_id": 5,
"text": "(A^{-})_1"
},
{
"math_id": 6,
"text": "(A)_1"
},
{
"math_id": 7,
"text": "\\lim f(a_{\\alpha}) = f (\\lim a_{\\alpha})"
}
] | https://en.wikipedia.org/wiki?curid=9523634 |
9524572 | M8 (cipher) | Block cipher
In cryptography, M8 is a block cipher designed by Hitachi in 1999. It is a modification of Hitachi's earlier M6 algorithm, designed for greater security and high performance in both hardware and 32-bit software implementations. M8 was registered by Hitachi in March 1999 as ISO/IEC 9979-0020.
Like M6, M8 is a Feistel cipher with a block size of 64 bits. The round function can include 32-bit rotations, XORs, and modular addition, making it an early example of an ARX cipher.
The cipher features a variable number of rounds (any positive integer N), each of which has a structure determined by a round-specific "algorithm decision key". Making the rounds key-dependent is intended to make cryptanalysis more difficult (see FROG for a similar design philosophy).
Cipher description.
The round count can be set to any positive integer N, but a round count of at least 10 is recommended. The key consists of four components: a 64-bit data key, 256-bit key expansion key, a set of N 24-bit algorithm decision keys, and a set of N 96-bit algorithm expansion keys.
The round function is used for both key expansion and encryption/decryption. The key expansion process transforms the 64-bit data key and 256-bit key expansion key into a 256-bit execution key, consisting of 4 pairs of 32-bit numbers formula_0.
The cipher has a typical Feistel cipher design. First, the 64-bit input block is split into two 32-bit halves. In each round, the left half undergoes a key-dependent transformation, and is then combined with the right half. Finally, the halves are swapped. In total, the round function consists of a sequence of nine customizable operations and three bitwise rotations:
formula_1
formula_2 denotes the round number; round formula_2 takes inputs formula_3 and formula_4. formula_5 are the three 32-bit words of the round's algorithm expansion key. formula_6 are words from the execution key. formula_7 denotes a left bitwise rotation. formula_8 and formula_9 are defined by the 24-bit algorithm decision key as follows:
where op1 to op9 are each one bit (0 = addition mod 2^32, 1 = XOR) and S1 to S3 are five bits each.
Key expansion consists of eight cipher rounds, using the first eight algorithm decision and expansion keys, the key expansion key as the execution key, and the data key as the input block. The eight intermediate outputs, formula_10 are used as the eight components of the execution key formula_0.
Cipher implementation.
The following is an implementation of the cipher in Python.
M = 0xffffffff


def add(x, y):
    return (x + y) & M


def xor(x, y):
    return x ^ y


def rol(x, s):
    return ((x << s) | (x >> (32 - s))) & M


def m8_round(L, R, ri, k, adk, aek):
    """One round of the algorithm.

    L, R: input
    ri: round index
    k: 256-bit execution key
    adk: 24-bit algorithm decision key
    aek: 96-bit algorithm expansion key
    """
    op = [[add, xor][(adk >> (23 - i)) & 1] for i in range(9)]
    S1 = (adk >> 10) & 0x1f
    S2 = (adk >> 5) & 0x1f
    S3 = (adk >> 0) & 0x1f
    A = (aek >> 64) & M
    B = (aek >> 32) & M
    C = (aek >> 0) & M
    KR = (k >> (32 + 64 * (3 - ri % 4))) & M
    KL = (k >> (0 + 64 * (3 - ri % 4))) & M
    x = op[0](L, KL)
    y = op[2](op[1](rol(x, S1), x), A)
    z = op[5](op[4](op[3](rol(y, S2), y), B), KR)
    return op[8](op[7](op[6](rol(z, S3), z), C), R), L


def m8_keyexpand(dk, kek, adks, aeks):
    """Key expansion.

    dk: 64-bit data key
    kek: 256-bit key expansion key
    adks: algorithm decision keys
    aeks: algorithm expansion keys
    """
    L = (dk >> 32) & M
    R = (dk >> 0) & M
    k = 0
    for i in range(8):
        L, R = m8_round(L, R, i, kek, adks[i], aeks[i])
        k |= (L << (32 * (7 - i)))
    return k


def m8_encrypt(data, N, dk, kek, adks, aeks):
    """Encrypt one block with M8.

    data: 64-bit input block
    N: number of rounds (must be >= 8)
    dk: 64-bit data key
    kek: 256-bit key expansion key
    adks: a list of N 24-bit algorithm decision keys
    aeks: a list of N 96-bit algorithm expansion keys
    """
    ek = m8_keyexpand(dk, kek, adks, aeks)
    L = (data >> 32) & M
    R = (data >> 0) & M
    for i in range(N):
        L, R = m8_round(L, R, i, ek, adks[i], aeks[i])
    return (L << 32) | R


result = m8_encrypt(
    0x0000_0000_0000_0001,
    126,
    0x0123_4567_89AB_CDEF,
    0,
    [0x848B6D, 0x8489BB, 0x84B762, 0x84EDA2] * 32,
    [0x0000_0001_0000_0000_0000_0000] * 126,
)
assert result == 0xFE4B_1622_E446_36C0
Test vectors.
The published version of ISO/IEC 9979-0020 includes the following test data:
Cryptanalysis.
The key-dependent behaviour of the cipher results in a large class of weak keys which expose the cipher to a range of attacks, including differential cryptanalysis, linear cryptanalysis and mod n cryptanalysis.
References.
<templatestyles src="Reflist/styles.css" />
{
"math_id": 0,
"text": "K_{R_0}, K_{L_0}, ..., K_{R_3}, K_{L_3}"
},
{
"math_id": 1,
"text": "\n\\begin{align}\nR_{i+1}&=L_i \\\\\nx&=L_{i} \\operatorname{op}_1 K_{L_{i\\bmod 4}}\\\\\ny&=((x <<< S_1) \\operatorname{op}_2 x) \\operatorname{op}_3 \\alpha \\\\\nz&=(((y <<< S_2) \\operatorname{op}_4 y) \\operatorname{op}_5 \\beta) \\operatorname{op}_6 K_{R_{i\\bmod 4}} \\\\\nL_{i+1}&=(((z <<< S_3) \\operatorname{op}_7 z) \\operatorname{op}_8 \\gamma) \\operatorname{op}_9 R_i\n\\end{align}\n"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "L_i"
},
{
"math_id": 4,
"text": "R_i"
},
{
"math_id": 5,
"text": "\\alpha, \\beta, \\gamma"
},
{
"math_id": 6,
"text": "K_{R_{i\\bmod 4}}, K_{L_{i\\bmod 4}}"
},
{
"math_id": 7,
"text": " <<< "
},
{
"math_id": 8,
"text": "\\operatorname{op}_j"
},
{
"math_id": 9,
"text": "S_k"
},
{
"math_id": 10,
"text": "L_1, L_2, ..., L_7, L_8"
}
] | https://en.wikipedia.org/wiki?curid=9524572 |
9526571 | Scattering length | The scattering length in quantum mechanics describes low-energy scattering. For potentials that decay faster than formula_0 as formula_1, it is defined as the following low-energy limit:
formula_2
where formula_3 is the scattering length, formula_4 is the wave number, and formula_5 is the phase shift of the outgoing spherical wave. The elastic cross section, formula_6, at low energies is determined solely by the scattering length:
formula_7
General concept.
When a slow particle scatters off a short ranged scatterer (e.g. an impurity in a solid or a heavy particle) it cannot resolve the structure of the object since its de Broglie wavelength is very long. The idea is that then it should not be important what precise potential formula_8 one scatters off, but only how the potential looks at long length scales. The formal way to solve this problem is to do a partial wave expansion (somewhat analogous to the multipole expansion in classical electrodynamics), where one expands in the angular momentum components of the outgoing wave. At very low energy the incoming particle does not see any structure, therefore to lowest order one has only a spherical outgoing wave, called the s-wave in analogy with the atomic orbital at angular momentum quantum number "l"=0. At higher energies one also needs to consider p and d-wave ("l"=1,2) scattering and so on.
The idea of describing low energy properties in terms of a few parameters and symmetries is very powerful, and is also behind the concept of renormalization.
The concept of the scattering length can also be extended to potentials that decay slower than formula_0 as formula_1. A famous example, relevant for proton-proton scattering, is the Coulomb-modified scattering length.
Example.
As an example on how to compute the s-wave (i.e. angular momentum formula_9) scattering length for a given potential we look at the infinitely repulsive spherical potential well of radius formula_10 in 3 dimensions. The radial Schrödinger equation (formula_9) outside of the well is just the same as for a free particle:
formula_11
where the hard core potential requires that the wave function formula_12 vanishes at formula_13, formula_14.
The solution is readily found:
formula_15.
Here formula_16 and formula_17 is the s-wave phase shift (the phase difference between incoming and outgoing wave), which is fixed by the boundary condition formula_14; formula_18 is an arbitrary normalization constant.
One can show that in general formula_19 for small formula_4 (i.e. low energy scattering). The parameter formula_20 of dimension length is defined as the scattering length. For our potential we have therefore formula_21, in other words the scattering length for a hard sphere is just the radius. (Alternatively one could say that an arbitrary potential with s-wave scattering length formula_20 has the same low energy scattering properties as a hard sphere of radius formula_20.)
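A brief numerical check of the hard-sphere result (not part of the original article): using the s-wave cross section 4π sin²(δ_s)/k² with the exact phase shift δ_s = −k r0, the ratio to 4π r0² approaches 1 as k → 0, consistent with the low-energy limit quoted in the introduction.

import math

r0 = 1.0                         # assumed sphere radius, arbitrary units

def s_wave_cross_section(k):
    delta_s = -k * r0            # exact s-wave phase shift for the hard sphere
    return 4 * math.pi * math.sin(delta_s) ** 2 / k ** 2

for k in (1.0, 0.1, 0.01):
    print(k, s_wave_cross_section(k) / (4 * math.pi * r0 ** 2))   # ratio tends to 1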
To relate the scattering length to physical observables that can be measured in a scattering experiment we need to compute the cross section formula_22. In scattering theory one writes the asymptotic wavefunction as (we assume there is a finite ranged scatterer at the origin and there is an incoming plane wave along the formula_23-axis):
formula_24
where formula_25 is the scattering amplitude. According to the probability interpretation of quantum mechanics the differential cross section is given by formula_26 (the probability per unit time to scatter into the direction formula_27). If we consider only s-wave scattering the differential cross section does not depend on the angle formula_28, and the total scattering cross section is just formula_29. The s-wave part of the wavefunction formula_30 is projected out by using the standard expansion of a plane wave in terms of spherical waves and Legendre polynomials formula_31:
formula_32
By matching the formula_9 component of formula_30 to the s-wave solution formula_33 (where we normalize formula_18 such that the incoming wave formula_34 has a prefactor of unity) one has:
formula_35
This gives:
formula_36 | [
{
"math_id": 0,
"text": "1/r^3"
},
{
"math_id": 1,
"text": "r\\to \\infty"
},
{
"math_id": 2,
"text": "\n\\lim_{k\\to 0} k\\cot\\delta(k) =- \\frac{1}{a}\\;,\n"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "\\delta(k)"
},
{
"math_id": 6,
"text": "\\sigma_e"
},
{
"math_id": 7,
"text": "\n\\lim_{k\\to 0} \\sigma_e = 4\\pi a^2\\;.\n"
},
{
"math_id": 8,
"text": "V(r)"
},
{
"math_id": 9,
"text": "l=0"
},
{
"math_id": 10,
"text": "r_0"
},
{
"math_id": 11,
"text": "-\\frac{\\hbar^2}{2m} u''(r)=E u(r),"
},
{
"math_id": 12,
"text": "u(r)"
},
{
"math_id": 13,
"text": "r=r_0"
},
{
"math_id": 14,
"text": "u(r_0)=0"
},
{
"math_id": 15,
"text": "u(r)=A \\sin(k r+\\delta_s)"
},
{
"math_id": 16,
"text": "k=\\sqrt{2m E}/\\hbar"
},
{
"math_id": 17,
"text": "\\delta_s=-k \\cdot r_0"
},
{
"math_id": 18,
"text": "A"
},
{
"math_id": 19,
"text": "\\delta_s(k)\\approx-k \\cdot a_s +O(k^2)"
},
{
"math_id": 20,
"text": "a_s"
},
{
"math_id": 21,
"text": "a=r_0"
},
{
"math_id": 22,
"text": "\\sigma"
},
{
"math_id": 23,
"text": "z"
},
{
"math_id": 24,
"text": "\\psi(r,\\theta)=e^{i k z}+f(\\theta) \\frac{e^{i k r}}{r}"
},
{
"math_id": 25,
"text": "f"
},
{
"math_id": 26,
"text": "d\\sigma/d\\Omega=|f(\\theta)|^2"
},
{
"math_id": 27,
"text": "\\mathbf{k}"
},
{
"math_id": 28,
"text": "\\theta"
},
{
"math_id": 29,
"text": "\\sigma=4 \\pi |f|^2"
},
{
"math_id": 30,
"text": "\\psi(r,\\theta)"
},
{
"math_id": 31,
"text": "P_l(\\cos \\theta)"
},
{
"math_id": 32,
"text": "e^{i k z}\\approx\\frac{1}{2 i k r}\\sum_{l=0}^{\\infty}(2l+1)P_l(\\cos \\theta)\\left[ (-1)^{l+1}e^{-i k r} + e^{i k r}\\right] "
},
{
"math_id": 33,
"text": "\\psi(r)=A \\sin(k r+\\delta_s)/r"
},
{
"math_id": 34,
"text": "e^{i k z}"
},
{
"math_id": 35,
"text": "f=\\frac{1}{2 i k}(e^{2 i \\delta_s}-1)\\approx \\delta_s/k \\approx - a_s"
},
{
"math_id": 36,
"text": "\\sigma= \\frac{4 \\pi}{k^2} \\sin^2 \\delta_s =4 \\pi a_s^2 "
}
] | https://en.wikipedia.org/wiki?curid=9526571 |
9528027 | American death triangle | Dangerous type of climbing anchor
The American Death Triangle, also known as the "American Triangle", "Triangle Anchor" or simply the "Death Triangle", is a dangerous type of rock and ice climbing anchor infamous for both magnifying load forces on fixed anchors and lack of redundancy in attachment to the anchor.
Description.
A two-point climbing anchor requires three carabiners: one at each fixed point and one at the "master point" where the load is transferred to the climbing rope. The aim is to distribute the force equally to each fixed point. A triangle anchor is formed by clipping a length of webbing or cord through all three carabiners, creating a shape which gives the dangerous anchor its descriptive name.
The force on each fixed point depends on the angle at the focal point. The following table lists the percentage of force transferred to the fixed point for various focal point angles, along with figures for a standard V-shaped anchor.
Table values are derived from vector analysis:
formula_0
formula_1
The load on the sling is the same in each example. For the V arrangement, the anchor force is equal to the tension in the sling, but for the triangle the anchor force is greater than the sling tension.
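A small calculation (not part of the original article) implementing the two expressions given above (formula_0 for the standard V anchor and formula_1 for the triangle), printing the load on each fixed point as a percentage of the suspended weight:

import math

def v_anchor_force(weight, bottom_angle_deg):
    # standard V anchor: F = W / (2 cos(theta/2))
    return weight / (2 * math.cos(math.radians(bottom_angle_deg) / 2))

def triangle_anchor_force(weight, bottom_angle_deg):
    # death triangle: F = W / (2 cos(45 deg + theta/4))
    return weight / (2 * math.cos(math.radians(45 + bottom_angle_deg / 4)))

for angle in (0, 30, 60, 90):
    print(angle,
          round(100 * v_anchor_force(1, angle)),          # V anchor, percent of weight
          round(100 * triangle_anchor_force(1, angle)))   # triangle, percent of weight
# 0 deg: 50% vs 71%; 60 deg: 58% vs 100%; 90 deg: 71% vs 131%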
Aside from the magnification of forces, the "death triangle" violates several best practices for building climbing anchors, including
An alternative V-shaped form of the "death triangle" involves clipping a single loop of webbing or cord to both anchors, then clipping the third carabiner over the loop rather than through it, allowing the latter to slip off the loop if either anchor fails. Two better methods are: (a) putting a half twist in the cord and clipping the free carabiner through it, so that if either anchor fails the free carabiner remains attached to the cord (though if the cord itself fails, the entire anchor still fails); or (b) tying off both strands of the cord with, e.g., an overhand knot, which achieves redundancy by sacrificing perfect equalization, since the length of cord to each anchor is now fixed.
Special circumstances, such as when an experienced climber employs opposing forces to keep passive chocks, simple cams, or spring-loaded multiple camming devices in a crack, may call for a triangle. Even then, special provision must be made to provide redundancy and eliminate extension in the protection system.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F_{\\mathrm{Anchor}} = \\frac {\\mathrm{Weight}} {2 \\cos(\\frac 1 2 {\\theta_\\mathrm{Bottom}})} \\approx \\mathrm{Weight}\\times 0.5 + O({\\theta_\\mathrm{Bottom}}^2)"
},
{
"math_id": 1,
"text": "F_{\\mathrm{Anchor}} =\\frac {\\mathrm{Weight}} {2 \\cos (45 ^{\\circ} + \\frac 1 4 {\\theta_{\\mathrm{Bottom}}} )} \\approx\\mathrm{Weight}\\times 0.707 + O(\\theta_{\\mathrm{Bottom}})"
}
] | https://en.wikipedia.org/wiki?curid=9528027 |
952915 | Mean anomaly | Specifies the orbit of an object in space
In celestial mechanics, the mean anomaly is the fraction of an elliptical orbit's period that has elapsed since the orbiting body passed periapsis, expressed as an angle which can be used in calculating the position of that body in the classical two-body problem. It is the angular distance from the pericenter which a fictitious body would have if it moved in a circular orbit, with constant speed, in the same orbital period as the actual body in its elliptical orbit.
Definition.
Define T as the time required for a particular body to complete one orbit. In time T, the radius vector sweeps out 2π radians, or 360°. The average rate of sweep, n, is then
formula_0
which is called the "mean angular motion" of the body, with dimensions of radians per unit time or degrees per unit time.
Define τ as the time at which the body is at the pericenter. From the above definitions, a new quantity, M, the "mean anomaly" can be defined
formula_1
which gives an angular distance from the pericenter at arbitrary time t with dimensions of radians or degrees.
Because the rate of increase, n, is a constant average, the mean anomaly increases uniformly (linearly) from 0 to 2π radians or 0° to 360° during each orbit. It is equal to 0 when the body is at the pericenter, π radians (180°) at the apocenter, and 2π radians (360°) after one complete revolution. If the mean anomaly is known at any given instant, it can be calculated at any later (or prior) instant by simply adding (or subtracting) n⋅δt where δt represents the small time difference.
Mean anomaly does not measure an angle between any physical objects (except at pericenter or apocenter, or for a circular orbit). It is simply a convenient uniform measure of how far around its orbit a body has progressed since pericenter. The mean anomaly is one of three angular parameters (known historically as "anomalies") that define a position along an orbit, the other two being the eccentric anomaly and the true anomaly.
Mean anomaly at epoch.
The "mean anomaly at epoch", M0, is defined as the instantaneous mean anomaly at a given epoch, t0. This value is sometimes provided with other orbital elements to enable calculations of the object's past and future positions along the orbit. The epoch for which M0 is defined is often determined by convention in a given field or discipline. For example, planetary ephemerides often define M0 for the epoch J2000, while for earth orbiting objects described by a two-line element set the epoch is specified as a date in the first line.
Formulae.
The mean anomaly M can be computed from the eccentric anomaly E and the eccentricity e with Kepler's equation:
formula_2
Mean anomaly is also frequently seen as
formula_3
where M0 is the mean anomaly at the epoch t0, which may or may not coincide with τ, the time of pericenter passage. The classical method of finding the position of an object in an elliptical orbit from a set of orbital elements is to calculate the mean anomaly by this equation, and then to solve Kepler's equation for the eccentric anomaly.
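The second step of that classical method, solving Kepler's equation for the eccentric anomaly, has no closed-form solution; a short sketch (not part of the original article) using Newton's iteration:

import math

def eccentric_anomaly(M, e, tol=1e-12):
    # solve Kepler's equation M = E - e*sin(E) for E by Newton's method
    E = M if e < 0.8 else math.pi          # common starting guess
    while True:
        dE = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E

M = math.radians(30.0)                     # mean anomaly
e = 0.1                                    # eccentricity
E = eccentric_anomaly(M, e)
print(math.degrees(E))                     # about 33.1 degrees
print(E - e * math.sin(E) - M)             # residual ~0, so Kepler's equation is satisfied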
Define ϖ as the "longitude of the pericenter", the angular distance of the pericenter from a reference direction. Define ℓ as the "mean longitude", the angular distance of the body from the same reference direction, assuming it moves with uniform angular motion as with the mean anomaly. Thus mean anomaly is also
formula_4
Mean angular motion can also be expressed,
formula_5
where μ is the gravitational parameter, which varies with the masses of the objects, and a is the semi-major axis of the orbit. Mean anomaly can then be expanded,
formula_6
and here mean anomaly represents uniform angular motion on a circle of radius a.
Mean anomaly can be calculated from the eccentricity and the true anomaly f by finding the eccentric anomaly and then using Kepler's equation. This gives, in radians:
formula_7
where atan2(y, x) is the angle from the x-axis of the ray from (0, 0) to (x, y), having the same sign as y.
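The same quantity can be cross-checked by the two-step route described above (true anomaly to eccentric anomaly, then Kepler's equation); the sketch below follows that route, with arbitrary test values of e and f, and should agree with the closed form up to the usual 2π branch conventions.

```python
import math

def mean_from_true(f, e):
    """Mean anomaly from true anomaly f (radians) via the eccentric anomaly."""
    E = math.atan2(math.sqrt(1.0 - e * e) * math.sin(f), e + math.cos(f))
    return E - e * math.sin(E)          # Kepler's equation

print(mean_from_true(f=1.2, e=0.3))
```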
For parabolic and hyperbolic trajectories the mean anomaly is not defined, because they don't have a period. But in those cases, as with elliptical orbits, the area swept out by a chord between the attractor and the object following the trajectory increases linearly with time. For the hyperbolic case, there is a formula similar to the above giving the elapsed time as a function of the angle (the true anomaly in the elliptic case), as explained in the article Kepler orbit. For the parabolic case there is a different formula, the limiting case for either the elliptic or the hyperbolic case as the distance between the foci goes to infinity – see Parabolic trajectory#Barker's equation.
Mean anomaly can also be expressed as a series expansion:
formula_8
with formula_9
formula_10
A similar formula gives the true anomaly directly in terms of the mean anomaly:
formula_11
A general formulation of the above equation can be written as the equation of the center:
formula_12
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n = \\frac{\\,2\\,\\pi\\,}{T} = \\frac{\\,360^\\circ\\,}{T}~,"
},
{
"math_id": 1,
"text": "M = n\\,(t - \\tau) ~,"
},
{
"math_id": 2,
"text": "M = E - e \\,\\sin E ~."
},
{
"math_id": 3,
"text": "M = M_0 + n\\left(t - t_0\\right) ~,"
},
{
"math_id": 4,
"text": "M = \\ell - \\varpi~."
},
{
"math_id": 5,
"text": "n = \\sqrt{\\frac{\\mu}{\\;a^3 \\,} \\,}~,"
},
{
"math_id": 6,
"text": "M = \\sqrt{\\frac{\\mu}{\\; a^3 \\,}\\,}\\,\\left(t - \\tau\\right)~,"
},
{
"math_id": 7,
"text": "M = \\operatorname{atan2}\\left(-\\ \\sqrt{1 - e^2} \\sin f, -\\ e - \\cos f \\right)+\\pi-e \\frac{\\sqrt{1 - e^2} \\sin f}{1 + e \\cos f}"
},
{
"math_id": 8,
"text": "M = f +2\\sum_{n=1}^{\\infty}(-1)^n \\left[\\frac{1}{n} +\\sqrt{1-e^2} \\right]\\beta^{n}\\sin{nf}"
},
{
"math_id": 9,
"text": "\\beta = \\frac{1-\\sqrt{1-e^2}}{e}"
},
{
"math_id": 10,
"text": "M = f - 2\\,e \\sin f + \\left( \\frac{3}{4}e^2 + \\frac{1}{8}e^4 \\right)\\sin 2f - \\frac{1}{3} e^3 \\sin 3f + \\frac{5}{32} e^4 \\sin 4f + \\operatorname{\\mathcal{O}}\\left(e^5\\right)"
},
{
"math_id": 11,
"text": "f = M + \\left( 2\\,e - \\frac{1}{4} e^3 \\right) \\sin M + \\frac{5}{4} e^2 \\sin 2M + \\frac{13}{12} e^3 \\sin 3M + \\operatorname{\\mathcal{O}}\\left(e^4\\right)"
},
{
"math_id": 12,
"text": " f = M +2 \\sum_{s=1}^{\\infty} \\frac{1}{s} \\left[ J_{s}(se) +\\sum_{p=1}^{\\infty} \\beta^{p}\\big(J_{s-p}(se) +J_{s+p}(se) \\big)\\right]\\sin(sM) "
}
] | https://en.wikipedia.org/wiki?curid=952915 |
953148 | Lagrange reversion theorem | In mathematics, the Lagrange reversion theorem gives series or formal power series expansions of certain implicitly defined functions; indeed, of compositions with such functions.
Let "v" be a function of "x" and "y" in terms of another function "f" such that
formula_0
Then for any function "g", for small enough "y":
formula_1
If "g" is the identity, this becomes
formula_2
In this case, the equation can be derived using perturbation theory.
In 1770, Joseph Louis Lagrange (1736–1813) published his power series solution of the implicit equation for "v" mentioned above. However, his solution used cumbersome series expansions of logarithms. In 1780, Pierre-Simon Laplace (1749–1827) published a simpler proof of the theorem, which was based on relations between partial derivatives with respect to the variable x and the parameter y. Charles Hermite (1822–1901) presented the most straightforward proof of the theorem by using contour integration.
Lagrange's reversion theorem is used to obtain numerical solutions to Kepler's equation.
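For instance, choosing f(v) = sin v, x = M and y = e turns v = x + y f(v) into Kepler's equation for the eccentric anomaly, and the theorem's series can then be summed term by term. The sketch below does this symbolically and compares the partial sum with a Newton-iteration solution; the values of M, e and the number of terms are illustrative.

```python
import math
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(x)

# Partial sum of the reversion series for v = x + y*f(v), with g the identity.
terms = 8
v_series = x + sum(y**k / sp.factorial(k) * sp.diff(f**k, x, k - 1)
                   for k in range(1, terms + 1))

M_val, e_val = 1.0, 0.1
approx = float(v_series.subs({x: M_val, y: e_val}))

# Reference solution of E - e*sin(E) = M by Newton's method.
E = M_val
for _ in range(50):
    E -= (E - e_val * math.sin(E) - M_val) / (1.0 - e_val * math.cos(E))

print(approx, E)   # the two values should agree to many digits for small e
```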
Simple proof.
We start by writing:
formula_3
Writing the delta-function as an integral we have:
formula_4
The integral over "k" then gives formula_5 and we have:
formula_6
Rearranging the sum and cancelling then gives the result:
formula_7
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "v=x+yf(v)"
},
{
"math_id": 1,
"text": "g(v)=g(x)+\\sum_{k=1}^\\infty\\frac{y^k}{k!}\\left(\\frac\\partial{\\partial x}\\right)^{k-1}\\left(f(x)^kg'(x)\\right)."
},
{
"math_id": 2,
"text": "v=x+\\sum_{k=1}^\\infty\\frac{y^k}{k!}\\left(\\frac\\partial{\\partial x}\\right)^{k-1}\\left(f(x)^k\\right)"
},
{
"math_id": 3,
"text": " g(v) = \\int \\delta(y f(z) - z + x) g(z) (1-y f'(z)) \\, dz"
},
{
"math_id": 4,
"text": "\n\\begin{align}\ng(v) & = \\iint \\exp(ik[y f(z) - z + x]) g(z) (1-y f'(z)) \\, \\frac{dk}{2\\pi} \\, dz \\\\[10pt]\n& =\\sum_{n=0}^\\infty \\iint \\frac{(ik y f(z))^n}{n!} g(z) (1-y f'(z)) e^{ik(x-z)}\\, \\frac{dk}{2\\pi} \\, dz \\\\[10pt]\n& =\\sum_{n=0}^\\infty \\left(\\frac{\\partial}{\\partial x}\\right)^n\\iint \\frac{(y f(z))^n}{n!} g(z) (1-y f'(z)) e^{ik(x-z)} \\, \\frac{dk}{2\\pi} \\, dz\n\\end{align}\n"
},
{
"math_id": 5,
"text": "\\delta(x-z)"
},
{
"math_id": 6,
"text": "\n\\begin{align}\ng(v) & = \\sum_{n=0}^\\infty \\left(\\frac{\\partial}{\\partial x}\\right)^n \\left[ \\frac{(y f(x))^n}{n!} g(x) (1-y f'(x))\\right] \\\\[10pt]\n& =\\sum_{n=0}^\\infty \\left(\\frac{\\partial}{\\partial x}\\right)^n \\left[ \n \\frac{y^n f(x)^n g(x)}{n!} - \\frac{y^{n+1}}{(n+1)!}\\left\\{ (g(x) f(x)^{n+1})' - g'(x) f(x)^{n+1}\\right\\} \\right]\n\\end{align}\n"
},
{
"math_id": 7,
"text": "g(v)=g(x)+\\sum_{k=1}^\\infty\\frac{y^k}{k!}\\left(\\frac\\partial{\\partial x}\\right)^{k-1}\\left(f(x)^kg'(x)\\right)"
}
] | https://en.wikipedia.org/wiki?curid=953148 |
9536737 | Kinematic chain | Mathematical model for a mechanical system
In mechanical engineering, a kinematic chain is an assembly of rigid bodies connected by joints to provide constrained motion that is the mathematical model for a mechanical system. As the word chain suggests, the rigid bodies, or links, are constrained by their connections to other links. An example is the simple open chain formed by links connected in series, like the usual chain, which is the kinematic model for a typical robot manipulator.
Mathematical models of the connections, or joints, between two links are termed kinematic pairs. Kinematic pairs model the hinged and sliding joints fundamental to robotics, often called "lower pairs" and the surface contact joints critical to cams and gearing, called "higher pairs." These joints are generally modeled as holonomic constraints. A kinematic diagram is a schematic of the mechanical system that shows the kinematic chain.
The modern use of kinematic chains includes compliance that arises from flexure joints in precision mechanisms, link compliance in compliant mechanisms and micro-electro-mechanical systems, and cable compliance in cable robotic and tensegrity systems.
Mobility formula.
The degrees of freedom, or "mobility," of a kinematic chain is the number of parameters that define the configuration of the chain.
A system of n rigid bodies moving in space has 6"n" degrees of freedom measured relative to a fixed frame. This frame is included in the count of bodies, so that mobility does not depend on which link is chosen as the fixed frame. This means the degree-of-freedom of this system is "M" = 6("N" − 1), where "N" = "n" + 1 is the number of moving bodies plus the fixed body.
Joints that connect bodies impose constraints. Specifically, hinges and sliders each impose five constraints and therefore remove five degrees of freedom. It is convenient to define the number of constraints c that a joint imposes in terms of the joint's freedom f, where "c" = 6 − "f". A hinge or slider, each a one-degree-of-freedom joint, has "f" = 1 and therefore "c" = 6 − 1 = 5.
The result is that the mobility of a kinematic chain formed from n moving links and j joints each with freedom fi, "i" = 1, 2, …, "j", is given by
formula_0
Recall that N includes the fixed link.
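A short sketch of this count; the two joint lists below (a serial arm with six hinges, and an RSSR linkage with two hinges and two ball joints) are standard textbook configurations chosen for illustration.

```python
def mobility(n_moving_links, joint_freedoms):
    """M = 6*(N - 1 - j) + sum(f_i), with N = n_moving_links + 1 (the fixed frame)."""
    N = n_moving_links + 1
    j = len(joint_freedoms)
    return 6 * (N - 1 - j) + sum(joint_freedoms)

print(mobility(6, [1, 1, 1, 1, 1, 1]))   # 6R serial arm: 6 degrees of freedom
print(mobility(3, [1, 3, 3, 1]))         # RSSR linkage: 2 (one input plus a passive coupler spin)
```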
Analysis of kinematic chains.
The constraint equations of a kinematic chain couple the range of movement allowed at each joint to the dimensions of the links in the chain, and form algebraic equations that are solved to determine the configuration of the chain associated with specific values of input parameters, called degrees of freedom.
The constraint equations for a kinematic chain are obtained using rigid transformations [Z] to characterize the relative movement allowed at each joint and separate rigid transformations [X] to define the dimensions of each link. In the case of a serial open chain, the result is a sequence of rigid transformations alternating joint and link transformations from the base of the chain to its end link, which is equated to the specified position for the end link. A chain of n links connected in series has the kinematic equations,
formula_1
where ["T"] is the transformation locating the end-link—notice that the chain includes a "zeroth" link consisting of the ground frame to which it is attached. These equations are called the forward kinematics equations of the serial chain.
Kinematic chains of a wide range of complexity are analyzed by equating the kinematics equations of serial chains that form loops within the kinematic chain. These equations are often called "loop equations".
The complexity (in terms of calculating the forward and inverse kinematics) of the chain is determined by the following factors:
Explanation
Two or more rigid bodies in space are collectively called a rigid body system. We can hinder the motion of these independent rigid bodies with kinematic constraints. Kinematic constraints are constraints between rigid bodies that result in a decrease of the degrees of freedom of the rigid body system.
Synthesis of kinematic chains.
The constraint equations of a kinematic chain can be used in reverse to determine the dimensions of the links from a specification of the desired movement of the system. This is termed "kinematic synthesis."
Perhaps the most developed formulation of kinematic synthesis is for four-bar linkages, which is known as Burmester theory.
Ferdinand Freudenstein is often called the father of modern kinematics for his contributions to the kinematic synthesis of linkages beginning in the 1950s. His use of the newly developed computer to solve "Freudenstein's equation" became the prototype of computer-aided design systems.
This work has been generalized to the synthesis of spherical and spatial mechanisms.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " M = 6n - \\sum_{i=1}^j (6 - f_i) = 6(N-1 - j) + \\sum_{i=1}^j f_i "
},
{
"math_id": 1,
"text": "[T] = [Z_1][X_1][Z_2][X_2]\\cdots[X_{n-1}][Z_n],\\!"
}
] | https://en.wikipedia.org/wiki?curid=9536737 |
954006 | Classical XY model | Lattice model of statistical mechanics
The classical XY model (sometimes also called classical rotor (rotator) model or O(2) model) is a lattice model of statistical mechanics. In general, the XY model can be seen as a specialization of Stanley's "n"-vector model for "n" = 2.
Definition.
Given a D-dimensional lattice Λ, for each lattice site "j" ∈ Λ there is a two-dimensional, unit-length vector s"j" = (cos "θj", sin "θj").
The "spin configuration", s = (s"j")"j" ∈ Λ, is an assignment of the angle −"π" < "θj" ≤ "π" for each "j" ∈ Λ.
Given a "translation-invariant" interaction "Jij"
"J"("i" − "j") and a point dependent external field formula_0, the "configuration energy" is
formula_1
The case in which "Jij" = 0 except for ij nearest neighbor is called the "nearest neighbor" case.
The "configuration probability" is given by the Boltzmann distribution with inverse temperature "β" ≥ 0:
formula_2
where Z is the normalization, or partition function. The notation formula_3 indicates the expectation of the random variable "A"(s) in the infinite volume limit, after "periodic boundary conditions" have been imposed.
Rigorous results.
One dimension.
As in any 'nearest-neighbor' "n"-vector model with free (non-periodic) boundary conditions, if the external field is zero, there exists a simple exact solution. In the free boundary conditions case, the Hamiltonian is
formula_6
therefore the partition function factorizes under the change of coordinates
formula_7
This gives
formula_8
where formula_9 is the modified Bessel function of the first kind. The partition function can be used to find several important thermodynamic quantities. For example, in the thermodynamic limit (formula_10), the free energy per spin is
formula_11
Using the properties of the modified Bessel functions, the specific heat (per spin) can be expressed as
formula_12
where formula_13, and formula_14 is the short-range correlation function,
formula_15
Even in the thermodynamic limit, there is no divergence in the specific heat. Indeed, like the one-dimensional Ising model, the one-dimensional XY model has no phase transitions at finite temperature.
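The closed-form results above are straightforward to evaluate numerically; this sketch tabulates the (dimensionless) free energy per spin and the specific heat from the Bessel-function expressions, in units where J = kB = 1.

```python
import numpy as np
from scipy.special import i0, i1

def one_d_xy(K):
    """Beta times the free energy per spin, and c/k_B, at K = J/(k_B T)."""
    beta_f = -np.log(2.0 * np.pi * i0(K))   # beta * f = -ln[2*pi*I0(K)]
    mu = i1(K) / i0(K)                      # short-range correlation <cos(theta - theta')>
    c = K**2 * (1.0 - mu / K - mu**2)       # specific heat per spin, divided by k_B
    return beta_f, c

for K in (0.5, 1.0, 2.0, 5.0):
    print(K, *one_d_xy(K))
```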
The same computation for periodic boundary condition (and still "h" = 0) requires the transfer matrix formalism, though the result is the same.
The partition function can be evaluated as
formula_16
which can be treated as the trace of a matrix, namely a product of matrices (scalars, in this case). The trace of a matrix is simply the sum of its eigenvalues, and in the thermodynamic limit formula_10 only the largest eigenvalue will survive, so the partition function can be written as a repeated product of this maximal eigenvalue. This requires solving the eigenvalue problem
formula_17
Note the expansion
formula_18
which gives a diagonal representation in the basis of its plane-wave eigenfunctions formula_19. The eigenvalues of the matrix are simply the modified Bessel functions evaluated at formula_20, namely formula_21. For any particular value of formula_20, these modified Bessel functions satisfy formula_22 and formula_23. Therefore, in the thermodynamic limit the eigenvalue formula_9 will dominate the trace, and so formula_24.
This transfer matrix approach is also required when using free boundary conditions, but with an applied field formula_25. If the applied field formula_26 is small enough that it can be treated as a perturbation to the system in zero-field, then the magnetic susceptibility formula_27 can be estimated. This is done by using the eigenstates computed by the transfer matrix approach and computing the energy shift with second-order perturbation theory, then comparing with the free-energy expansion formula_28. One finds
formula_29
where formula_30 is the Curie constant (a value typically associated with the susceptibility in magnetic materials). This expression is also true for the one-dimensional Ising model, with the replacement formula_31.
Two dimensions.
The two-dimensional XY model with nearest-neighbor interactions is an example of a two-dimensional system with continuous symmetry that does not have long-range order as required by the Mermin–Wagner theorem. Likewise, there is not a conventional phase transition present that would be associated with symmetry breaking. However, as will be discussed later, the system does show signs of a transition from a disordered high-temperature state to a quasi-ordered state below some critical temperature, called the Kosterlitz-Thouless transition. In the case of a discrete lattice of spins, the two-dimensional XY model can be evaluated using the transfer matrix approach, reducing the model to an eigenvalue problem and utilizing the largest eigenvalue from the transfer matrix. Though the exact solution is intractable, it is possible to use certain approximations to get estimates for the critical temperature formula_32 which occurs at low temperatures. For example, Mattis (1984) used an approximation to this model to estimate a critical temperature of the system as
formula_33
formula_34
The 2D XY model has also been studied in great detail using Monte Carlo simulations, for example with the Metropolis algorithm. These can be used to compute thermodynamic quantities like the system energy, specific heat, magnetization, etc., over a range of temperatures and time-scales. In the Monte Carlo simulation, each spin is associated to a continuously-varying angle formula_35 (often, it can be discretized into finitely-many angles, like in the related Potts model, for ease of computation. However, this is not a requirement.) At each time step the Metropolis algorithm chooses one spin at random and rotates its angle by some random increment formula_36. This change in angle causes a change in the energy formula_37 of the system, which can be positive or negative. If negative, the algorithm accepts the change in angle; if positive, the configuration is accepted with probability formula_38, the Boltzmann factor for the energy change. The Monte Carlo method has been used to verify, with various methods, the critical temperature of the system, and is estimated to be formula_39. The Monte Carlo method can also compute average values that are used to compute thermodynamic quantities like magnetization, spin-spin correlation, correlation lengths, and specific heat. These are important ways to characterize the behavior of the system near the critical temperature. The magnetization and squared magnetization, for example, can be computed as
formula_40
formula_41
where formula_42 are the number of spins. The mean magnetization characterizes the magnitude of the net magnetic moment of the system; in many magnetic systems this is zero above a critical temperature and becomes non-zero spontaneously at low temperatures. Similarly the mean-squared magnetization characterizes the average of the square of net components of the spins across the lattice. Either of these are commonly used to characterize the order parameter of a system. Rigorous analysis of the XY model shows the magnetization in the thermodynamic limit is zero, and that the square magnetization approximately follows formula_43, which vanishes in the thermodynamic limit. Indeed, at high temperatures this quantity approaches zero since the components of the spins will tend to be randomized and thus sum to zero. However at low temperatures for a finite system, the mean-square magnetization increases, suggesting there are regions of the spin space that are aligned to contribute to a non-zero contribution. The magnetization shown (for a 25x25 lattice) is one example of this, that appears to suggest a phase transition, while no such transition exists in the thermodynamic limit.
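The following is a minimal single-spin Metropolis sketch along the lines described above, together with the magnetization estimator; the lattice size, temperature, rotation step and number of sweeps are arbitrary illustrative choices, and no equilibration or error analysis is attempted.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(theta, beta, J=1.0, max_step=np.pi / 2):
    """One sweep of single-spin Metropolis updates on an L x L lattice (periodic boundaries)."""
    L = theta.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        old = theta[i, j]
        new = old + rng.uniform(-max_step, max_step)
        nbrs = (theta[(i + 1) % L, j], theta[(i - 1) % L, j],
                theta[i, (j + 1) % L], theta[i, (j - 1) % L])
        dE = -J * sum(np.cos(new - t) - np.cos(old - t) for t in nbrs)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):   # Metropolis acceptance
            theta[i, j] = new

def magnetization_per_spin(theta):
    """|M|/N as in the expression above."""
    return np.hypot(np.cos(theta).sum(), np.sin(theta).sum()) / theta.size

L = 16
theta = rng.uniform(-np.pi, np.pi, size=(L, L))
for _ in range(200):                    # far too short for production runs, fine for a demo
    metropolis_sweep(theta, beta=1.5)   # k_B T / J ~ 0.67, below the quoted transition estimate
print("|M|/N ≈", magnetization_per_spin(theta))
```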
Furthermore, using statistical mechanics one can relate thermodynamic averages to quantities like specific heat by calculating
formula_44
The specific heat is shown at low temperatures near the critical temperature formula_45. There is no feature in the specific heat consistent with critical behavior (like a divergence) at this predicted temperature. Indeed, estimating the critical temperature comes from other methods, like from the helicity modulus, or the temperature dependence of the divergence of susceptibility. However, there is a feature in the specific heat in the form of a peak at formula_46. This peak position and height have been shown not to depend on system size, for lattices of linear size greater than 256; indeed, the specific heat anomaly remains rounded and finite for increasing lattice size, with no divergent peak.
The nature of the critical transitions and vortex formation can be elucidated by considering a continuous version of the XY model. Here, the discrete spins formula_47 are replaced by a field formula_48 representing the spin's angle at any point in space. In this case the angle of the spins formula_48 must vary smoothly over changes in position. Expanding the original cosine as a Taylor series, the Hamiltonian can be expressed in the continuum approximation as
formula_49
The continuous version of the XY model is often used to model systems that possess order parameters with the same kinds of symmetry, e.g. superfluid helium, hexatic liquid crystals. This is what makes them peculiar from other phase transitions which are always accompanied with a symmetry breaking. Topological defects in the XY model lead to a vortex-unbinding transition from the low-temperature phase to the high-temperature disordered phase. Indeed, the fact that at high temperature correlations decay exponentially fast, while at low temperatures decay with power law, even though in both regimes "M"("β") = 0, is called Kosterlitz–Thouless transition. Kosterlitz and Thouless provided a simple argument of why this would be the case: this considers the ground state consisting of all spins in the same orientation, with the addition then of a single vortex. The presence of these contributes an entropy of roughly formula_50, where formula_51 is an effective length scale (for example, the lattice size for a discrete lattice) Meanwhile, the energy of the system increases due to the vortex, by an amount formula_52. Putting these together, the free energy of a system would change due to the spontaneous formation of a vortex by an amount
formula_53
In the thermodynamic limit, the system does not favor the formation of vortices at low temperatures, but does favor them at high temperatures, above the critical temperature formula_54. This indicates that at low temperatures, any vortices that arise will want to annihilate with antivortices to lower the system energy. Indeed, this will be the case qualitatively if one watches 'snapshots' of the spin system at low temperatures, where vortices and antivortices gradually come together to annihilate. Thus, the low-temperature state will consist of bound vortex-antivortex pairs. Meanwhile at high temperatures, there will be a collection of unbound vortices and antivortices that are free to move about the plane.
To visualize the Ising model, one can use an arrow pointing up or down, or represented as a point colored black/white to indicate its state. To visualize the XY spin system, the spins can be represented as an arrow pointing in some direction, or as being represented as a point with some color. Here it is necessary to represent the spin with a spectrum of colors due to each of the possible continuous variables. This can be done using, for example, a continuous and periodic red-green-blue spectrum. As shown on the figure, cyan corresponds to a zero angle (pointing to the right), whereas red corresponds to a 180 degree angle (pointing to the left). One can then study snapshots of the spin configurations at different temperatures to elucidate what happens above and below the critical temperature of the XY model. At high temperatures, the spins will not have a preferred orientation and there will be unpredictable variation of angles between neighboring spins, as there will be no preferred energetically favorable configuration. In this case, the color map will look highly pixellated. Meanwhile at low temperatures, one possible ground-state configuration has all spins pointed in the same orientation (same angle); these would correspond to regions (domains) of the color map where all spins have roughly the same color.
To identify vortices (or antivortices) present as a result of the Kosterlitz–Thouless transition, one can determine the signed change in angle by traversing a circle of lattice points counterclockwise. If the total change in angle is zero, this corresponds to no vortex being present; whereas a total change in angle of formula_55 corresponds to a vortex (or antivortex). These vortices are topologically non-trivial objects that come in vortex-antivortex pairs, which can separate or pair-annihilate. In the colormap, these defects can be identified in regions where there is a large color gradient where all colors of the spectrum meet around a point. Qualitatively, these defects can look like inward- or outward-pointing sources of flow, or whirlpools of spins that collectively circulate clockwise or counterclockwise, or hyperbolic-looking features with some spins pointing toward and some spins pointing away from the defect. As the configuration is studied at long time scales and at low temperatures, it is observed that many of these vortex-antivortex pairs get closer together and eventually pair-annihilate. It is only at high temperatures that these vortices and antivortices are liberated and unbind from one another.
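A sketch of the winding-number count just described, assuming the same square array of angles (with periodic boundaries) used in the Monte Carlo sketch above; each bond's angle difference is wrapped into a 2π window and summed around a plaquette.

```python
import numpy as np

def wrap(d):
    """Map an angle difference into [-pi, pi)."""
    return (d + np.pi) % (2.0 * np.pi) - np.pi

def winding_numbers(theta):
    """Winding number of every plaquette: +1 for a vortex, -1 for an antivortex, 0 otherwise."""
    L = theta.shape[0]
    w = np.zeros((L, L), dtype=int)
    for i in range(L):
        for j in range(L):
            loop = (theta[i, j], theta[i, (j + 1) % L],
                    theta[(i + 1) % L, (j + 1) % L], theta[(i + 1) % L, j])
            total = sum(wrap(loop[(k + 1) % 4] - loop[k]) for k in range(4))
            w[i, j] = int(round(total / (2.0 * np.pi)))
    return w

# For example, with the `theta` array from the Monte Carlo sketch above:
# w = winding_numbers(theta); print("vortices:", (w == 1).sum(), "antivortices:", (w == -1).sum())
```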
In the continuous XY model, the high-temperature spontaneous magnetization vanishes:
formula_56
Besides, cluster expansion shows that the spin correlations cluster exponentially fast: for instance
formula_57
At low temperatures, i.e. "β" ≫ 1, the spontaneous magnetization remains zero (see the Mermin–Wagner theorem),
formula_58
but the decay of the correlations is only power law: Fröhlich and Spencer found the lower bound
formula_59
while McBryan and Spencer found the upper bound, for any formula_60
formula_61
Three and higher dimensions.
Independently of the range of the interaction, at low enough temperature the magnetization is positive.
Phase transition.
As mentioned above in one dimension the XY model does not have a phase transition, while in two dimensions it has the Berezinski-Kosterlitz-Thouless transition between the phases with exponentially and powerlaw decaying correlation functions.
In three and higher dimensions the XY model has a ferromagnet-paramagnet phase transition. At low temperatures the spontaneous magnetization is nonzero: this is the ferromagnetic phase. As the temperature is increased, spontaneous magnetization gradually decreases and vanishes at a critical temperature. It remains zero at all higher temperatures: this is the paramagnetic phase.
In four and higher dimensions the phase transition has mean field theory critical exponents (with logarithmic corrections in four dimensions).
Three dimensional case: the critical exponents.
The three dimensional case is interesting because the critical exponents at the phase transition are nontrivial. Many three-dimensional physical systems belong to the same universality class as the three dimensional XY model and share the same critical exponents, most notably easy-plane magnets and liquid Helium-4. The values of these critical exponents are measured by experiments, Monte Carlo simulations, and can also be computed by theoretical methods of quantum field theory, such as the renormalization group and the conformal bootstrap. Renormalization group methods are applicable because the critical point of the XY model is believed to be described by a renormalization group fixed point. Conformal bootstrap methods are applicable because it is also believed to be a unitary three dimensional conformal field theory.
Most important critical exponents of the three dimensional XY model are formula_67. All of them can be expressed via just two numbers: the scaling dimensions formula_68 and formula_69 of the complex order parameter field formula_70 and of the leading singlet operator formula_71 (same asformula_72 in the Ginzburg–Landau description). Another important field is formula_73(same as formula_74), whose dimension formula_75 determines the correction-to-scaling exponent formula_76. According to a conformal bootstrap computation, these three dimensions are given by:
This gives the following values of the critical exponents:
Monte Carlo methods give compatible determinations: formula_77.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{h}_{j}=(h_j,0)"
},
{
"math_id": 1,
"text": " H(\\mathbf{s}) = - \\sum_{i\\neq j} J_{ij}\\; \\mathbf{s}_i\\cdot\\mathbf{s}_j -\\sum_j \\mathbf{h}_j\\cdot \\mathbf{s}_j =- \\sum_{i\\neq j} J_{ij}\\; \\cos(\\theta_i-\\theta_j) -\\sum_j h_j\\cos\\theta_j "
},
{
"math_id": 2,
"text": "P(\\mathbf{s})=\\frac{e^{-\\beta H(\\mathbf{s})}}{Z} \\qquad Z=\\int_{[-\\pi,\\pi]^\\Lambda} \\prod_{j\\in \\Lambda} d\\theta_j\\;e^{-\\beta H(\\mathbf{s})}."
},
{
"math_id": 3,
"text": "\\langle A(\\mathbf{s})\\rangle"
},
{
"math_id": 4,
"text": "\\langle \\mathbf{s}_i\\cdot \\mathbf{s}_j\\rangle_{J,2\\beta} \\le \\langle \\sigma_i\\sigma_j\\rangle_{J,\\beta}"
},
{
"math_id": 5,
"text": " \\beta_c^{XY}\\ge 2\\beta_c^{\\rm Is} "
},
{
"math_id": 6,
"text": "H(\\mathbf{s}) = - J [\\cos(\\theta_1-\\theta_2)+\\cdots+\\cos(\\theta_{L-1}-\\theta_L)]"
},
{
"math_id": 7,
"text": "\\theta_j=\\theta_j'+\\theta_{j-1}\\qquad j\\ge 2"
},
{
"math_id": 8,
"text": "\\begin{align}\nZ & = \\int_{-\\pi}^\\pi d\\theta_1 \\cdots d\\theta_L \\; e^{\\beta J\\cos(\\theta_1-\\theta_2)} \\cdots e^{\\beta J\\cos(\\theta_{L-1}-\\theta_L)} \\\\\n& = 2\\pi \\prod_{j=2}^L\\int_{-\\pi}^\\pi d\\theta'_j \\;e^{\\beta J\\cos\\theta'_j} = (2\\pi) \\left[\\int_{-\\pi}^\\pi d\\theta'_j \\;e^{\\beta J\\cos\\theta'_j}\\right]^{L-1} = (2\\pi)^L (I_0 (\\beta J))^{L-1}\n\\end{align}"
},
{
"math_id": 9,
"text": "I_0"
},
{
"math_id": 10,
"text": "L\\to \\infty"
},
{
"math_id": 11,
"text": "f(\\beta,h=0)=-\\lim_{L\\to \\infty} \\frac{1}{\\beta L} \\ln Z = - \\frac{1}{\\beta} \\ln [2\\pi I_0(\\beta J)]"
},
{
"math_id": 12,
"text": " \\frac{c}{k_{\\rm B}} = \\lim_{L\\to \\infty} \\frac{1}{L(k_{\\rm B} T)^2} \\frac{\\partial^2}{\\partial \\beta^2} (\\ln Z) = K^2 \\left(1 - \\frac{\\mu}{K} - \\mu^2\\right) "
},
{
"math_id": 13,
"text": "K = J/k_{\\rm B} T"
},
{
"math_id": 14,
"text": "\\mu"
},
{
"math_id": 15,
"text": "\\mu(K) = \\langle \\cos(\\theta - \\theta') \\rangle = \\frac{I_1(K)}{I_0(K)}"
},
{
"math_id": 16,
"text": "Z = \\text{tr}\\left\\{ \\prod_{i=1}^N \\oint d\\theta_i e^{\\beta J\\cos(\\theta_i - \\theta_{i+1})} \\right\\} "
},
{
"math_id": 17,
"text": "\\oint d\\theta' \\exp\\{\\beta J\\cos(\\theta' - \\theta)\\} \\psi(\\theta') = z_i \\psi(\\theta) "
},
{
"math_id": 18,
"text": "\\exp\\{\\beta J\\cos(\\theta-\\theta')\\} = \\sum_{n=-\\infty}^\\infty I_n(\\beta J) e^{in(\\theta-\\theta')} = \\sum_{n=-\\infty}^\\infty \\omega_n \\psi^*_n(\\theta') \\psi_n(\\theta) "
},
{
"math_id": 19,
"text": "\\psi = \\exp(in\\theta)"
},
{
"math_id": 20,
"text": "\\beta J"
},
{
"math_id": 21,
"text": "\\omega_n = 2\\pi I_n (\\beta J)"
},
{
"math_id": 22,
"text": "I_0 > I_1 > I_2 > \\cdots "
},
{
"math_id": 23,
"text": "I_{-n}(\\beta J) = I_n(\\beta J)"
},
{
"math_id": 24,
"text": "Z = [2\\pi I_0(\\beta J)]^L"
},
{
"math_id": 25,
"text": "h \\neq 0"
},
{
"math_id": 26,
"text": "h"
},
{
"math_id": 27,
"text": "\\chi\\equiv\\partial M/\\partial h"
},
{
"math_id": 28,
"text": "F=F_0 - \\frac{1}{2} \\chi h^2"
},
{
"math_id": 29,
"text": "\\chi(h\\to 0) = \\frac{C}{T} \\frac{1+\\mu}{1-\\mu}"
},
{
"math_id": 30,
"text": "C"
},
{
"math_id": 31,
"text": "\\mu = \\tanh K"
},
{
"math_id": 32,
"text": "T_c"
},
{
"math_id": 33,
"text": "(2k_{\\rm B}T_c/J) \\ln(2k_{\\rm B}T_c/J) = 1 "
},
{
"math_id": 34,
"text": " k_{\\rm B}T_c/J \\approx 0.8816"
},
{
"math_id": 35,
"text": "\\theta_i"
},
{
"math_id": 36,
"text": "\\Delta \\theta_i \\in (-\\Delta, \\Delta)"
},
{
"math_id": 37,
"text": "\\Delta E_i"
},
{
"math_id": 38,
"text": "e^{-\\beta \\Delta E_i}"
},
{
"math_id": 39,
"text": "k_{\\rm B} T_c/J = 0.8935(1)"
},
{
"math_id": 40,
"text": "\\frac{\\langle M \\rangle}{N} = \\frac{1}{N} |\\langle \\mathbf {s} \\rangle| = \\frac{1}{N} \\left| \\left\\langle \\left( \\sum_{i=1}^N \\cos \\theta_i, \\sum_{i=1}^N \\sin \\theta_i \\right)\\right\\rangle \\right|"
},
{
"math_id": 41,
"text": "\\frac{\\langle M^2 \\rangle }{N^2}= \\frac{1}{N^2} \\left\\langle s_x^2 + s_y^2 \\right\\rangle = \\frac{1}{N^2} \\left\\langle \\left( \\sum_{i=1}^N \\cos \\theta_i\\right)^2 + \\left(\\sum_{i=1}^N \\sin \\theta_i\\right)^2 \\right\\rangle "
},
{
"math_id": 42,
"text": "N=L\\times L"
},
{
"math_id": 43,
"text": "\\langle M^2 \\rangle \\approx N^{-T/4\\pi}"
},
{
"math_id": 44,
"text": "c/k_{\\rm B} = \\frac{ \\langle E^2 \\rangle - \\langle E \\rangle^2 }{N(k_{\\rm B} T)^2}"
},
{
"math_id": 45,
"text": "k_{\\rm B}T_c/J \\approx 0.88"
},
{
"math_id": 46,
"text": "1.167(1) k_{\\rm B} T/J"
},
{
"math_id": 47,
"text": "\\theta_n"
},
{
"math_id": 48,
"text": "\\theta(\\textbf{x})"
},
{
"math_id": 49,
"text": "E = \\int \\frac{J}{2} (\\nabla\\theta)^2 \\, d^2 \\mathbf{x}"
},
{
"math_id": 50,
"text": "\\Delta S = k_{\\rm B} \\ln(L^2/a^2)"
},
{
"math_id": 51,
"text": "a"
},
{
"math_id": 52,
"text": "\\Delta E = \\pi J \\ln(L/a)"
},
{
"math_id": 53,
"text": "\\Delta F = \\Delta E - T\\Delta S = (\\pi J - 2k_{\\rm B} T) \\ln (L/a)"
},
{
"math_id": 54,
"text": "T_c = \\pi J/2k_{\\rm B}"
},
{
"math_id": 55,
"text": "\\pm 2\\pi"
},
{
"math_id": 56,
"text": " M(\\beta):=|\\langle \\mathbf{s}_i \\rangle|=0 "
},
{
"math_id": 57,
"text": " |\\langle \\mathbf{s}_i\\cdot \\mathbf{s}_j\\rangle| \\le C(\\beta) e^{-c(\\beta)|i-j|}"
},
{
"math_id": 58,
"text": " M(\\beta):=|\\langle \\mathbf{s}_i\\rangle|=0"
},
{
"math_id": 59,
"text": "|\\langle \\mathbf{s}_i\\cdot \\mathbf{s}_j\\rangle| \\ge\\frac{C(\\beta)}{1+|i-j|^{\\eta(\\beta)}} "
},
{
"math_id": 60,
"text": "\\epsilon>0"
},
{
"math_id": 61,
"text": "|\\langle \\mathbf{s}_i\\cdot \\mathbf{s}_j\\rangle| \\le\\frac{C(\\beta,\\epsilon)}{1+|i-j|^{\\eta(\\beta,\\epsilon)}}"
},
{
"math_id": 62,
"text": " M(\\beta):=|\\langle \\mathbf{s}_i\\rangle|=0 "
},
{
"math_id": 63,
"text": " |\\langle \\mathbf{s}_i\\cdot \\mathbf{s}_j\\rangle| \\le C(\\beta)e^{-c(\\beta)|i-j|} "
},
{
"math_id": 64,
"text": " M(\\beta):=|\\langle \\mathbf{s}_i\\rangle|>0"
},
{
"math_id": 65,
"text": " \\langle \\; \\cdot \\; \\rangle^\\theta"
},
{
"math_id": 66,
"text": " \\langle \\mathbf{s}_i\\rangle^\\theta= M(\\beta) (\\cos \\theta, \\sin \\theta) "
},
{
"math_id": 67,
"text": "\\alpha,\\beta,\\gamma,\\delta,\\nu,\\eta"
},
{
"math_id": 68,
"text": "\\Delta_\\phi"
},
{
"math_id": 69,
"text": "\\Delta_s"
},
{
"math_id": 70,
"text": "\\phi"
},
{
"math_id": 71,
"text": "s"
},
{
"math_id": 72,
"text": "|\\phi|^2"
},
{
"math_id": 73,
"text": "s'"
},
{
"math_id": 74,
"text": "|\\phi|^4"
},
{
"math_id": 75,
"text": "\\Delta_{s'}"
},
{
"math_id": 76,
"text": "\\omega"
},
{
"math_id": 77,
"text": "\\eta=0.03810(8),\\nu=0.67169(7), \\omega=0.789(4)"
}
] | https://en.wikipedia.org/wiki?curid=954006 |
9541 | Design of experiments | Design of tasks
The design of experiments (DOE or DOX), also known as experiment design or experimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation.
In its simplest form, an experiment aims at predicting the outcome by introducing a change of the preconditions, which is represented by one or more independent variables, also referred to as "input variables" or "predictor variables." The change in one or more independent variables is generally hypothesized to result in a change in one or more dependent variables, also referred to as "output variables" or "response variables." The experimental design may also identify control variables that must be held constant to prevent external factors from affecting the results. Experimental design involves not only the selection of suitable independent, dependent, and control variables, but planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. There are multiple approaches for determining the set of design points (unique combinations of the settings of the independent variables) to be used in the experiment.
Main concerns in experimental design include the establishment of validity, reliability, and replicability. For example, these concerns can be partially addressed by carefully choosing the independent variable, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels of statistical power and sensitivity.
Correctly designed experiments advance knowledge in the natural and social sciences and engineering, with design of experiments methodology recognised as a key tool in the successful implementation of a Quality by Design (QbD) framework. Other applications include marketing and policy making. The study of the design of experiments is an important topic in metascience.
History.
Statistical experiments, following Charles S. Peirce.
A theory of statistical inference was developed by Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883), two publications that emphasized the importance of randomization-based inference in statistics.
Randomized experiments.
Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights.
Peirce's experiment inspired other researchers in psychology and education, which developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1800s.
Optimal designs for regression models.
Charles S. Peirce also contributed the first English-language publication on an optimal design for regression models in 1876. A pioneering optimal design for polynomial regression was suggested by Gergonne in 1815. In 1918, Kirstine Smith published optimal designs for polynomials of degree six (and less).
Sequences of experiments.
The use of a sequence of experiments, where the design of each may depend on the results of previous experiments, including the possible decision to stop experimenting, is within the scope of sequential analysis, a field that was pioneered by Abraham Wald in the context of sequential tests of statistical hypotheses. Herman Chernoff wrote an overview of optimal sequential designs, while adaptive designs have been surveyed by S. Zacks. One specific type of sequential design is the "two-armed bandit", generalized to the multi-armed bandit, on which early work was done by Herbert Robbins in 1952.
Fisher's principles.
A methodology for designing experiments was proposed by Ronald Fisher, in his innovative books: "The Arrangement of Field Experiments" (1926) and "The Design of Experiments" (1935). Much of his pioneering work dealt with agricultural applications of statistical methods. As a mundane example, he described how to test the lady tasting tea hypothesis, that a certain lady could distinguish by flavour alone whether the milk or the tea was first placed in the cup. These methods have been broadly adapted in biological, psychological, and agricultural research.
In some fields of study it is not possible to have independent measurements to a traceable metrology standard. Comparisons between treatments are much more valuable and are usually preferable, and often compared against a scientific control or traditional treatment that acts as baseline.
Random assignment is the process of assigning individuals at random to groups or to different groups in an experiment, so that each individual of the population has the same chance of becoming a participant in the study. The random assignment of individuals to groups (or conditions within a group) distinguishes a rigorous, "true" experiment from an observational study or "quasi-experiment". There is an extensive body of mathematical theory that explores the consequences of making the allocation of units to treatments by means of some random mechanism (such as tables of random numbers, or the use of randomization devices such as playing cards or dice). Assigning units to treatments at random tends to mitigate confounding, which makes effects due to factors other than the treatment to appear to result from the treatment.
The risks associated with random allocation (such as having a serious imbalance in a key characteristic between a treatment group and a control group) are calculable and hence can be managed down to an acceptable level by using enough experimental units. However, if the population is divided into several subpopulations that somehow differ, and the research requires each subpopulation to be equal in size, stratified sampling can be used. In that way, the units in each subpopulation are randomized, but not the whole sample. The results of an experiment can be generalized reliably from the experimental units to a larger statistical population of units only if the experimental units are a random sample from the larger population; the probable error of such an extrapolation depends on the sample size, among other things.
Measurements are usually subject to variation and measurement uncertainty; thus they are repeated and full experiments are replicated to help identify the sources of variation, to better estimate the true effects of treatments, to further strengthen the experiment's reliability and validity, and to add to the existing knowledge of the topic. However, certain conditions must be met before the replication of the experiment is commenced: the original research question has been published in a peer-reviewed journal or widely cited, the researcher is independent of the original experiment, the researcher must first try to replicate the original findings using the original data, and the write-up should state that the study conducted is a replication study that tried to follow the original study as strictly as possible.
Blocking is the non-random arrangement of experimental units into groups (blocks) consisting of units that are similar to one another. Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in the estimation of the source of variation under study.
Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors and sets of orthogonal contrasts are uncorrelated and independently distributed if the data are normal. Because of this independence, each orthogonal treatment provides different information to the others. If there are "T" treatments and "T" – 1 orthogonal contrasts, all the information that can be captured from the experiment is obtainable from the set of contrasts.
Use of multifactorial experiments instead of the one-factor-at-a-time method. These are efficient at evaluating the effects and possible interactions of several factors (independent variables). Analysis of experiment design is built on the foundation of the analysis of variance, a collection of models that partition the observed variance into components, according to what factors the experiment must estimate or test.
Example.
This example of design experiments is attributed to Harold Hotelling, building on examples from Frank Yates. The experiments designed in this example involve combinatorial designs.
Weights of eight objects are measured using a pan balance and set of standard weights. Each weighing measures the weight difference between objects in the left pan and any objects in the right pan by adding calibrated weights to the lighter pan until the balance is in equilibrium. Each measurement has a random error. The average error is zero; the standard deviations of the probability distribution of the errors is the same number σ on different weighings; errors on different weighings are independent. Denote the true weights by
formula_0
We consider two different experiments:
formula_1
Let "Y""i" be the measured difference for "i" = 1, ..., 8. Then the estimated value of the weight "θ"1 is
formula_2
Similar estimates can be found for the weights of the other items:
formula_3
The question of design of experiments is: which experiment is better?
The variance of the estimate "X"1 of "θ"1 is "σ"2 if we use the first experiment. But if we use the second experiment, the variance of the estimate given above is "σ"2/8. Thus the second experiment gives us 8 times as much precision for the estimate of a single item, and estimates all items simultaneously, with the same precision. What the second experiment achieves with eight weighings would require 64 weighings if the items are weighed separately. However, note that the estimates for the items obtained in the second experiment have errors that correlate with each other.
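A small simulation sketch of this comparison. The sign matrix transcribes the eight weighings of the second experiment (+1 for the left pan, −1 for the right); the true weights, the noise level σ and the number of trials are arbitrary choices made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Items placed in the left pan for each of the eight weighings (second experiment).
left_pans = [
    {1, 2, 3, 4, 5, 6, 7, 8},
    {1, 2, 3, 8}, {1, 4, 5, 8}, {1, 6, 7, 8},
    {2, 4, 6, 8}, {2, 5, 7, 8}, {3, 4, 7, 8}, {3, 5, 6, 8},
]
S = np.array([[1 if item in pan else -1 for item in range(1, 9)] for pan in left_pans])
assert np.array_equal(S.T @ S, 8 * np.eye(8, dtype=int))   # orthogonal (Hadamard-type) design

theta = np.arange(1.0, 9.0)     # illustrative true weights
sigma = 0.1                     # illustrative measurement noise
trials = 20000

Y = S @ theta + sigma * rng.normal(size=(trials, 8))   # simulated weighings
est = Y @ S / 8.0                                      # least-squares estimates, theta_hat = S^T Y / 8

print("empirical Var of theta_1 estimate:", est[:, 0].var())
print("sigma^2 =", sigma**2, "  sigma^2 / 8 =", sigma**2 / 8)
```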
Many problems of the design of experiments involve combinatorial designs, as in this example and others.
Avoiding false positives.
False positive conclusions, often resulting from the pressure to publish or the author's own confirmation bias, are an inherent hazard in many fields.
Use of double-blind designs can prevent biases potentially leading to false positives in the data collection phase. When a double-blind design is used, participants are randomly assigned to experimental groups but the researcher is unaware of what participants belong to which group. Therefore, the researcher can not affect the participants' response to the intervention.
Experimental designs with undisclosed degrees of freedom are a problem, in that they can lead to conscious or unconscious "p-hacking": trying multiple things until you get the desired result. It typically involves the manipulation – perhaps unconsciously – of the process of statistical analysis and the degrees of freedom until they return a figure below the p<.05 level of statistical significance.
P-hacking can be prevented by preregistering studies, in which researchers have to send their data analysis plan to the journal they wish to publish their paper in before they even start their data collection, so no data manipulation is possible.
Another way to prevent this is to extend the double-blind design to the data-analysis phase, making the study triple-blind: the data are sent to a data analyst unrelated to the research, who scrambles the data so that there is no way to know which group participants belong to before outliers are potentially removed.
Clear and complete documentation of the experimental methodology is also important in order to support replication of results.
Discussion topics when setting up an experimental design.
An experimental design or randomized clinical trial requires careful consideration of several factors before actually doing the experiment. An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment. Some of the following topics have already been discussed in the principles of experimental design section:
The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, which is where their intervention testing the hypothesis is implemented, and a control group, which has all the same element as the experimental group, without the interventional element. Thus, when everything else except for one intervention is held constant, researchers can certify with some certainty that this one element is what caused the observed change. In some instances, having a control group is not ethical. This is sometimes solved using two different experimental groups. In some cases, independent variables cannot be manipulated, for example when testing the difference between two groups who have a different disease, or testing the difference between genders (obviously variables that would be hard or unethical to assign participants to). In these cases, a quasi-experimental design may be used.
Causal attributions.
In the pure experimental design, the independent (predictor) variable is manipulated by the researcher – that is – every participant of the research is chosen randomly from the population, and each participant chosen is assigned randomly to conditions of the independent variable. Only when this is done is it possible to certify with high probability that the reason for the differences in the outcome variables are caused by the different conditions. Therefore, researchers should choose the experimental design over other design types whenever possible. However, the nature of the independent variable does not always allow for manipulation. In those cases, researchers must be aware of not certifying about causal attribution when their design doesn't allow for it. For example, in observational designs, participants are not assigned randomly to conditions, and so if there are differences found in outcome variables between conditions, it is likely that there is something other than the differences between the conditions that causes the differences in outcomes, that is – a third variable. The same goes for studies with correlational design (Adér & Mellenbergh, 2008).
Statistical control.
It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments.
To control for nuisance variables, researchers institute control checks as additional measures. Investigators should ensure that uncontrolled influences (e.g., source credibility perception) do not skew the findings of the study. A manipulation check is one example of a control check. Manipulation checks allow investigators to isolate the chief variables to strengthen support that these variables are operating as planned.
One of the most important requirements of experimental research designs is the necessity of eliminating the effects of spurious, intervening, and antecedent variables. In the most basic model, cause (X) leads to effect (Y). But there could be a third variable (Z) that influences (Y), and X might not be the true cause at all. Z is said to be a spurious variable and must be controlled for. The same is true for intervening variables (a variable in between the supposed cause (X) and the effect (Y)), and anteceding variables (a variable prior to the supposed cause (X) that is the true cause). When a third variable is involved and has not been controlled for, the relation is said to be a zero order relationship. In most practical applications of experimental research designs there are several causes (X1, X2, X3). In most designs, only one of these causes is manipulated at a time.
Experimental designs after Fisher.
Some efficient designs for estimating several main effects were found independently and in near succession by Raj Chandra Bose and K. Kishen in 1940 at the Indian Statistical Institute, but remained little known until the Plackett–Burman designs were published in "Biometrika" in 1946. About the same time, C. R. Rao introduced the concepts of orthogonal arrays as experimental designs. This concept played a central role in the development of Taguchi methods by Genichi Taguchi, which took place during his visit to Indian Statistical Institute in early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries and subsequently were also embraced by US industry albeit with some reservations.
In 1950, Gertrude Mary Cox and William Gemmell Cochran published the book "Experimental Designs," which became the major reference work on the design of experiments for statisticians for years afterwards.
Developments of the theory of linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in linear algebra, algebra and combinatorics.
As with other branches of statistics, experimental design is pursued using both frequentist and Bayesian approaches: In evaluating statistical procedures like experimental designs, frequentist statistics studies the sampling distribution while Bayesian statistics updates a probability distribution on the parameter space.
Some important contributors to the field of experimental designs are C. S. Peirce, R. A. Fisher, F. Yates, R. C. Bose, A. C. Atkinson, R. A. Bailey, D. R. Cox, G. E. P. Box, W. G. Cochran, W. T. Federer, V. V. Fedorov, A. S. Hedayat, J. Kiefer, O. Kempthorne, J. A. Nelder, Andrej Pázman, Friedrich Pukelsheim, D. Raghavarao, C. R. Rao, Shrikhande S. S., J. N. Srivastava, William J. Studden, G. Taguchi and H. P. Wynn.
The textbooks of D. Montgomery, R. Myers, and G. Box/W. Hunter/J.S. Hunter have reached generations of students and practitioners. Furthermore, there is ongoing discussion of experimental design in the context of model building for models either static or dynamic models, also known as system identification.
Human participant constraints.
Laws and ethical considerations preclude some carefully designed experiments with human subjects. Legal constraints are dependent on jurisdiction. Constraints may involve institutional review boards, informed consent and confidentiality affecting both clinical (medical) trials and behavioral and social science experiments.
In the field of toxicology, for example, experimentation is performed on laboratory "animals" with the goal of defining safe exposure limits for "humans". Balancing the constraints are views from the medical field. Regarding the randomization of patients, "... if no one knows which therapy is better, there is no ethical imperative to use one therapy or another." (p 380) Regarding experimental design, "...it is clearly not ethical to place subjects at risk to collect data in a poorly designed study when this situation can be easily avoided...". (p 393)
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\theta_1, \\dots, \\theta_8.\\,"
},
{
"math_id": 1,
"text": "\n\\begin{array}{lcc}\n& \\text{left pan} & \\text{right pan} \\\\\n\\hline\n\\text{1st weighing:} & 1\\ 2\\ 3\\ 4\\ 5\\ 6\\ 7\\ 8 & \\text{(empty)} \\\\ \n\\text{2nd:} & 1\\ 2\\ 3\\ 8\\ & 4\\ 5\\ 6\\ 7 \\\\\n\\text{3rd:} & 1\\ 4\\ 5\\ 8\\ & 2\\ 3\\ 6\\ 7 \\\\\n\\text{4th:} & 1\\ 6\\ 7\\ 8\\ & 2\\ 3\\ 4\\ 5 \\\\\n\\text{5th:} & 2\\ 4\\ 6\\ 8\\ & 1\\ 3\\ 5\\ 7 \\\\\n\\text{6th:} & 2\\ 5\\ 7\\ 8\\ & 1\\ 3\\ 4\\ 6 \\\\\n\\text{7th:} & 3\\ 4\\ 7\\ 8\\ & 1\\ 2\\ 5\\ 6 \\\\\n\\text{8th:} & 3\\ 5\\ 6\\ 8\\ & 1\\ 2\\ 4\\ 7\n\\end{array}\n"
},
{
"math_id": 2,
"text": "\\widehat{\\theta}_1 = \\frac{Y_1 + Y_2 + Y_3 + Y_4 - Y_5 - Y_6 - Y_7 - Y_8}{8}. "
},
{
"math_id": 3,
"text": "\n\\begin{align}\n\\widehat{\\theta}_2 & = \\frac{Y_1 + Y_2 - Y_3 - Y_4 + Y_5 + Y_6 - Y_7 - Y_8} 8. \\\\[5pt]\n\\widehat{\\theta}_3 & = \\frac{Y_1 + Y_2 - Y_3 - Y_4 - Y_5 - Y_6 + Y_7 + Y_8} 8. \\\\[5pt]\n\\widehat{\\theta}_4 & = \\frac{Y_1 - Y_2 + Y_3 - Y_4 + Y_5 - Y_6 + Y_7 - Y_8} 8. \\\\[5pt]\n\\widehat{\\theta}_5 & = \\frac{Y_1 - Y_2 + Y_3 - Y_4 - Y_5 + Y_6 - Y_7 + Y_8} 8. \\\\[5pt]\n\\widehat{\\theta}_6 & = \\frac{Y_1 - Y_2 - Y_3 + Y_4 + Y_5 - Y_6 - Y_7 + Y_8} 8. \\\\[5pt]\n\\widehat{\\theta}_7 & = \\frac{Y_1 - Y_2 - Y_3 + Y_4 - Y_5 + Y_6 + Y_7 - Y_8} 8. \\\\[5pt]\n\\widehat{\\theta}_8 & = \\frac{Y_1 + Y_2 + Y_3 + Y_4 + Y_5 + Y_6 + Y_7 + Y_8} 8.\n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=9541 |
9541424 | Compressibility equation | Equation which relates the isothermal compressibility to the structure of the liquid
In statistical mechanics and thermodynamics the compressibility equation refers to an equation which relates the isothermal compressibility (and indirectly the pressure) to the structure of the liquid. It reads:formula_0where formula_1 is the number density, g(r) is the radial distribution function and formula_2 is the isothermal compressibility.
Using the Fourier representation of the Ornstein-Zernike equation the compressibility equation can be rewritten in the form:
formula_3
where h(r) and c(r) are the indirect and direct correlation functions respectively. The compressibility equation is one of the many integral equations in statistical mechanics.
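As a toy numerical illustration (an assumption added here, not taken from the article): for a dilute hard-sphere fluid one may take g(r) ≈ 0 for r < σ and g(r) ≈ 1 otherwise, so the three-dimensional integral of g(r) − 1 reduces to −4πσ³/3. The sketch below checks a simple quadrature against that value and evaluates the right-hand side of the compressibility equation; the density and diameter are arbitrary.

```python
import numpy as np

sigma_hs = 1.0      # hard-sphere diameter (arbitrary units)
rho = 0.05          # number density, chosen low so the toy g(r) is reasonable

r = np.linspace(1e-6, 10.0 * sigma_hs, 200001)
g = np.where(r < sigma_hs, 0.0, 1.0)          # toy low-density radial distribution function
dr = r[1] - r[0]

integral = 4.0 * np.pi * np.sum(r**2 * (g - 1.0)) * dr   # 3D integral of [g(r) - 1]
rhs = 1.0 + rho * integral                               # = rho * k_B T * (d rho / d p)

print(integral, -4.0 * np.pi * sigma_hs**3 / 3.0)        # quadrature vs closed form
print("rho k_B T kappa_T ≈", rhs)
```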
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "kT\\left(\\frac{\\partial \\rho}{\\partial p}\\right)=1+\\rho \\int_V \\mathrm{d} \\mathbf{r} [g(r)-1] "
},
{
"math_id": 1,
"text": "\\rho"
},
{
"math_id": 2,
"text": "kT\\left(\\frac{\\partial \\rho}{\\partial p}\\right)"
},
{
"math_id": 3,
"text": "\\frac{1}{kT}\\left(\\frac{\\partial p}{\\partial \\rho}\\right) = \\frac{1}{1+\\rho \\int h(r) \\mathrm{d} \\mathbf{r} }=\\frac{1}{1+\\rho \\hat{H}(0)}=1-\\rho\\hat{C}(0)=1-\\rho \\int c(r) \\mathrm{d} \\mathbf{r} "
}
] | https://en.wikipedia.org/wiki?curid=9541424 |
954281 | Cache replacement policies | Algorithm for caching data
In computing, cache replacement policies (also known as cache replacement algorithms or cache algorithms) are optimizing instructions or algorithms which a computer program or hardware-maintained structure can utilize to manage a cache of information. Caching improves performance by keeping recent or often-used data items in memory locations which are faster, or computationally cheaper to access, than normal memory stores. When the cache is full, the algorithm must choose which items to discard to make room for new data.
Overview.
The average memory reference time is
formula_0
where
formula_1 = miss ratio = 1 - (hit ratio)
formula_2 = time to make main-memory access when there is a miss (or, with a multi-level cache, average memory reference time for the next-lower cache)
formula_3 = latency: time to reference the cache (should be the same for hits and misses)
formula_4 = secondary effects, such as queuing effects in multiprocessor systems
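As a quick illustration of this formula, the sketch below plugs in assumed numbers (a 3% miss ratio, a 100 ns miss penalty, and a 2 ns hit latency; none of these values come from the article):

```python
def average_memory_reference_time(miss_ratio, t_miss, t_hit, secondary=0.0):
    """T = m * Tm + Th + E, with the symbols defined above."""
    return miss_ratio * t_miss + t_hit + secondary

# Illustrative (assumed) values: 3% miss ratio, 100 ns miss penalty, 2 ns hit latency.
print(average_memory_reference_time(0.03, 100e-9, 2e-9))  # ~5e-9 s, i.e. about 5 ns
```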
A cache has two primary figures of merit: latency and hit ratio. A number of secondary factors also affect cache performance.
The hit ratio of a cache describes how often a searched-for item is found. More efficient replacement policies track more usage information to improve the hit rate for a given cache size.
The latency of a cache describes how long after requesting a desired item the cache can return that item when there is a hit. Faster replacement strategies typically keep track of less usage information—or, with a direct-mapped cache, no information—to reduce the time required to update the information. Each replacement strategy is a compromise between hit rate and latency.
Hit-rate measurements are typically performed on benchmark applications, and the hit ratio varies by application. Video and audio streaming applications often have a hit ratio near zero, because each bit of data in the stream is read once (a compulsory miss), used, and then never read or written again. Many cache algorithms (particularly LRU) allow streaming data to fill the cache, pushing out information which will soon be used again (cache pollution). Other factors may be size, length of time to obtain, and expiration. Depending on cache size, no further caching algorithm to discard items may be needed. Algorithms also maintain cache coherence when several caches are used for the same data, such as multiple database servers updating a shared data file.
Policies.
Bélády's algorithm.
The most efficient caching algorithm would be to discard information which would not be needed for the longest time; this is known as Bélády's optimal algorithm, optimal replacement policy, or the clairvoyant algorithm. Since it is generally impossible to predict how far in the future information will be needed, this is unfeasible in practice. The practical minimum can be calculated after experimentation, and the effectiveness of a chosen cache algorithm can be compared.
When a page fault occurs, a set of pages is in memory. In the example, the sequence of 5, 0, 1 is accessed by Frame 1, Frame 2, and Frame 3 respectively. When 2 is accessed, it replaces value 5 (which is in frame 1), predicting that value 5 will not be accessed in the near future. Because a general-purpose operating system cannot predict when 5 will be accessed, Bélády's algorithm cannot be implemented there.
Random replacement (RR).
Random replacement randomly selects an item and discards it to make space when necessary. This algorithm does not require keeping any access history. It has been used in ARM processors due to its simplicity, and it allows efficient stochastic simulation.
Simple queue-based policies.
First in first out (FIFO).
With this algorithm, the cache behaves like a FIFO queue; it evicts blocks in the order in which they were added, regardless of how often or how many times they were accessed before.
Last in first out (LIFO) or First in last out (FILO).
The cache behaves like a stack, the exact opposite of a FIFO queue: it evicts the block added most recently first, regardless of how often or how many times it was accessed before.
SIEVE.
SIEVE is a simple eviction algorithm designed specifically for web caches, such as key-value caches and Content Delivery Networks.
It uses the ideas of lazy promotion and quick demotion: SIEVE does not update its global data structure on cache hits, delaying the update until eviction time, and it quickly evicts newly inserted objects, because cache workloads tend to show high one-hit-wonder ratios and most new objects are not worth keeping in the cache. SIEVE uses a single FIFO queue and a moving hand to select objects to evict. Objects in the cache have one bit of metadata indicating whether the object has been requested after being admitted into the cache. The eviction hand points to the tail of the queue at the beginning and moves toward the head over time. Compared with the CLOCK eviction algorithm, retained objects in SIEVE stay in their old positions; new objects are therefore always at the head, and old objects are always at the tail. As the hand moves toward the head, new objects are quickly evicted (quick demotion), which is the key to SIEVE's high efficiency. SIEVE is simpler than LRU, yet achieves lower miss ratios than LRU, on par with state-of-the-art eviction algorithms. Moreover, on stationary skewed workloads, SIEVE is better than existing known algorithms, including LFU.
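The following is a toy sketch of the mechanism just described, not the authors' reference implementation. A Python list stands in for the FIFO queue, and the linear `index()` scan for the hand would be replaced by a doubly linked list in practice.

```python
class SieveCache:
    """Toy sketch of SIEVE: one FIFO list, one 'visited' bit per object, and a
    hand that scans from the tail (oldest) toward the head (newest)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.order = []        # index 0 = head (newest), last index = tail (oldest)
        self.visited = {}      # key -> bool; also serves as the membership test
        self.hand = None       # key the hand currently points at, or None

    def _evict(self):
        # Resume at the hand if it still points at a cached object, else at the tail.
        i = self.order.index(self.hand) if self.hand in self.visited else len(self.order) - 1
        while self.visited[self.order[i]]:
            self.visited[self.order[i]] = False   # retained objects keep their position
            i -= 1                                # move the hand toward the head ...
            if i < 0:
                i = len(self.order) - 1           # ... wrapping back to the tail
        victim = self.order.pop(i)
        del self.visited[victim]
        # The hand now points at the next object toward the head; Python's -1
        # index conveniently wraps to the tail when the head itself was evicted.
        self.hand = self.order[i - 1] if self.order else None

    def access(self, key):
        if key in self.visited:          # hit: lazy promotion, only set the bit
            self.visited[key] = True
            return True
        if len(self.order) >= self.capacity:
            self._evict()
        self.order.insert(0, key)        # new objects enter at the head
        self.visited[key] = False
        return False
```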
Simple recency-based policies.
Least Recently Used (LRU).
Discards least recently used items first. This algorithm requires keeping track of what was used and when, which is cumbersome. It requires "age bits" for cache lines, and tracks the least recently used cache line based on them. When a cache line is used, the age of the other cache lines changes. LRU is a family of caching algorithms that includes 2Q by Theodore Johnson and Dennis Shasha and LRU/K by Pat O'Neil, Betty O'Neil and Gerhard Weikum. The access sequence for the example is A B C D E D F:
When A B C D is installed in the blocks with sequence numbers (increment 1 for each new access) and E is accessed, it is a miss and must be installed in a block. With the LRU algorithm, E will replace A because A has the lowest rank (A(0)). In the next-to-last step, D is accessed and the sequence number is updated. F is then accessed, replacing B – which had the lowest rank, (B(1)).
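A minimal LRU sketch is shown below, using Python's `OrderedDict` to record recency of use; the driver replays the A B C D E D F sequence from the example with a four-entry cache.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: the OrderedDict order records recency of use."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # discard the least recently used item

# Replay the A B C D E D F sequence from the example with capacity 4:
cache = LRUCache(4)
for ref in "ABCDEDF":
    cache.put(ref, ref)
print(list(cache.data))                      # ['C', 'E', 'D', 'F']: A and B were evicted
```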
Time-aware, least-recently used.
Time-aware, least-recently-used (TLRU) is a variant of LRU designed for when the contents of a cache have a valid lifetime. The algorithm is suitable for network cache applications such as information-centric networking (ICN), content delivery networks (CDNs) and distributed networks in general. TLRU introduces a term: TTU (time to use), a timestamp of content (or a page) which stipulates the usability time for the content based on its locality and the content publisher. TTU provides more control to a local administrator in regulating network storage.
When content subject to TLRU arrives, a cache node calculates the local TTU based on the TTU assigned by the content publisher. The local TTU value is calculated with a locally-defined function. When the local TTU value is calculated, content replacement is performed on a subset of the total content of the cache node. TLRU ensures that less-popular and short-lived content is replaced with incoming content.
Most-recently-used (MRU).
Unlike LRU, MRU discards the most-recently-used items first. At the 11th VLDB conference, Chou and DeWitt said: "When a file is being repeatedly scanned in a [looping sequential] reference pattern, MRU is the best replacement algorithm." Researchers presenting at the 22nd VLDB conference noted that for random access patterns and repeated scans over large datasets (also known as cyclic access patterns), MRU cache algorithms have more hits than LRU due to their tendency to retain older data. MRU algorithms are most useful in situations where the older an item is, the more likely it is to be accessed. The access sequence for the example is A B C D E C D B:
A B C D are placed in the cache, since there is space available. At the fifth access (E), the block which held D is replaced with E, since D was used most recently. The access to C is a hit and makes C the most recently used block; at the next access (to D), C is therefore replaced.
Segmented LRU (SLRU).
An SLRU cache is divided into two segments: probationary and protected. Lines in each segment are ordered from most- to least-recently-accessed. Data from misses is added to the cache at the most-recently-accessed end of the probationary segment. Hits are removed from where they reside and added to the most-recently-accessed end of the protected segment; lines in the protected segment have been accessed at least twice. The protected segment is finite; migration of a line from the probationary segment to the protected segment may force the migration of the LRU line in the protected segment to the most-recently-used end of the probationary segment, giving this line another chance to be accessed before being replaced. The size limit of the protected segment is an SLRU parameter which varies according to I/O workload patterns. When data must be discarded from the cache, lines are obtained from the LRU end of the probationary segment.
LRU approximations.
LRU may be expensive in caches with higher associativity. Practical hardware usually employs an approximation to achieve similar performance at a lower hardware cost.
Pseudo-LRU (PLRU).
For CPU caches with large associativity (generally > four ways), the implementation cost of LRU becomes prohibitive. In many CPU caches, an algorithm that almost always discards one of the least recently used items is sufficient; many CPU designers choose a PLRU algorithm, which needs only one bit per cache item to work. PLRU typically has a slightly worse miss ratio, slightly better latency, slightly lower power consumption, and lower overhead than LRU.
Bits work as a binary tree of one-bit pointers which point to a less-recently-used sub-tree. Following the pointer chain to the leaf node identifies the replacement candidate. With an access, all pointers in the chain from the accessed way's leaf node to the root node are set to point to a sub-tree which does not contain the accessed path. The access sequence in the example is A B C D E:
When there is access to a value (such as A) and it is not in the cache, it is loaded from memory and placed in the block where the arrows are pointing in the example. After that block is placed, the arrows are flipped to point the opposite way. A, B, C and D are placed; E replaces A as the cache fills because that was where the arrows were pointing, and the arrows which led to A flip to point in the opposite direction (to B, the block which will be replaced on the next cache miss).
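A sketch of these tree bits is given below for a set with a power-of-two number of ways (four is assumed here); the bit array stores the internal nodes of the binary tree. It is an illustrative model, not any particular CPU's implementation, and it does not model empty (invalid) ways.

```python
class TreePLRU:
    """Sketch of tree pseudo-LRU for one cache set. One bit per internal tree
    node points toward the less recently used half of the set."""

    def __init__(self, ways=4):
        self.ways = ways
        self.bits = [0] * (ways - 1)    # internal nodes of a complete binary tree

    def victim(self):
        # Follow the pointer bits from the root down to a leaf (a way).
        node = 0
        while node < self.ways - 1:
            node = 2 * node + 1 + self.bits[node]
        return node - (self.ways - 1)   # leaf index -> way index

    def touch(self, way):
        # Walk from the leaf to the root, making every bit on the path point
        # away from the sub-tree that contains the accessed way.
        node = way + (self.ways - 1)
        while node > 0:
            parent = (node - 1) // 2
            self.bits[parent] = 0 if node == 2 * parent + 2 else 1
            node = parent

plru = TreePLRU(4)
for way in (0, 1, 2, 3):    # A, B, C, D are installed and touched in order
    plru.touch(way)
print(plru.victim())        # 0 -- the way holding A is chosen when E must be installed
```

This reproduces the behaviour in the example above: after A, B, C and D are touched in order, the pointer chain leads back to A's way, so E replaces A.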
Clock-Pro.
The LRU algorithm cannot be implemented in the critical path of computer systems, such as operating systems, due to its high overhead; Clock, an approximation of LRU, is commonly used instead. Clock-Pro is an approximation of LIRS for low-cost implementation in systems. Clock-Pro has the basic Clock framework, with three advantages. It has three "clock hands" (unlike Clock's single "hand"), and can approximately measure the reuse distance of data accesses. Like LIRS, it can quickly evict one-time-access or low-locality data items. Clock-Pro is as complex as Clock, and is easy to implement at low cost. The buffer-cache replacement implementation in the 2017 version of Linux combines LRU and Clock-Pro.
Simple frequency-based policies.
Least frequently used (LFU).
The LFU algorithm counts how often an item is needed; those used less often are discarded first. This is similar to LRU, except that how many times a block was accessed is stored instead of how recently. While running an access sequence, the block which was used the fewest times will be removed from the cache.
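A toy LFU sketch follows; ties between equally infrequent blocks are broken arbitrarily here, whereas practical designs usually add a recency tie-breaker or aging (as in LFUDA below).

```python
from collections import Counter

class LFUCache:
    """Toy LFU sketch: evict the block with the smallest access count."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.counts = Counter()

    def access(self, key, value=None):
        if key in self.data:                 # hit: bump the frequency counter
            self.counts[key] += 1
            return self.data[key]
        if len(self.data) >= self.capacity:  # miss with a full cache: evict the LFU block
            victim = min(self.counts, key=self.counts.get)
            del self.data[victim]
            del self.counts[victim]
        self.data[key] = value
        self.counts[key] = 1
        return value
```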
Least frequent recently used (LFRU).
The least frequent recently used (LFRU) algorithm combines the benefits of LFU and LRU. LFRU is suitable for network cache applications such as ICN, CDNs, and distributed networks in general. In LFRU, the cache is divided into two partitions: privileged and unprivileged. The privileged partition is protected and, if content is popular, it is pushed into the privileged partition. In replacing the privileged partition, LFRU evicts content from the unprivileged partition; pushes content from the privileged to the unprivileged partition, and inserts new content into the privileged partition. LRU is used for the privileged partition and an approximated LFU (ALFU) algorithm for the unprivileged partition.
LFU with dynamic aging (LFUDA).
A variant, LFU with dynamic aging (LFUDA), uses dynamic aging to accommodate shifts in a set of popular objects; it adds a cache-age factor to the reference count when a new object is added to the cache or an existing object is re-referenced. LFUDA increments cache age when evicting blocks by setting it to the evicted object's key value, and the cache age is always less than or equal to the minimum key value in the cache. If an object was frequently accessed in the past and becomes unpopular, it will remain in the cache for a long time (preventing newly- or less-popular objects from replacing it). Dynamic aging reduces the number of such objects, making them eligible for replacement, and LFUDA reduces cache pollution caused by LFU when a cache is small.
S3-FIFO.
This is a new eviction algorithm designed in 2023. Compared to existing algorithms, which mostly build on LRU (least-recently-used), S3-FIFO only uses three FIFO queues: a small queue occupying 10% of cache space, a main queue that uses 90% of the cache space, and a ghost queue that only stores object metadata. The small queue is used to filter out one-hit-wonders (objects that are only accessed once in a short time window); the main queue is used to store popular objects and uses reinsertion to keep them in the cache; and the ghost queue is used to catch potentially-popular objects that are evicted from the small queue. Objects are first inserted into the small queue (if they are not found in the ghost queue, otherwise inserted into the main queue); upon eviction from the small queue, if an object has been requested, it is reinserted into the main queue, otherwise, it is evicted and the metadata is tracked in the ghost queue.
S3-FIFO demonstrates that FIFO queues are sufficient to design efficient and scalable eviction algorithms. Compared to LRU and LRU-based algorithms, S3-FIFO can achieve 6x higher throughput. In addition, on web cache workloads, S3-FIFO achieves the lowest miss ratio among the 11 state-of-the-art algorithms the authors compared it with.
RRIP-style policies.
RRIP-style policies are the basis for other cache replacement policies, including Hawkeye.
Re-Reference Interval Prediction (RRIP).
RRIP is a flexible policy, proposed by Intel, which attempts to provide good scan resistance while allowing older cache lines that have not been reused to be evicted. All cache lines have a prediction value, the RRPV (re-reference prediction value), that should correlate with when the line is expected to be reused. The RRPV is usually high on insertion; if a line is not reused soon, it will be evicted to prevent scans (large amounts of data used only once) from filling the cache. When a cache line is reused the RRPV is set to zero, indicating that the line has been reused once and is likely to be reused again.
On a cache miss, the line with an RRPV equal to the maximum possible RRPV is evicted; with 3-bit values, a line with an RRPV of 2³ − 1 = 7 is evicted. If no lines have this value, all RRPVs in the set are increased by 1 until one reaches it. A tie-breaker is needed, and usually, it is the first line on the left. The increase is needed to ensure that older lines are aged properly and will be evicted if they are not reused.
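As a concrete illustration of these mechanics, here is a sketch of a single set with an assumed 4-way associativity and 3-bit RRPVs. It is an illustrative model, not Intel's implementation; the RRPV assigned on insertion is left as a parameter, since the SRRIP and BRRIP variants described next differ only in that choice.

```python
M = 3
MAX_RRPV = 2**M - 1    # 7 with 3-bit re-reference prediction values

class RRIPSet:
    """Sketch of the RRIP mechanics for one cache set."""

    def __init__(self, ways=4):
        self.tags = [None] * ways
        self.rrpv = [MAX_RRPV] * ways          # empty ways look like distant re-reference

    def access(self, tag, insertion_rrpv):
        if tag in self.tags:                   # hit: the line was re-referenced
            self.rrpv[self.tags.index(tag)] = 0
            return True
        # Miss: age the set until some line reaches MAX_RRPV, then evict the
        # first such line from the left (the tie-breaker mentioned above).
        while MAX_RRPV not in self.rrpv:
            self.rrpv = [v + 1 for v in self.rrpv]
        victim = self.rrpv.index(MAX_RRPV)
        self.tags[victim] = tag
        self.rrpv[victim] = insertion_rrpv     # SRRIP/BRRIP choose this value
        return False
```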
Static RRIP (SRRIP).
SRRIP inserts lines with an RRPV value of maxRRPV; a line which has just been inserted will be the most likely to be evicted on a cache miss.
Bimodal RRIP (BRRIP).
SRRIP performs well normally, but suffers when the working set is much larger than the cache size and causes cache thrashing. This is remedied by inserting lines with an RRPV value of maxRRPV most of the time, and inserting lines with an RRPV value of maxRRPV - 1 randomly with a low probability. This causes some lines to "stick" in the cache, and helps prevent thrashing. BRRIP degrades performance, however, on non-thrashing accesses. SRRIP performs best when the working set is smaller than the cache, and BRRIP performs best when the working set is larger than the cache.
Dynamic RRIP (DRRIP).
DRRIP uses set dueling to select whether to use SRRIP or BRRIP. It dedicates a few sets (typically 32) to use SRRIP and another few to use BRRIP, and uses a policy counter which monitors set performance to determine which policy will be used by the rest of the cache.
Policies approximating Bélády's algorithm.
Bélády's algorithm is the optimal cache replacement policy, but it requires knowledge of the future to evict lines that will be reused farthest in the future. A number of replacement policies have been proposed which attempt to predict future reuse distances from past access patterns, allowing them to approximate the optimal replacement policy. Some of the best-performing cache replacement policies attempt to imitate Bélády's algorithm.
Hawkeye.
Hawkeye attempts to emulate Bélády's algorithm by using past accesses by a PC to predict whether the accesses it produces generate cache-friendly (used later) or cache-averse accesses (not used later). It samples a number of non-aligned cache sets, uses a history of length formula_5 and emulates Bélády's algorithm on these accesses. This allows the policy to determine which lines should have been cached and which should not, predicting whether an instruction is cache-friendly or cache-averse. This data is then fed into an RRIP; accesses from cache-friendly instructions have a lower RRPV value (likely to be evicted later), and accesses from cache-averse instructions have a higher RRPV value (likely to be evicted sooner). The RRIP backend makes the eviction decisions. The sampled cache and OPT generator set the initial RRPV value of the inserted cache lines. Hawkeye won the CRC2 cache championship in 2017, and Harmony is an extension of Hawkeye which improves prefetching performance.
Mockingjay.
Mockingjay tries to improve on Hawkeye in several ways. It drops the binary prediction, allowing it to make more fine-grained decisions about which cache lines to evict, and leaves the decision about which cache line to evict for when more information is available.
Mockingjay keeps a sampled cache of unique accesses, the PCs that produced them, and their timestamps. When a line in the sampled cache is accessed again, the time difference will be sent to the reuse distance predictor. The RDP uses temporal difference learning, where the new RDP value will be increased or decreased by a small number to compensate for outliers; the number is calculated as formula_6. If the value has not been initialized, the observed reuse distance is inserted directly. If the sampled cache is full and a line needs to be discarded, the RDP is instructed that the PC that last accessed it produces streaming accesses.
On an access or insertion, the estimated time of reuse (ETR) for this line is updated to reflect the predicted reuse distance. On a cache miss, the line with the highest ETR value is evicted. Mockingjay has results which are close to the optimal Bélády's algorithm.
Machine-learning policies.
A number of policies have attempted to use perceptrons, Markov chains or other types of machine learning to predict which line to evict. Learning-augmented algorithms also exist for cache replacement.
Other policies.
Low inter-reference recency set (LIRS).
LIRS is a page replacement algorithm with better performance than LRU and other, newer replacement algorithms. Reuse distance is a metric for dynamically ranking accessed pages to make a replacement decision. LIRS addresses the limits of LRU by using recency to evaluate inter-reference recency (IRR) to make a replacement decision.
In the diagram, X indicates that a block is accessed at a particular time. If block A1 is accessed at time 1, its recency will be 0; this is the first-accessed block and the IRR will be 1, since it predicts that A1 will be accessed again in time 3. In time 2, since A4 is accessed, the recency will become 0 for A4 and 1 for A1; A4 is the most recently accessed object, and the IRR will become 4. At time 10, the LIRS algorithm will have two sets: an LIR set = {A1, A2} and an HIR set = {A3, A4, A5}. At time 10, if there is access to A4 a miss occurs; LIRS will evict A5 instead of A2 because of its greater recency.
Adaptive replacement cache.
Adaptive replacement cache (ARC) constantly balances between LRU and LFU to improve the combined result. It improves SLRU by using information about recently-evicted cache items to adjust the size of the protected and probationary segments to make the best use of available cache space.
Clock with adaptive replacement.
Clock with adaptive replacement (CAR) combines the advantages of ARC and Clock. CAR performs comparably to ARC, and outperforms LRU and Clock. Like ARC, CAR is self-tuning and requires no user-specified parameters.
Multi-queue.
The multi-queue replacement (MQ) algorithm was developed to improve the performance of a second-level buffer cache, such as a server buffer cache, and was introduced in a paper by Zhou, Philbin, and Li. The MQ cache contains an "m" number of LRU queues: Q0, Q1, ..., Q"m"-1. The value of "m" represents a hierarchy based on the lifetime of all blocks in that queue.
Pannier.
Pannier is a container-based flash caching mechanism which identifies containers whose blocks have variable access patterns. Pannier has a priority-queue-based survival-queue structure to rank containers based on their survival time, which is proportional to live data in the container.
Static analysis.
Static analysis determines which accesses are cache hits or misses to indicate the worst-case execution time of a program. An approach to analyzing properties of LRU caches is to give each block in the cache an "age" (0 for the most recently used) and compute intervals for possible ages. This analysis can be refined to distinguish cases where the same program point is accessible by paths that result in misses or hits. An efficient analysis may be obtained by abstracting sets of cache states by antichains which are represented by compact binary decision diagrams.
LRU static analysis does not extend to pseudo-LRU policies. According to computational complexity theory, static-analysis problems posed by pseudo-LRU and FIFO are in higher complexity classes than those for LRU.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T = m \\times T_m + T_h + E"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "T_m"
},
{
"math_id": 3,
"text": "T_h"
},
{
"math_id": 4,
"text": "E"
},
{
"math_id": 5,
"text": "8 \\times \\text{the cache size}"
},
{
"math_id": 6,
"text": "w = \\min\\left(1, \\frac{\\text{timestamp difference}}{16}\\right)"
}
] | https://en.wikipedia.org/wiki?curid=954281 |
954307 | Vibrating structure gyroscope | Inexpensive gyroscope based on vibration
A vibrating structure gyroscope (VSG), defined by the IEEE as a Coriolis vibratory gyroscope (CVG), is a gyroscope that uses a vibrating structure to determine the rate of rotation. A vibrating structure gyroscope functions much like the halteres of flies (insects in the order Diptera).
The underlying physical principle is that a vibrating object tends to continue vibrating in the same plane even if its support rotates. The Coriolis effect causes the object to exert a force on its support, and by measuring this force the rate of rotation can be determined.
Vibrating structure gyroscopes are simpler and cheaper than conventional rotating gyroscopes of similar accuracy. Inexpensive vibrating structure gyroscopes manufactured with micro-electromechanical systems (MEMS) technology are widely used in smartphones, gaming devices, cameras and many other applications.
Theory of operation.
Consider two proof masses vibrating in plane (as in the MEMS gyro) at frequency formula_0. The Coriolis effect induces an acceleration on the proof masses equal to formula_1, where formula_2 is a velocity and formula_3 is an angular rate of rotation. The in-plane velocity of the proof masses is given by formula_4, if the in-plane position is given by formula_5. The out-of-plane motion formula_6, induced by rotation, is given by:
formula_7
where
formula_8 is the mass of the proof mass,
formula_9 is the spring constant in the out-of-plane direction,
formula_3 is the magnitude of the rotation vector, which lies in the plane of, and is perpendicular to, the driven proof-mass motion.
By measuring formula_6, we can thus determine the rate of rotation formula_3.
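A small numerical illustration of this relation is given below; the proof-mass mass, spring constant, drive amplitude, drive frequency and rotation rate are all assumed values chosen only to show the order of magnitude involved, not figures from the article.

```python
import numpy as np

# Illustrative (assumed) values for a MEMS proof mass:
m       = 1e-9                   # proof-mass mass, kg
k_op    = 100.0                  # out-of-plane spring constant, N/m
X_ip    = 5e-6                   # in-plane drive amplitude, m
omega_r = 2 * np.pi * 10e3       # drive frequency, rad/s (10 kHz)
Omega   = np.deg2rad(100.0)      # applied rotation rate, rad/s (100 deg/s)

# Peak of y_op(t) = (2 m Omega X_ip omega_r / k_op) cos(omega_r t)
y_peak = 2 * m * Omega * X_ip * omega_r / k_op
print(f"peak out-of-plane displacement: {y_peak:.3e} m")
```

The resulting displacement is tiny (well below a nanometre for these numbers), which is why MEMS gyroscopes sense it capacitively or resonantly rather than directly.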
Implementations.
Cylindrical resonator gyroscope (CRG).
This type of gyroscope was developed by GEC Marconi and Ferranti in the 1980s using metal alloys with attached piezoelectric elements and a single-piece piezoceramic design. Subsequently, in the 90s, CRGs with magneto-electric excitation and readout were produced by American-based Inertial Engineering, Inc. in California, and piezoceramic variants by Watson Industries. A recently patented variant by Innalabs uses a cylindrical design resonator made from Elinvar-type alloy with piezoceramic elements for excitation and pickoff at its bottom.
This breakthrough technology gave a substantially increased product life (MTBF > 500,000 hours); with its shock resistance (>300G), it should qualify for "tactical" (mid-accuracy) applications.
The resonator is operated in its second-order resonant mode. The Q-factor is usually about 20,000; that predetermines its noise and angular random walks. Standing waves are elliptically-shaped oscillations with four antinodes and four nodes located circumferentially along the rim.
The angle between an antinode and an adjacent node is 45 degrees. One of the elliptical resonant modes is excited to a prescribed amplitude. When the device rotates about its sensitive axis (along its inner stem), the resulting Coriolis forces acting on the resonator's vibrating mass elements excite the second resonant mode. The angle between the major axes of the two modes is also 45 degrees.
A closed loop drives the second resonant mode to zero, and the force required to null this mode is proportional to the input rotation rate. This control loop is designated the force-rebalanced mode.
Piezoelectric elements on the resonator produce forces and sense induced motions. This electromechanical system provides the low output noise and large dynamic range that demanding applications require, but it is susceptible to intense acoustic noise and high overloads.
Piezoelectric gyroscopes.
A piezoelectric material can be induced to vibrate, and lateral motion due to Coriolis force can be measured to produce a signal related to the rate of rotation.
Tuning fork gyroscope.
This type of gyroscope uses a pair of test masses driven to resonance. Their displacement from the plane of oscillation is measured to produce a signal related to the system's rate of rotation.
Frederick William Meredith registered a patent for such a device in 1942 while working at the Royal Aircraft Establishment. Further development was carried out at the RAE in 1958 by G.H. Hunt and A.E.W. Hobbs, who demonstrated drift of less than 1°/h, or 2.78×10⁻⁴ °/s.
Modern variants of tactical gyros use doubled tuning forks such as those produced by American manufacturer Systron Donner in California and French manufacturer / Safran Group.
Wine-glass resonator.
Also called a hemispherical resonator gyroscope or HRG, a wine-glass resonator uses a thin solid-state hemisphere anchored by a thick stem. The hemisphere with its stem is driven to flexural resonance and the nodal points are measured to detect rotation. There are two basic variants of such a system: one based on a rate regime of operation ("force-to-rebalance mode") and another variant based on an integrating regime of operation ("whole-angle mode"). Usually, the latter one is used in combination with a controlled parametric excitation. It is possible to use both regimes with the same hardware, which is a feature unique to these gyroscopes.
For a single-piece design (i.e., the hemispherical cup and stem(s) form a monolithic part) made from high-purity quartz glass, it is possible to reach a Q-factor greater than 30–50 million in vacuum, so the corresponding random walks are extremely low. The Q is limited by the coating, an extremely thin film of gold or platinum, and by fixture losses. Such resonators have to be fine-tuned by ion-beam micro-erosion of the glass or by laser ablation. Engineers and researchers in several countries have been working on further improvements of these sophisticated state-of-the-art technologies.
Safran and Northrop Grumman are the major manufacturers of HRG.
Vibrating wheel gyroscope.
A wheel is driven to rotate a fraction of a full turn about its axis. The tilt of the wheel is measured to produce a signal related to the rate of rotation.
MEMS gyroscopes.
Inexpensive vibrating structure microelectromechanical systems (MEMS) gyroscopes have become widely available. These are packaged similarly to other integrated circuits and may provide either analogue or digital outputs. In many cases, a single part includes gyroscopic sensors for multiple axes. Some parts incorporate multiple gyroscopes and accelerometers (or multiple-axis gyroscopes and accelerometers), to achieve output that has six full degrees of freedom. These units are called inertial measurement units, or IMUs. Panasonic, Robert Bosch GmbH, InvenSense, Seiko Epson, Sensonor, Hanking Electronics, STMicroelectronics, Freescale Semiconductor, and Analog Devices are major manufacturers.
Internally, MEMS gyroscopes use micro-lithographically constructed versions of one or more of the mechanisms outlined above (tuning forks, vibrating wheels, or resonant solids of various designs, i.e., similar to TFG, CRG, or HRG mentioned above).
MEMS gyroscopes are used in automotive roll-over prevention and airbag systems, image stabilization, and have many other potential applications.
Applications of gyroscopes.
Automotive.
Automotive yaw sensors can be built around vibrating structure gyroscopes. These are used to detect error states in yaw compared to a predicted response when connected as an input to electronic stability control systems in conjunction with a steering wheel sensor. Advanced systems could conceivably offer rollover detection based on a second VSG but it is cheaper to add longitudinal and vertical accelerometers to the existing lateral one to this end.
Entertainment.
One Nintendo Game Boy Advance game uses a piezoelectric gyroscope to detect rotational movement. The Sony SIXAXIS PS3 controller uses a single MEMS gyroscope to measure the sixth axis (yaw). The Nintendo Wii MotionPlus accessory uses multi-axis MEMS gyroscopes provided by InvenSense to augment the motion-sensing capabilities of the Wii Remote. Most modern smartphones and gaming devices also feature MEMS gyroscopes.
Hobbies.
Vibrating structure gyroscopes are commonly used in radio-controlled helicopters to help control the helicopter's tail rotor and in radio-controlled airplanes to help keep the attitude steady during flight. They are also used in multirotor flight controllers, since multirotors are inherently aerodynamically unstable and cannot stay airborne without electronic stabilization.
Industrial robotics.
Epson Robots uses a quartz MEMS gyroscope, called QMEMS, to detect and control vibrations on their robots. This helps the robots position their end effectors with high precision during high-speed and fast-deceleration motion.
Photography.
Many image stabilization systems on video and still cameras employ vibrating structure gyroscopes.
Spacecraft orientation.
The oscillation can also be induced and controlled in the vibrating structure gyroscope for the positioning of spacecraft such as "Cassini–Huygens". These small hemispherical resonator gyroscopes made of quartz glass operate in vacuum. There are also prototypes of elastically decoupled cylindrical resonator gyroscopes (CRG) made from high-purity single-crystalline sapphire. High-purity leuko-sapphire has a Q-factor an order of magnitude higher than the quartz glass used for HRGs, but the material is hard and anisotropic. They provide accurate 3-axis positioning of the spacecraft and are highly reliable over the years, as they have no moving parts.
Other.
The Segway Human Transporter uses a vibrating structure gyroscope made by Silicon Sensing Systems to stabilize the operator platform.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\omega_r"
},
{
"math_id": 1,
"text": "a_\\mathrm{c} = 2(\\Omega\\times v)"
},
{
"math_id": 2,
"text": "v"
},
{
"math_id": 3,
"text": "\\Omega"
},
{
"math_id": 4,
"text": "X_\\text{ip} \\omega_r \\cos(\\omega_r t)"
},
{
"math_id": 5,
"text": "X_\\text{ip} \\sin(\\omega_r t)"
},
{
"math_id": 6,
"text": "y_\\text{op}"
},
{
"math_id": 7,
"text": "y_\\text{op} = \\frac{F_c}{k_\\text{op}} = \\frac{1}{k_\\text{op}}2m\\Omega X_\\text{ip} \\omega_r \\cos(\\omega_r t)"
},
{
"math_id": 8,
"text": "m"
},
{
"math_id": 9,
"text": " k_\\text{op}"
}
] | https://en.wikipedia.org/wiki?curid=954307 |
954328 | Ridge regression | Regularization technique for ill-posed problems
Ridge regression is a method of estimating the coefficients of multiple-regression models in scenarios where the independent variables are highly correlated. It has been used in many fields including econometrics, chemistry, and engineering. Also known as Tikhonov regularization, named for Andrey Tikhonov, it is a method of regularization of ill-posed problems. It is particularly useful to mitigate the problem of multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters. In general, the method provides improved efficiency in parameter estimation problems in exchange for a tolerable amount of bias (see bias–variance tradeoff).
The theory was first introduced by Hoerl and Kennard in 1970 in their "Technometrics" papers "Ridge regressions: biased estimation of nonorthogonal problems" and "Ridge regressions: applications in nonorthogonal problems". This was the result of ten years of research into the field of ridge analysis.
Ridge regression was developed as a possible solution to the imprecision of least squares estimators when linear regression models have some multicollinear (highly correlated) independent variables—by creating a ridge regression estimator (RR). This provides a more precise estimate of the ridge parameters, as its variance and mean squared error are often smaller than those of the least squares estimators previously derived.
Overview.
In the simplest case, the problem of a near-singular moment matrix formula_0 is alleviated by adding positive elements to the diagonals, thereby decreasing its condition number. Analogous to the ordinary least squares estimator, the simple ridge estimator is then given by
formula_1
where formula_2 is the regressand, formula_3 is the design matrix, formula_4 is the identity matrix, and the ridge parameter formula_5 serves as the constant shifting the diagonals of the moment matrix. It can be shown that this estimator is the solution to the least squares problem subject to the constraint formula_7, which can be expressed as a Lagrangian:
formula_8
which shows that formula_6 is nothing but the Lagrange multiplier of the constraint. Typically, formula_6 is chosen according to a heuristic criterion, so that the constraint will not be satisfied exactly. Specifically in the case of formula_9, in which the constraint is non-binding, the ridge estimator reduces to ordinary least squares. A more general approach to Tikhonov regularization is discussed below.
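A minimal NumPy sketch of the simple ridge estimator above is shown below; the data are synthetic and purely illustrative (two nearly collinear regressors), and the penalty values are arbitrary.

```python
import numpy as np

def ridge_estimator(X, y, lam):
    """Simple ridge estimator (X'X + lambda*I)^(-1) X'y, as defined above."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Synthetic, purely illustrative data: two nearly collinear regressors.
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=100)

print(ridge_estimator(X, y, lam=0.0))    # ordinary least squares: unstable coefficients
print(ridge_estimator(X, y, lam=10.0))   # ridge: coefficients shrunk and stabilized
```

With the penalty set to zero the estimator reduces to ordinary least squares, as noted above; with a positive penalty the near-singular moment matrix is better conditioned and the two coefficients are pulled toward similar, stable values.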
History.
Tikhonov regularization was invented independently in many different contexts.
It became widely known through its application to integral equations in the works of Andrey Tikhonov and David L. Phillips. Some authors use the term Tikhonov–Phillips regularization.
The finite-dimensional case was expounded by Arthur E. Hoerl, who took a statistical approach, and by Manus Foster, who interpreted this method as a Wiener–Kolmogorov (Kriging) filter. Following Hoerl, it is known in the statistical literature as ridge regression, named after ridge analysis ("ridge" refers to the path from the constrained maximum).
Tikhonov regularization.
Suppose that for a known matrix formula_10 and vector formula_11, we wish to find a vector formula_12 such that
formula_13
where formula_12 and formula_11 may be of different sizes and formula_10 may be non-square.
The standard approach is ordinary least squares linear regression. However, if no formula_12 satisfies the equation or more than one formula_12 does—that is, the solution is not unique—the problem is said to be ill posed. In such cases, ordinary least squares estimation leads to an overdetermined, or more often an underdetermined system of equations. Most real-world phenomena have the effect of low-pass filters in the forward direction where formula_10 maps formula_12 to formula_11. Therefore, in solving the inverse-problem, the inverse mapping operates as a high-pass filter that has the undesirable tendency of amplifying noise (eigenvalues / singular values are largest in the reverse mapping where they were smallest in the forward mapping). In addition, ordinary least squares implicitly nullifies every element of the reconstructed version of formula_12 that is in the null-space of formula_10, rather than allowing for a model to be used as a prior for formula_12.
Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as
formula_14
where formula_15 is the Euclidean norm.
In order to give preference to a particular solution with desirable properties, a regularization term can be included in this minimization:
formula_16
for some suitably chosen Tikhonov matrix formula_17. In many cases, this matrix is chosen as a scalar multiple of the identity matrix (formula_18), giving preference to solutions with smaller norms; this is known as "L"2 regularization. In other cases, high-pass operators (e.g., a difference operator or a weighted Fourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous.
This regularization improves the conditioning of the problem, thus enabling a direct numerical solution. An explicit solution, denoted by formula_19, is given by
formula_20
The effect of regularization may be varied by the scale of matrix formula_21. For formula_22 this reduces to the unregularized least-squares solution, provided that (AᵀA)⁻¹ exists.
"L"2 regularization is used in many contexts aside from linear regression, such as classification with logistic regression or support vector machines, and matrix factorization.
Application to existing fit results.
Since Tikhonov regularization simply adds a quadratic term to the objective function in optimization problems, it is possible to apply it after the unregularised optimisation has taken place.
E.g., if the above problem with formula_22 yields the solution formula_23,
the solution in the presence of formula_24 can be expressed as:
formula_25
with the "regularisation matrix" formula_26.
If the parameter fit comes with a covariance matrix of the estimated parameter uncertainties formula_27,
then the regularisation matrix will be
formula_28
and the regularised result will have a new covariance
formula_29
In the context of arbitrary likelihood fits, this is valid, as long as the quadratic approximation of the likelihood function is valid. This means that, as long as the perturbation from the unregularised result is small, one can regularise any result that is presented as a best fit point with a covariance matrix. No detailed knowledge of the underlying likelihood function is needed.
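The sketch below applies this post-hoc regularisation using the expressions for the matrix B and the updated covariance given above; the two-parameter fit result, its covariance and the penalty strength are hypothetical numbers chosen for illustration.

```python
import numpy as np

def regularise_fit(x0, V0, Gamma):
    """Apply Tikhonov regularisation to an existing unregularised fit result x0
    with covariance V0, using B = (V0^-1 + Gamma'Gamma)^-1 V0^-1 as above."""
    V0_inv = np.linalg.inv(V0)
    B = np.linalg.inv(V0_inv + Gamma.T @ Gamma) @ V0_inv
    x = B @ x0                 # regularised best-fit point
    V = B @ V0 @ B.T           # regularised covariance
    return x, V

# Hypothetical two-parameter fit with strongly correlated uncertainties:
x0 = np.array([2.3, -1.7])
V0 = np.array([[4.0, 3.8],
               [3.8, 4.0]])
Gamma = 0.5 * np.eye(2)        # assumed L2 penalty of strength 0.5 on both parameters

x_reg, V_reg = regularise_fit(x0, V0, Gamma)
print(x_reg, np.sqrt(np.diag(V_reg)))
```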
Generalized Tikhonov regularization.
For general multivariate normal distributions for formula_30 and the data error, one can apply a transformation of the variables to reduce to the case above. Equivalently, one can seek an formula_30 to minimize
formula_31
where we have used formula_32 to stand for the weighted norm squared formula_33 (compare with the Mahalanobis distance). In the Bayesian interpretation formula_34 is the inverse covariance matrix of formula_35, formula_36 is the expected value of formula_30, and formula_37 is the inverse covariance matrix of formula_30. The Tikhonov matrix is then given as a factorization of the matrix formula_38 (e.g. the Cholesky factorization) and is considered a whitening filter.
This generalized problem has an optimal solution formula_39 which can be written explicitly using the formula
formula_40
or equivalently, when "Q" is not a null matrix:
formula_41
Lavrentyev regularization.
In some situations, one can avoid using the transpose formula_42, as proposed by Mikhail Lavrentyev. For example, if formula_10 is symmetric positive definite, i.e. formula_43, so is its inverse formula_44, which can thus be used to set up the weighted norm squared formula_45 in the generalized Tikhonov regularization, leading to minimizing
formula_46
or, equivalently up to a constant term,
formula_47
This minimization problem has an optimal solution formula_39 which can be written explicitly using the formula
formula_48
which is nothing but the solution of the generalized Tikhonov problem where formula_49
The Lavrentyev regularization, if applicable, is advantageous to the original Tikhonov regularization, since the Lavrentyev matrix formula_50 can be better conditioned, i.e., have a smaller condition number, compared to the Tikhonov matrix formula_51
Regularization in Hilbert space.
Typically discrete linear ill-conditioned problems result from discretization of integral equations, and one can formulate a Tikhonov regularization in the original infinite-dimensional context. In the above we can interpret formula_10 as a compact operator on Hilbert spaces, and formula_52 and formula_53 as elements in the domain and range of formula_10. The operator formula_54 is then a self-adjoint bounded invertible operator.
Relation to singular-value decomposition and Wiener filter.
With formula_18, this least-squares solution can be analyzed in a special way using the singular-value decomposition. Given the singular value decomposition
formula_55
with singular values formula_56, the Tikhonov regularized solution can be expressed as
formula_57
where formula_58 has diagonal values
formula_59
and is zero elsewhere. This demonstrates the effect of the Tikhonov parameter on the condition number of the regularized problem. For the generalized case, a similar representation can be derived using a generalized singular-value decomposition.
Finally, it is related to the Wiener filter:
formula_60
where the Wiener weights are formula_61 and formula_62 is the rank of formula_10.
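A sketch of this SVD form of the Tikhonov solution is given below, using the filter factors above; the ill-conditioned test matrix is an arbitrary Vandermonde matrix and the noise level is assumed, both chosen only for illustration.

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """Tikhonov solution for Gamma = alpha*I, written with the SVD filter
    factors f_i = sigma_i^2 / (sigma_i^2 + alpha^2) (a sketch)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + alpha**2)          # Wiener-style weights
    return Vt.T @ (f * (U.T @ b) / s)     # sum_i f_i (u_i'b / sigma_i) v_i

# Hypothetical ill-conditioned problem: nearly collinear Vandermonde columns.
rng = np.random.default_rng(1)
A = np.vander(np.linspace(0, 1, 20), 8, increasing=True)
x_true = rng.normal(size=8)
b = A @ x_true + 1e-3 * rng.normal(size=20)

print(np.linalg.norm(tikhonov_svd(A, b, alpha=1e-2) - x_true))                # regularized
print(np.linalg.norm(np.linalg.lstsq(A, b, rcond=None)[0] - x_true))          # unregularized, for comparison
```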
Determination of the Tikhonov factor.
The optimal regularization parameter formula_63 is usually unknown and often in practical problems is determined by an "ad hoc" method. A possible approach relies on the Bayesian interpretation described below. Other approaches include the discrepancy principle, cross-validation, the L-curve method, restricted maximum likelihood and the unbiased predictive risk estimator. Grace Wahba proved that the optimal parameter, in the sense of leave-one-out cross-validation, minimizes
formula_64
where formula_65 is the residual sum of squares, and formula_66 is the effective number of degrees of freedom.
Using the previous SVD decomposition, we can simplify the above expression:
formula_67
formula_68
and
formula_69
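Because these expressions write everything in terms of the SVD quantities, the criterion can be scanned cheaply over a grid of candidate regularization parameters. The sketch below does this for a hypothetical ill-conditioned Vandermonde problem; all data are made up for illustration.

```python
import numpy as np

def gcv_score(alpha, U, s, b, m):
    """Wahba's criterion G = RSS / tau^2, evaluated with the SVD expressions above."""
    beta = U.T @ b                          # projections u_i' b
    filt = alpha**2 / (s**2 + alpha**2)
    rss0 = b @ b - beta @ beta              # component of b outside the column space
    rss = rss0 + np.sum((filt * beta)**2)
    tau = m - np.sum(s**2 / (s**2 + alpha**2))
    return rss / tau**2

# Hypothetical ill-conditioned problem (assumed data, for illustration only):
rng = np.random.default_rng(1)
X = np.vander(np.linspace(0, 1, 20), 8, increasing=True)
y = X @ rng.normal(size=8) + 1e-3 * rng.normal(size=20)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
alphas = np.logspace(-8, 1, 200)
best = min(alphas, key=lambda a: gcv_score(a, U, s, y, m=len(y)))
print(best)   # parameter value minimizing the leave-one-out criterion on this grid
```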
Relation to probabilistic formulation.
The probabilistic formulation of an inverse problem introduces (when all uncertainties are Gaussian) a covariance matrix formula_70 representing the "a priori" uncertainties on the model parameters, and a covariance matrix formula_71 representing the uncertainties on the observed parameters. In the special case when these two matrices are diagonal and isotropic, formula_72 and formula_73, the equations of inverse theory reduce to the equations above, with formula_74.
Bayesian interpretation.
Although at first the choice of the solution to this regularized problem may look artificial, and indeed the matrix formula_21 seems rather arbitrary, the process can be justified from a Bayesian point of view. Note that for an ill-posed problem one must necessarily introduce some additional assumptions in order to get a unique solution. Statistically, the prior probability distribution of formula_52 is sometimes taken to be a multivariate normal distribution. For simplicity here, the following assumptions are made: the means are zero; their components are independent; the components have the same standard deviation formula_75. The data are also subject to errors, and the errors in formula_53 are also assumed to be independent with zero mean and standard deviation formula_76. Under these assumptions the Tikhonov-regularized solution is the most probable solution given the data and the "a priori" distribution of formula_52, according to Bayes' theorem.
If the assumption of normality is replaced by assumptions of homoscedasticity and uncorrelatedness of errors, and if one still assumes zero mean, then the Gauss–Markov theorem entails that the solution is the minimum-variance linear unbiased estimator.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{X}^\\mathsf{T}\\mathbf{X}"
},
{
"math_id": 1,
"text": "\\hat{\\beta}_{R} = \\left(\\mathbf{X}^{\\mathsf{T}} \\mathbf{X} + \\lambda \\mathbf{I}\\right)^{-1} \\mathbf{X}^{\\mathsf{T}} \\mathbf{y}"
},
{
"math_id": 2,
"text": "\\mathbf{y}"
},
{
"math_id": 3,
"text": "\\mathbf{X}"
},
{
"math_id": 4,
"text": "\\mathbf{I}"
},
{
"math_id": 5,
"text": "\\lambda \\geq 0"
},
{
"math_id": 6,
"text": "\\lambda"
},
{
"math_id": 7,
"text": "\\beta^\\mathsf{T}\\beta = c"
},
{
"math_id": 8,
"text": "\\min_{\\beta} \\, \\left(\\mathbf{y} - \\mathbf{X} \\beta\\right)^\\mathsf{T} \\left(\\mathbf{y} - \\mathbf{X} \\beta\\right) + \\lambda \\left(\\beta^\\mathsf{T}\\beta - c\\right)"
},
{
"math_id": 9,
"text": "\\lambda = 0"
},
{
"math_id": 10,
"text": "A"
},
{
"math_id": 11,
"text": "\\mathbf{b}"
},
{
"math_id": 12,
"text": "\\mathbf{x}"
},
{
"math_id": 13,
"text": "A\\mathbf{x} = \\mathbf{b},"
},
{
"math_id": 14,
"text": "\\left\\|A\\mathbf{x} - \\mathbf{b}\\right\\|_2^2,"
},
{
"math_id": 15,
"text": "\\|\\cdot\\|_2"
},
{
"math_id": 16,
"text": "\\left\\|A\\mathbf{x} - \\mathbf{b}\\right\\|_2^2 + \\left\\|\\Gamma \\mathbf{x}\\right\\|_2^2"
},
{
"math_id": 17,
"text": "\\Gamma "
},
{
"math_id": 18,
"text": "\\Gamma = \\alpha I"
},
{
"math_id": 19,
"text": "\\hat{x}"
},
{
"math_id": 20,
"text": "\\hat{x} = \\left(A^\\mathsf{T} A + \\Gamma^\\mathsf{T} \\Gamma\\right)^{-1} A^\\mathsf{T} \\mathbf{b}."
},
{
"math_id": 21,
"text": "\\Gamma"
},
{
"math_id": 22,
"text": "\\Gamma = 0"
},
{
"math_id": 23,
"text": "\\hat{x}_0"
},
{
"math_id": 24,
"text": "\\Gamma \\ne 0"
},
{
"math_id": 25,
"text": "\\hat{x} = B \\hat{x}_0,"
},
{
"math_id": 26,
"text": "B = \\left(A^\\mathsf{T} A + \\Gamma^\\mathsf{T} \\Gamma\\right)^{-1} A^\\mathsf{T} A"
},
{
"math_id": 27,
"text": "V_0"
},
{
"math_id": 28,
"text": "B = (V_0^{-1} + \\Gamma^\\mathsf{T}\\Gamma)^{-1} V_0^{-1},"
},
{
"math_id": 29,
"text": "V = B V_0 B^\\mathsf{T}."
},
{
"math_id": 30,
"text": "\\mathbf x"
},
{
"math_id": 31,
"text": "\\left\\|A \\mathbf x - \\mathbf b\\right\\|_P^2 + \\left\\|\\mathbf x - \\mathbf x_0\\right\\|_Q^2,"
},
{
"math_id": 32,
"text": "\\left\\|\\mathbf{x}\\right\\|_Q^2"
},
{
"math_id": 33,
"text": "\\mathbf{x}^\\mathsf{T} Q \\mathbf{x}"
},
{
"math_id": 34,
"text": "P"
},
{
"math_id": 35,
"text": "\\mathbf b"
},
{
"math_id": 36,
"text": "\\mathbf x_0"
},
{
"math_id": 37,
"text": "Q"
},
{
"math_id": 38,
"text": "Q = \\Gamma^\\mathsf{T} \\Gamma"
},
{
"math_id": 39,
"text": "\\mathbf x^*"
},
{
"math_id": 40,
"text": "\\mathbf x^* = \\left(A^\\mathsf{T} PA + Q\\right)^{-1} \\left(A^\\mathsf{T} P \\mathbf{b} + Q \\mathbf{x}_0\\right),"
},
{
"math_id": 41,
"text": "\\mathbf x^* = \\mathbf x_0 + \\left(A^\\mathsf{T} P A + Q \\right)^{-1} \\left(A^\\mathsf{T} P \\left(\\mathbf b - A \\mathbf x_0\\right)\\right)."
},
{
"math_id": 42,
"text": "A^\\mathsf{T}"
},
{
"math_id": 43,
"text": "A = A^\\mathsf{T} > 0"
},
{
"math_id": 44,
"text": "A^{-1}"
},
{
"math_id": 45,
"text": "\\left\\|\\mathbf x\\right\\|_P^2 = \\mathbf x^\\mathsf{T} A^{-1} \\mathbf x"
},
{
"math_id": 46,
"text": "\\left\\|A \\mathbf x - \\mathbf b\\right\\|_{A^{-1}}^2 + \\left\\|\\mathbf x - \\mathbf x_0 \\right\\|_Q^2"
},
{
"math_id": 47,
"text": "\\mathbf x^\\mathsf{T} \\left(A+Q\\right) \\mathbf x - 2 \\mathbf x^\\mathsf{T} \\left(\\mathbf b + Q \\mathbf x_0\\right)."
},
{
"math_id": 48,
"text": "\\mathbf x^* = \\left(A + Q\\right)^{-1} \\left(\\mathbf b + Q \\mathbf x_0\\right),"
},
{
"math_id": 49,
"text": "A = A^\\mathsf{T} = P^{-1}."
},
{
"math_id": 50,
"text": "A + Q"
},
{
"math_id": 51,
"text": "A^\\mathsf{T} A + \\Gamma^\\mathsf{T} \\Gamma."
},
{
"math_id": 52,
"text": "x"
},
{
"math_id": 53,
"text": "b"
},
{
"math_id": 54,
"text": "A^* A + \\Gamma^\\mathsf{T} \\Gamma "
},
{
"math_id": 55,
"text": "A = U \\Sigma V^\\mathsf{T}"
},
{
"math_id": 56,
"text": "\\sigma _i"
},
{
"math_id": 57,
"text": "\\hat{x} = V D U^\\mathsf{T} b,"
},
{
"math_id": 58,
"text": "D"
},
{
"math_id": 59,
"text": "D_{ii} = \\frac{\\sigma_i}{\\sigma_i^2 + \\alpha^2}"
},
{
"math_id": 60,
"text": "\\hat{x} = \\sum _{i=1}^q f_i \\frac{u_i^\\mathsf{T} b}{\\sigma_i} v_i,"
},
{
"math_id": 61,
"text": "f_i = \\frac{\\sigma _i^2}{\\sigma_i^2 + \\alpha^2}"
},
{
"math_id": 62,
"text": "q"
},
{
"math_id": 63,
"text": "\\alpha"
},
{
"math_id": 64,
"text": "G = \\frac{\\operatorname{RSS}}{\\tau^2}\n= \\frac{\\left\\|X \\hat{\\beta} - y\\right\\|^2}{ \\left[\\operatorname{Tr}\\left(I - X\\left(X^\\mathsf{T} X + \\alpha^2 I\\right)^{-1} X^\\mathsf{T}\\right)\\right]^2},"
},
{
"math_id": 65,
"text": "\\operatorname{RSS}"
},
{
"math_id": 66,
"text": "\\tau"
},
{
"math_id": 67,
"text": "\\operatorname{RSS} = \\left\\| y - \\sum_{i=1}^q (u_i' b) u_i \\right\\|^2 + \\left\\| \\sum _{i=1}^q \\frac{\\alpha^2}{\\sigma_i^2 + \\alpha^2} (u_i' b) u_i \\right\\|^2,"
},
{
"math_id": 68,
"text": "\\operatorname{RSS} = \\operatorname{RSS}_0 + \\left\\| \\sum_{i=1}^q \\frac{\\alpha^2}{\\sigma_i^2 + \\alpha^2} (u_i' b) u_i \\right\\|^2,"
},
{
"math_id": 69,
"text": "\\tau = m - \\sum_{i=1}^q \\frac{\\sigma_i^2}{\\sigma_i^2 + \\alpha^2}\n= m - q + \\sum_{i=1}^q \\frac{\\alpha^2}{\\sigma _i^2 + \\alpha^2}."
},
{
"math_id": 70,
"text": " C_M"
},
{
"math_id": 71,
"text": " C_D"
},
{
"math_id": 72,
"text": " C_M = \\sigma_M^2 I "
},
{
"math_id": 73,
"text": " C_D = \\sigma_D^2 I "
},
{
"math_id": 74,
"text": " \\alpha = {\\sigma_D}/{\\sigma_M} "
},
{
"math_id": 75,
"text": "\\sigma _x"
},
{
"math_id": 76,
"text": "\\sigma _b"
}
] | https://en.wikipedia.org/wiki?curid=954328 |